Cloud Gaming: Can inventive mathematics overcome infrastructure woes?

Shortly after yesterday's post on current-gen console lifespans, Microsoft published the findings of a research project named DeLorean. Within it is a set of algorithms and techniques intended to push cloud gaming to a realistic point.

According to WPCentral’s Sam Sabri,

“Most gamers deem the responsiveness of their game unacceptable when the latency exceeds the 100ms threshold. Something that isn’t that uncommon with most cellular and Wi-Fi networks.”

While this certainly holds true in multiplayer situations, the latency being referenced applies to external data. Input from a controller was previously unaffected by network latency because of the direct connection between controller and machine. With cloud gaming, however, controller inputs are sent over internet cables, switches, and routers to a machine that may be several thousand miles away. The computation occurs there, and the video stream is sent back along the same path. Typical latency from Comcast Miami to the nearest Microsoft server farm is about 50 ms. Remember that the video stream also has to come back along that same path, adding another 50 ms, for a combined 100 ms of delay. One tenth of a second hardly seems like much, but in a world where games render at 60 frames per second, that works out to six full frames of lag.
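To make those numbers concrete, here is a minimal back-of-the-envelope sketch. It uses the 50 ms one-way figure from the paragraph above; nothing else is assumed.

```python
# Back-of-the-envelope math for cloud gaming latency.
ONE_WAY_MS = 50              # controller input -> remote data center (figure above)
FPS = 60                     # typical render rate
FRAME_TIME_MS = 1000 / FPS   # ~16.7 ms per frame

round_trip_ms = ONE_WAY_MS * 2              # input up + video stream back
frames_of_delay = round_trip_ms / FRAME_TIME_MS

print(f"Round trip: {round_trip_ms} ms")                       # 100 ms
print(f"Frames of lag at {FPS} fps: {frames_of_delay:.0f}")    # 6 frames
```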

DeLorean attempts to overcome these challenges by using some very sophisticated mathematics to speculate about, and predict, what the person behind the controller will do next. The equations explained in the research paper are beyond the comprehension of society at large, but the following researchers and institutions evidently understand them (a deliberately simplified sketch of the speculation idea follows the credits below):

Kyungmin Lee, David Chu, Eduardo Cuervo, Johannes Kopf, Sergey Grizan, Alec Wolman, & Jason Flinn

University of Michigan, Microsoft Research, Siberian Federal University
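As promised, here is a toy sketch of the general flavor of speculative input prediction. This is not DeLorean's actual algorithm, just an illustration of the core idea: learn which inputs tend to follow which, then have the server pre-render frames for the likeliest candidates so a matching frame is ready the moment the real input arrives.

```python
from collections import Counter, defaultdict

class InputPredictor:
    """Toy next-input predictor -- NOT DeLorean's real math."""

    def __init__(self):
        # transition_counts[a][b] = how many times input b followed input a
        self.transition_counts = defaultdict(Counter)
        self.last_input = None

    def observe(self, button: str) -> None:
        """Record an actual controller input as it arrives."""
        if self.last_input is not None:
            self.transition_counts[self.last_input][button] += 1
        self.last_input = button

    def predict(self, top_n: int = 2) -> list:
        """Return the top_n most likely next inputs; a server could
        speculatively render a frame for each candidate."""
        history = self.transition_counts.get(self.last_input)
        if not history:
            return []
        return [button for button, _ in history.most_common(top_n)]

predictor = InputPredictor()
for button in ["forward", "forward", "jump", "forward", "forward"]:
    predictor.observe(button)
print(predictor.predict())  # ['forward', 'jump'] -- "forward" most often follows "forward"
```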

Whether DeLorean’s theoretical models will hold up across a vast and varied broadband infrastructure remains to be seen. Cloud gaming has been tried before and failed miserably due to infrastructure woes. OnLive was founded in 2003 and released its service to market in 2009. The service was widely panned as a laughable attempt to create a new market; press demos were conducted very close to OnLive’s data center, and real-use scenarios did not hold up nationwide.

But it’s now 2014 and much has changed. Average internet speeds are faster. Pipes have been upgraded to carry higher capacities. Heck, even Netflix and YouTube are streaming content in 4K. And game companies are ready to take the plunge once more. Sony has already built out a cloud gaming platform called PlayStation Now. Early reviews of the beta indicate acceptable performance on industrial fiber connections, while reviewers on more basic residential connections echo discontent with responsiveness.

Theoretically, DeLorean seems like a great idea. But even with these advanced algorithms, latency issues will continue to plague cloud gaming’s relevance until broadband infrastructure is upgraded.

Source: Microsoft Research via Neowin via WPCentral


A shorter lifespan for this generation of game consoles

This generation of the console war has already been decided. When Sony and Microsoft both displayed their gaming consoles at E3 2013, it was easy to observe two jarringly different approaches. Microsoft was clearly aiming to evolve the living room into an automated hub, one that relied on freshly installed data center infrastructure to create a continuous flow of information between people and their technology. It was, and still is, a revolutionary vision of what is possible in an always-connected world. Sony’s focus was on evolving what worked with game consoles in years past. Perhaps such an approach offers insight into Japanese thinking; Japan’s tradition-based society is well documented in both literature and film. Both machines use very similar components from AMD in order to make cross-platform development easier for third-party developers. Despite nearly identical architectures, Sony’s PS4 includes one differentiator relevant to game performance: the inclusion of GDDR5 memory.

When game consoles process information, it flows through a series of systems designed to deliver it as effectively as possible, with some steps handing the information off more quickly than others. One of those steps is main memory. The PS4 uses GDDR5 memory, which transfers at a maximum rate of 176 GB/s. The Xbox One uses DDR3 at a maximum transfer rate of 68.3 GB/s, plus another very small buffer step (about 0.5 percent of total memory) with theoretical transfer rates of up to 191 GB/s. This design choice is also insightful: it suggests that Microsoft’s mindset emphasizes the efficient transfer of information between systems. Performance-wise, though, GDDR5 has a clear leg up on DDR3.
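A rough sketch of what that bandwidth gap means in practice, using the peak rates quoted above: the time each bus needs to move one uncompressed 1080p frame buffer. Real-world throughput is lower than peak, so treat these as illustrative.

```python
# How long does each console's main memory bus take to move one
# uncompressed 1080p frame buffer at its quoted peak rate?
FRAME_BYTES = 1920 * 1080 * 4   # 32-bit color, ~8.3 MB per frame

def transfer_time_us(frame_bytes: int, bandwidth_gbps: float) -> float:
    """Microseconds to move frame_bytes at the given GB/s."""
    return frame_bytes / (bandwidth_gbps * 1e9) * 1e6

for name, bandwidth in [("PS4 GDDR5", 176.0), ("Xbox One DDR3", 68.3)]:
    print(f"{name}: {transfer_time_us(FRAME_BYTES, bandwidth):.0f} µs per 1080p frame")
# PS4 GDDR5: ~47 µs; Xbox One DDR3: ~121 µs
```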

Both companies have unique goals to accomplish with this generation of hardware. Japan’s Sony intends to upgrade the same experience we’ve known since last generation. America’s Microsoft, by contrast, had ambitions of introducing a wholly new experience. Microsoft has already walked back many decisions with this console based on consumer backlash; it has struggled with the consumer segment for several years and wanted to publicly grant consumers’ wishes. The problem is that the underlying philosophy of the Xbox One was never geared toward providing the consumer with the best experience.

The Xbox One was purposefully planned to help Microsoft flesh out new infrastructure and train programmers who happen to be working on games. The idea of incorporating cloud systems into areas of gaming such as AI, lighting, or geospatial deployments finds its roots in “doing more with less”, pooling resources, and a good logistical flow of information. The consumer was a secondary consideration here, and money that could have been spent on GDDR5 memory was used instead to build a few additional servers in Microsoft’s new data centers. The noticeable differences in memory performance are minimal at the moment, but as game developers better harness the power of these new systems, the disparity in quality between games will become more easily discernible. The likely effect is a shorter hardware generation for Microsoft.

Rather than the 8 years between the 360 and the Xbox One, expect a shorter timeframe before the arrival of Microsoft’s next living-room hub. In five years, broadband speeds will have doubled or tripled and UHD televisions will be mainstream. Demand for a console that can take advantage of these leaps in technology will echo through consumer forums from the US to Japan. Microsoft will have expended fewer resources than Sony by then and will have a more viable justification to build anew. It will also come out of this round with better-trained developers and the infrastructure to support cloud-based computing for a decade or two out. Sony will retain a good brand image among consumers but will have done little else in the way of future-proofing the company’s other interests or training its programmers for other industries.

Microsoft extends collaborative editing with “Matter Center”

News travels fast, but it would appear that technology travels faster.

Today, Microsoft announced its intention to bring real-time collaborative editing to its more traditional client software. Aimed at legal professionals, the aptly named “Matter Center” would port the functionality that the Web Apps now offer into desktop versions of Word, Excel, and PowerPoint. The effect would be a much more natural feel when tag-teaming your way through an immense brief.

EDIT: It would appear that simultaneous collaborative editing is still limited to Office Online Web Apps.

Other features included with this Office extension are 1 TB of cloud storage, more robust search functionality across entire portfolios, and enhanced security and permissions. Many of the slated features are already available through the more economical Office 365 Home subscription.

A pilot program will likely start within the coming months. Applications can be submitted here.

Should I be accepted into the test program, and if the terms allow for it, a more comprehensive review will be posted on this blog at some point.

Source: Microsoft Press Release via WMPoweruser

Image: Pakistan’s Supreme Court

Microsoft Office 365 real-time: live simultaneous collaborative editing

Available for enterprises and server-based setups since 2010, Office Web Apps have enabled real-time co-authoring of Office documents. What that means is that digital collaboration is no longer a huddled, contortionistic dance of pointing and rocking. The document being worked on can be opened, accessed, and edited simultaneously by two or more people on different devices; each user has a uniquely colored cursor, and changes are reflected in real time. Such functionality increases productivity and comfort while collaborating, and the benefit of additional sets of eyes is clearly positive. Proofreading can be minimized or even eliminated. Necessary redactions can be applied without the redacted issue being elaborated upon later in the document.

Much like military technology, enterprise software can eventually trickle down into mainstream use. Microsoft did just that in November 2013 by releasing this cooperative capability to subscribers of Office 365. This cloud-based solution is nearly identical to the standard client-based software in every way but one: it runs inside the web browser. This benefits users who lack a centralized server to host the editing session (the vast majority). The “Ribbon” user interface is immediately familiar to anyone who has used Microsoft Office 2007 or later. Some advanced functions have been stripped out, but fret not: for the rare scenario when an advanced function is required, the document can be saved and opened in the traditional software.

Collaborative sessions are initiated when an invitation to share is sent. The recipient can then join in and the process begins; a Skype session can also be opened for communications. At times the web app can seem unpolished, but it is mostly fine. In comparison to its older and more robust cousin, the traditional client software, it has some catching up to do: features take time to add, and polishing is a continuing process. Other options do exist, such as Google Apps, but they lack the standardization that Microsoft Office has come to represent. For the hundreds of millions of users who are comfortable in the Microsoft Office UI, Office 365 Web Apps is the ideal option for live collaborative editing.


Municipal fiber deployment: create jobs, ignore consumer benefits

Fiber optics are the fastest form of transmitting data. Every scientist in the universe would acknowledge this as truth: there is no form of matter that can physically move faster than light. Fiber optics use light as a transmission medium and flexible glass fibers to contain it. But even though the speed of light holds the cosmos’ best record, the implementation of light as a means to communicate is still a technology.

Like all technologies, it’s susceptible to improvement. Fiber optics have been used in communications for several decades and have seen enhancements along several fronts, including speed, distance, and the quality of the glass itself. Below are some record speeds that have been hit over the years (a quick sketch after the list shows the growth rates they imply):

1975: 45 Mbps

1987: 1.7 Gbps

2001: 10 Tbps

2006: 14 Tbps

2012: 1,020 Tbps
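As promised, here is a quick sketch of the growth implied by those milestones, treating the listed records as exact and converting everything to Mbps.

```python
# Record fiber speeds from the list above, in Mbps, and the implied
# average yearly growth between each pair of milestones.
records_mbps = {
    1975: 45,
    1987: 1_700,           # 1.7 Gbps
    2001: 10_000_000,      # 10 Tbps
    2006: 14_000_000,      # 14 Tbps
    2012: 1_020_000_000,   # 1,020 Tbps
}

years = sorted(records_mbps)
for prev, cur in zip(years, years[1:]):
    ratio = records_mbps[cur] / records_mbps[prev]
    cagr = ratio ** (1 / (cur - prev)) - 1   # compound annual growth rate
    print(f"{prev} -> {cur}: {ratio:,.0f}x total, ~{cagr:.0%}/year")
```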

The telecommunications industry has been using optical fiber cable for some time. In fact, much of the infrastructure that connects the internet today is composed of fiber optic cables laid at various points in time: continents, countries, states, and cities all connect to one another over fiber. It is in the “last mile”, a term commonly used to describe the lines running to individual houses, that the majority of the infrastructure remains copper. The exceptions are large-scale businesses and institutions that require large amounts of bandwidth; schools, hospitals, government agencies, and other big businesses reach agreements with telecommunications companies to have dedicated fiber run to their facilities. So what benefit would putting fiber optics in “last mile” infrastructure serve?

For many of us, there are several things that could be accomplished with such a speed upgrade. For starters, Netflix would be faster; or, if paid subscriptions aren’t your cup of tea, torrents would download more quickly. Entertainment consumption would be rid of wait times. Hosting one’s own cloud or server to access files from anywhere is another possibility. Then there’s the transmission of UHD, or 4K, content. This new video format is the evolution of HD and packs four times the pixels into a video stream, which also quadruples the amount of data each stream contains. Aside from entertainment applications, though, the need for gigabit speeds is largely superficial: social networks, email, web browsing, and gaming can run optimally at 25 Mbps.
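A small sketch of that quadrupling. The ~5 Mbps figure for a typical compressed 1080p stream is an assumption for illustration, not a quoted spec, and compressed bitrates don't scale perfectly linearly with pixel count in practice.

```python
# Naive estimate of how a compressed HD stream scales to 4K.
hd_pixels  = 1920 * 1080
uhd_pixels = 3840 * 2160
hd_stream_mbps = 5.0   # assumed typical 1080p stream bitrate (illustrative)

pixel_ratio = uhd_pixels / hd_pixels   # exactly 4x
print(f"4K carries {pixel_ratio:.0f}x the pixels of 1080p")
print(f"Naive 4K bitrate estimate: {hd_stream_mbps * pixel_ratio:.0f} Mbps")
```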

Laying down lines in neighborhoods is extremely laborious. In modern history, this type of municipal renovation was first performed in cities to make drinking water and sewage accessible, then again for electricity, telephone connections, and cable television, in that order. The logistics of such an undertaking were immensely complicated then; now that cities have grown, the task is even greater. The benefits most consumers would see from having fiber run to their homes are easy to dismiss. However, the government and American telcos have a responsibility to acknowledge the dire state of employment within the United States.

Mammoth municipal projects like these infrastructural improvements are the type of thing that creates jobs by the thousands. It would be a huge investment in technology, but large telcos are recording huge profits; it’s time they started investing back into American communities by creating more jobs and modernizing infrastructure. Until they do so, the federal government should revoke their subsidies. The public has already been made aware that the government is collecting data. What are the telcos going to do, blackmail the US government with that information? Too late…

1366 x 768: No rhymes, several reasons

For six, seven, or eight years now, the maximum resolution of LCD displays on Windows notebooks has sat, almost anti-competitively, at a meager 1366 x 768. No data has been made public about the reasoning behind the industry-wide decision to apply such a sub-standard component for use by the masses. Since no data is readily available, any hypotheses given or inferences made are purely speculative.

BUILT-IN OBSOLESCENCE

What is built-in obsolescence? The simplest way to think of it is as an engineering practice which ensures a product will eventually be rendered obsolete.

A classic film, Willy Wonka and the Chocolate Factory, exemplifies the topic well. Most anyone who’s seen it remembers the fictional “everlasting gobstopper”, a piece of candy that lasts forever. Willy Wonka was terribly concerned that old Slugworth would apprehend the secret. But what would old Slugworth, the archetype of the corporate and industrious man, do with such an invention? The analogy kind of breaks down here. In the world of candy, there are seemingly limitless combinations of flavors, colors, textures, and forms that sugar can be molded into, sold, and enjoyed. Thanks to candy’s massive catalog, other treats would continue selling despite the fact that a gobstopper can last forever.

For computing technology, there once was one option: the desktop computer. That was about it for 15 years. Then came the portable notebook. Again, that was about it for another 15 years. Only two options across 30 years? A generous description of that situation would be “slim pickings.”

In 2007, however, a burst of innovative devices and designs was pushed into existence. Apple ushered in the smartphone. Tablets followed in 2010, hybrid machines shortly thereafter in 2012, and Chromebooks entered the mainstream in 2013. At this point, there exists a fairly healthy range of options for various levels of personal computing needs. So why are Windows machines still being relegated to antiquated screen technology from the early 00s? Durability and ownership of data.

Windows notebooks tend to have sturdier construction, are more easily repairable, and have long life cycles. Despite leaps in processor and memory technology, many people can accomplish what they need with processor technology from the late 00s. Batteries can be replaced when they wear out. HDDs can be swapped out if they go bad. The OS can be reinstalled if a virus gets into the machine. The one limiting factor that can be artificially imposed by notebook manufacturers is the quality of the screen. By putting inadequate screens in their machines, manufacturers are building in obsolescence. Conversely, by releasing devices with adequate screens but fixed, limited storage, your data becomes the property of your cloud provider.

MICROSOFT’S ANTI-COMPETITIVENESS

Microsoft was the sole provider of operating systems for third parties for the longest time. The relationship between Microsoft and its OEMs grew very unilateral, with Microsoft calling most of the shots. The theory here is that disrespect on some key issues led to a deterioration in the relationship, and the attribution lies squarely with then-CEO Steve Ballmer. Known for his commerce-centric policies and copycat product design, he plausibly let some of these unfriendly underpinnings extend into corporate relations. The ultimate turning point came when, under his leadership, Microsoft itself became an OEM by releasing Surface tablets and purchasing Nokia’s mobile business. The situation must have grown utterly bitter under Ballmer.

OEM PUSH FOR POWER

The souring relationship with Microsoft aside, OEMs had by this time begun to capitalize on other technologies and form factors. In fact, many are putting their latest and greatest screens on anything and everything besides Windows notebooks. Part power play and part business move, OEMs appear poised to dictate decisions going forward.

So why are tablets and phones getting the latest and greatest?

Slab phones and tablets are tougher to repair, more prone to breakage, and generally have shorter life cycles. Broken screens, cycled batteries, limited OS support, substantial leaps in ARM processor technology, and abhorrently generous globs of adhesive all ensure shorter life cycles on these devices. More devices breaking means more devices will be purchased, and more OS options empower the OEMs as opposed to the OS creators.

MANUFACTURING FAILURES

In the early to mid 00s, the default mode of viewing information changed. Much like radio gave way to TV, or black-and-white film progressed to color, the shape of the screen was due for a generational change. The transition moved from squarish 4:3 (and 5:4) aspect ratios to the now-standard 16:9 widescreen. Common panels carried resolutions like 1280 x 1024, and these needed to change to a 16:9 pixel layout. The options were 1366 x 768, 1600 x 900, or 1920 x 1080, and the option with the fewest pixels, and thus the least risk of pixel failure, was 1366 x 768.

One suggestion is that during this time, as manufacturers began adjusting their equipment, they ran into trouble retrofitting and reprogramming their machines.

All manufacturing runs into issues whereby some of the stock each factory produces is trashed. With screens, failures can occur for a number of reasons, but one of the most prominent is “dead pixels”. Just like firecrackers, some pixels are duds. Their percentage is very small, but they do exist, and the smaller the pixel, the more susceptible it is to failure. The more pixels on a screen, the greater the chance that at least one is dead. Hence, 1366 x 768.
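A minimal sketch of that reasoning: if each pixel independently fails with some tiny probability p, the chance a panel ships with at least one dead pixel is 1 - (1 - p)^n, which grows with pixel count n. The p value below is an assumption chosen purely for illustration.

```python
# Probability of at least one dead pixel per panel, by resolution.
P_DEAD = 1e-7   # assumed per-pixel failure probability (illustrative only)

resolutions = {
    "1366 x 768":  1366 * 768,
    "1600 x 900":  1600 * 900,
    "1920 x 1080": 1920 * 1080,
}

for name, n in resolutions.items():
    p_any = 1 - (1 - P_DEAD) ** n
    print(f"{name}: {n:,} pixels -> P(at least one dead) ~ {p_any:.1%}")
# More pixels per panel means more chances for a dud -- hence 1366 x 768.
```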

MARKETING SUCCESSES

The HD specification is an abomination. To clarify, the HD specification is an abomination.

The traditional purpose of a specification is regulation. The HD specification does no such thing: it actually entails two different resolutions, 1280 x 720 and 1920 x 1080. Marketers, in their infinite trickery, have employed the lower end of the specification and advertised it, quite legitimately, as HD.

So to clarify again, “high definition” is undefined.

LOOKING FORWARD

With 4K and 8K screens hitting the market now, it’s hard to imagine that such an outdated screen resolution is still being used. It would be like monochrome monitors still being produced when the first consumer LCD monitors were released in 1999. Hopefully, these screens will stop being deployed soon.