
Destructive Environments: Up in the Cloud

During Microsoft’s Build 2014 conference, one of the many interesting projects demoed was a prototype game using both local and server resources. Displayed onscreen was the intricate dismantling of a multi-story building. When a glass pane was destroyed by the operator’s industrial laser weapon, it would shatter into hundreds of pieces. Chunks of concrete would then fall under gravity before breaking into hundreds of additional shards. The demo, later revealed to be a very early build of the next Crackdown title on Xbox One, was shown primarily to illustrate one way the Xbox One might surpass its competition on the graphics front. The disparity in performance between the DDR3 memory of Microsoft’s console and the GDDR5 memory in Sony’s PlayStation 4 has been covered in the games media ad nauseam. But having just wrapped up Build 2015, Microsoft has yet to release a title for its Xbox platform with the demoed tech.

Prototypes can sometimes be misleading, so skepticism regarding Microsoft’s claims is understandable. But in this case, the skepticism can be readily dismissed: the enmeshing of local and server resources for gaming purposes has already been accomplished. Titanfall, EA’s multiplayer-only game, delegated real-time enemy AI to Azure servers. NPC spawning, movement, and clean-up were computed “in the cloud” and then pushed to players in real time. Most importantly, Titanfall executed well. The game performs as expected.

While AI and destructible environments are two different components in a game engine, they share underlying concerns. Spatial positioning, placement, and movement are traits both components rely on. Offloading these computationally intensive tasks to a more powerful server, which then distributes the results to everyone connected to that particular game session, is a pattern that applies to both.
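Conceptually, the pattern looks something like the sketch below: the server owns the expensive simulation step and simply broadcasts the authoritative results to every client in the session. This is a minimal illustration only; the names (GameSession, simulate_debris, client.send) are assumptions, not anything from Titanfall’s or Crackdown’s actual code.

```python
import json
import time


class GameSession:
    """Hypothetical server-side session that owns the expensive simulation."""

    def __init__(self, clients):
        self.clients = clients      # connected player handles (assumed to expose .send)
        self.debris = []            # chunks produced by destruction events

    def simulate_debris(self, dt):
        # Run the costly physics step on the server rather than the console.
        for chunk in self.debris:
            chunk["vy"] -= 9.81 * dt            # apply gravity
            chunk["y"] += chunk["vy"] * dt
            if chunk["y"] <= 0:                 # chunk reaches the ground
                chunk["y"], chunk["vy"] = 0.0, 0.0

    def broadcast(self):
        # Push the authoritative positions to every client in the session.
        state = json.dumps(self.debris)
        for client in self.clients:
            client.send(state)

    def run(self, tick_rate=30):
        dt = 1.0 / tick_rate
        while True:
            self.simulate_debris(dt)
            self.broadcast()
            time.sleep(dt)
```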

This technology was examined over a year ago and has yet to show up in any publicly available games – alphas, betas, or otherwise. So where is it? Perhaps it is in HoloLand, a narrow place where the $2.5 billion Minecraft and $7.2 billion Nokia acquisitions reside.

Source: Polygon

6/18/15 UPDATE: E3 2015 has just concluded and this graphics-enhancing technology is still MIA.

8/04/15 UPDATE 2: Crackdown 3 was finally revealed at Gamescom 2015 with this tech in tow.

Cloud Gaming: Can inventive mathematics overcome infrastructure woes?

Shortly after yesterday’s post on current-gen console lifespans, Microsoft published the findings of a research project named DeLorean. Contained within is a set of algorithms and techniques intended to make cloud gaming a realistic possibility.

According to WPCentral’s Sam Sabri,

“Most gamers deem the responsiveness of their game unacceptable when the latency exceeds the 100 ms threshold. Something that isn’t that uncommon with most cellular and Wi-Fi networks.”

While this certainly holds true in multiplayer situations, the latency being referenced applies to external data. Input from a controller has historically been unaffected by network latency because of the direct connection between controller and machine. With cloud gaming, however, controller inputs are sent to a machine over internet wires, switches, and routers that may span several thousand miles. The computation happens there, and the video stream is sent back along the same path. Typical latency from Comcast Miami to the nearest Microsoft server farm is about 50 ms; the returning video stream adds another 50 ms along that same path, for roughly 100 ms of total delay. A tenth of a second hardly seems like much time, but in a world where games render at 60 frames per second, the difference is quantifiable.
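To put that in perspective, here is the back-of-the-envelope arithmetic from the paragraph above written out as a small Python snippet; the 50 ms one-way figure is the assumption stated earlier.

```python
# 50 ms out + 50 ms back = 100 ms round trip, measured against a 60 fps frame budget.
ONE_WAY_MS = 50                     # assumed latency to the nearest server farm
ROUND_TRIP_MS = ONE_WAY_MS * 2      # input up, video stream back

FPS = 60
FRAME_BUDGET_MS = 1000 / FPS        # ~16.7 ms per rendered frame

frames_of_lag = ROUND_TRIP_MS / FRAME_BUDGET_MS
print(f"{ROUND_TRIP_MS} ms round trip = {frames_of_lag:.0f} frames of lag at {FPS} fps")
# -> 100 ms round trip = 6 frames of lag at 60 fps
```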

DeLorean attempts to overcome these challenges by using some very sophisticated mathematics to speculate on and predict what the person behind the controller will do. The equations laid out in the research paper are beyond the comprehension of society at large, but the following researchers and institutions evidently understand them; a rough sketch of the underlying idea follows their names.

Kyungmin Lee, David Chu, Eduardo Cuervo, Johannes Kopf, Sergey Grizan, Alec Wolman, & Jason Flinn

University of Michigan, Microsoft Research, Siberian Federal University
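For everyone else, the core intuition can be captured in a few lines: guess the player’s most likely next inputs, render those frames speculatively before the real input arrives, and serve the matching frame if the guess was right. The sketch below is only a crude stand-in for DeLorean’s actual models, and every name in it is illustrative.

```python
from collections import Counter, deque


class InputSpeculator:
    """Toy predictor: guesses the next input from recent input frequency."""

    def __init__(self, history_len=100, top_k=3):
        self.history = deque(maxlen=history_len)    # recent controller inputs
        self.top_k = top_k

    def observe(self, user_input):
        self.history.append(user_input)

    def predict(self):
        # Return the top_k most frequent recent inputs as candidate futures.
        return [inp for inp, _ in Counter(self.history).most_common(self.top_k)]


def speculative_frame(render, speculator, actual_input):
    # Render candidate frames *before* the real input arrives, hiding the round trip...
    candidates = {inp: render(inp) for inp in speculator.predict()}
    speculator.observe(actual_input)
    # ...then serve the matching frame if the guess was right, or render late if not.
    return candidates[actual_input] if actual_input in candidates else render(actual_input)
```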

Whether DeLorean’s theoretical models will hold up across a vast and varied broadband infrastructure remains to be seen. Cloud gaming has been tried before and failed miserably due to infrastructure woes. OnLive was founded in 2003 and released to market in 2009. The service was widely panned as a laughable attempt to create a new market. Press demos were conducted very close to OnLive’s data center, and real-use scenarios did not hold up nationwide.

But it’s now 2014 and much has changed. Average internet speeds are faster. Pipes have been upgraded to carry higher capacities. Heck, even Netflix and YouTube are streaming content in 4K. And game companies are ready to take the plunge once more. Sony has already built out a cloud gaming platform called PlayStation Now. Early reviews of the beta indicate acceptable performance on industrial fiber connections, while reviewers on more basic residential connections echo discontent with responsiveness.

Theoretically, DeLorean seems like a great idea. But even with these advanced algorithms, the latency issues tied to cloud gaming will continue to plague its relevance until broadband infrastructure is upgraded.

Source: Microsoft Research via Neowin via WPCentral

A shorter lifespan for this generation of game consoles

This generation of the console war has already been decided. When Sony and Microsoft both displayed their gaming consoles at E3 2013, it was easy to observe two jarringly different approaches. Microsoft was clearly aiming to evolve the living room into an automated hub. This new living room relied on freshly installed data center infrastructure to create a continuous flow of information between people and their technology. It was, and still is, a revolutionary vision of what is possible in an always-connected world. Sony’s focus was on evolving what worked with game consoles in years past. Perhaps such an approach offers some insight into Japanese thinking; Japan’s tradition-minded society is well documented in both literature and film. Both machines use very similar components from AMD in order to make cross-platform development easier for third-party developers. Despite nearly identical architectures, Sony’s PS4 includes one differentiator relevant to game performance: the inclusion of GDDR5 memory.

When game consoles process information, that information moves through a series of systems designed to deliver it as effectively as possible. Some steps in this chain pass the information along more quickly than others. One of those steps is main memory. The PS4 uses GDDR5 memory, which transfers at a maximum rate of 176 GB/s. The Xbox One uses DDR3 at a maximum transfer rate of 68.3 GB/s. The Xbox also includes a very small buffer step (about 0.5 percent of total memory) with theoretical transfer rates of up to 191 GB/s. This design choice is also insightful: it can be inferred that Microsoft’s mindset emphasizes the efficient transfer of information between systems. Performance-wise, GDDR5 has a sizable leg up on DDR3.
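For a rough sense of what those numbers mean in practice, the snippet below estimates how long each memory pool would take to move a single 32-bit 1080p render target at its quoted peak rate. The frame-buffer size is an illustrative assumption, and real-world rates fall well short of peak.

```python
# Peak transfer rates quoted above, in GB/s.
POOLS = {"PS4 GDDR5": 176.0, "Xbox One DDR3": 68.3, "Xbox One ESRAM": 191.0}

frame_bytes = 1920 * 1080 * 4           # one 32-bit 1080p render target (~8.3 MB)

for name, gb_per_s in POOLS.items():
    ms = frame_bytes / (gb_per_s * 1e9) * 1000
    print(f"{name}: ~{ms:.3f} ms to move one 1080p buffer at peak rate")
```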

Both companies have unique goals to accomplish with this generation of hardware. Japan’s Sony intends to upgrade the same experience we’ve known since last generation. America’s Microsoft, by contrast, aimed to introduce a wholly new experience. Microsoft has already retracted many decisions with this console based on consumer backlash. The company has struggled with the consumer segment for several years and wanted to publicly grant consumers’ wishes. The problem is that the underlying philosophy of the Xbox One was never geared toward providing the consumer with the best experience.

The Xbox One was purposefully planned to help Microsoft flesh out new infrastructure and train programmers who happen to be working on games. The idea of incorporating cloud systems into areas of gaming such as AI, lighting, or geospatial deployments is rooted in “doing more with less,” pooling resources, and a good logistical flow of information. The consumer was a secondary consideration here, and money that could have been spent on GDDR5 memory was used instead to build a few additional servers in Microsoft’s new data centers. The noticeable differences in memory performance are minimal at the moment. But as game developers better harness the power of these new systems, the disparity in quality between games will become more easily discernible. The likely effect is a shorter hardware generation for Microsoft.

Rather than the 8 years between the 360 and the Xbox One, expect a shorter timeframe before the arrival of Microsoft’s next living-room hub. In five years, broadband speeds will have doubled or tripled and UHD televisions will be mainstream. Demand for a console that can take advantage of these leaps in technology will echo across consumer forums from the US to Japan. Microsoft will have exhausted fewer resources than Sony by this time and will have a more viable justification to build anew. Also flowing from this round’s decisions will be better-trained developers and the infrastructure to support cloud-based computing a decade or two out. Sony will retain a good brand image amongst consumers but will have done little else in the way of future-proofing the company’s other interests or better training programmers for other industries.