Gaming PS4 vs Xbox One - The suckiest thread in the history of suckyness

Which one will you buy?


It can't help with anything on-screen. Forza is running at 60fps, that's 16ms per frame.

Would a game need to receive sophisticated AI info with each frame though? Wouldn't it be practical to have such instructions sent once every four frames? Such a setup might suit humanlike AIs anyway, as the 64ms+ reaction time would more accurately reflect real brain latencies.
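
Something like this is roughly what I mean, just a throwaway Python sketch (the function names and the four-frame interval are mine, not anything from a real engine):

```python
# Throwaway sketch: AI decisions refreshed every few frames, rendering every frame.
# Nothing here comes from a real engine; names and numbers are made up.

AI_UPDATE_EVERY_N_FRAMES = 4   # at 60fps that's one AI update per ~64ms

def game_loop(render_frame, update_ai, frames_to_run=600):
    cached_ai_decisions = None
    for frame in range(frames_to_run):
        if frame % AI_UPDATE_EVERY_N_FRAMES == 0:
            # The expensive "thinking" only happens here, every 4th frame.
            cached_ai_decisions = update_ai(frame)
        # Rendering still happens every ~16ms and simply reuses the most
        # recent decisions in between AI updates.
        render_frame(frame, cached_ai_decisions)
```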
 
Launch title Forza 5 already makes extensive use of cloud based AI.


It calculates how you drive around a certain circuit, just so your friends can drive against an AI car that drives similarly. I really wouldn't call that extensive, and I doubt anyone would care if they canned that function tomorrow.
 
The cloud can't help with anything realtime that's on the screen, latency is far too high for that.

As for AI in general, it's not complicated; one of my specialities was ANNs and discrete programming languages. The problem with ANNs (or at least it used to be, I'm out of the loop) is that once you approach the number of neurons of a fly (which, if my memory serves me correctly, is around 80,000 for the entire nervous system and around 15,000 for the brain itself) it breaks down. Well, it breaks down in the sense that adding extra neurons doesn't provide much benefit.

Well you wouldn't use a decent neural net for gaming anyway! I mean they still don't get basic pathfinding right :lol:
 
In a game like Forza, A.I is never that complex anyway. Obviously physics is the big factor there, and you certainly couldn't use cloud for that.

AI is shit in driving games (by this I mean not very realistic rather than easy to beat). This could be a thing of the past though. I just don't see how latency affects AI really; the human brain has reaction times slower than the cloud's latency, so how does the latency affect realism?
 
AI is shit in driving games (by this I mean not very realistic rather than easy to beat). This could be a thing of the past though. I just don't see how latency affects AI really; the human brain has reaction times slower than the cloud's latency, so how does the latency affect realism?

Well for a start you can't replicate the human brain, certainly not on these systems.

Plus the problem isn't so much making a system capable of thinking as we'd expect, much like physics it's making that system fun.
 
It calculates how you drive around a certain circuit, just so your friends can drive against an AI car that drives similarly. I really wouldn't call that extensive, and I doubt anyone would care if they canned that function tomorrow.

The impression I get is that it's more sophisticated than that.

The game learns your driving style allowing AI opponents to evolve over time. If AI opponents can react to the player in this way then surely they'd be able to react to one another also? The overall effect would be much more realistic, reactive racing rather than the emotionless blobs bumping each other along predetermined racelines that we're used to.
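
To be clear about what "learning your driving style" could even mean in practice, here's a deliberately naive Python sketch. It's not how Forza's system actually works, just an illustration of the basic idea of recording a player's habits and having an AI imitate them:

```python
# Naive illustration only -- not how Forza's Drivatar actually works.
# Record where a player tends to brake for each corner and have an AI
# opponent copy that habit.
from collections import defaultdict

class DrivingStyleModel:
    def __init__(self):
        # corner id -> list of observed braking distances (metres before apex)
        self.brake_points = defaultdict(list)

    def observe(self, corner_id, brake_distance_m):
        self.brake_points[corner_id].append(brake_distance_m)

    def imitate(self, corner_id, default_m=120.0):
        samples = self.brake_points[corner_id]
        # The AI brakes where this player usually brakes, or at a stock
        # distance for corners it hasn't seen the player take yet.
        return sum(samples) / len(samples) if samples else default_m

model = DrivingStyleModel()
model.observe("turn_1", 95.0)
model.observe("turn_1", 105.0)
print(model.imitate("turn_1"))   # 100.0 -- the opponent mimics the player
print(model.imitate("turn_2"))   # 120.0 -- unseen corner, stock behaviour
```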
 
Well for a start you can't replicate the human brain, certainly not on these systems.

Plus the problem isn't so much making a system capable of thinking as we'd expect, much like physics it's making that system fun.

Obviously they can't replicate human brains. That's nevertheless the goal though, isn't it?

They can easily replicate human reaction speeds, and so I don't see how latency exceeding the frame time is relevant to AI.

Cloud based AI wouldn't be able to react immediately from one frame to the next, but real human brains don't react that fast anyway so I don't see how this is an obstacle in the way of realism.
 
The impression I get is that it's more sophisticated than that.

The game learns your driving style allowing AI opponents to evolve over time. If AI opponents can react to the player in this way then surely they'd be able to react to one another also? The overall effect would be much more realistic, reactive racing rather than the emotionless blobs bumping each other along predetermined racelines that we're used to.


Well the A.I characters will just be drivers modeled on how friends, or probably how other people, drive. It may see how you drive and pick opponents to go against you because of that. Which is fair enough, but as I said, I really doubt anyone would be that bothered if they gave you standard A.I drivers instead.
 
Well the A.I characters will just be drivers modeled on how friends, or probably how other people, drive. It may see how you drive and pick opponents to go against you because of that. Which is fair enough, but as I said, I really doubt anyone would be that bothered if they gave you standard A.I drivers instead.

:lol: feck it then!

Personally I think such developments sound really innovative and potentially game changing, but if everyone else is just happy with the emotionless computer opponents of old then who am I to argue?
 
:lol: feck it then!

Personally I think such developments sound really innovative and potentially game changing, but if everyone else is just happy with the emotionless computer opponents of old then who am I to argue?

Everyone wants real AI but that's science fiction. You make it sound like it's some trivial thing which will be unleashed by the cloud. Well, I hope you're right.
 
Everyone wants real AI but that's science fiction.

So we should just give up on progress because we're unlikely to achieve perfection? What's wrong with you people?!

"Yeah it might be an improvement, but so what, the previous lot was passable!"

What kind of gay gamer plebs are you?
 
Here I've found the quotes from the Forza developer: link


When you've got a learning neural network, more computing power is nothing but helpful.

Because what you're able to do is process a lot more information, and you don't have to do it in realtime on the box. And that frees up more of the box to be doing graphics or audio or other computational areas.

So we can now make our AI instead of just being 20 per cent, 10 per cent of the box's capability, we can make it 600 per cent of the box's capability. Put it in the cloud and free up that 10 per cent or 20 per cent to make the graphics better - on a box that's already more powerful than we worked on before.

The article claims that it's likely an exaggeration, but the numbers the guy's talking about are quite astonishing.

For argument's sake let's assume that a console can crunch in-game data at 100Gb/s:

Without cloud technology, and assuming that AI and other non-graphical components take 20% of that processing power, this gives us:
  • 20Gb/s AI
  • 80Gb/s Graphics

With cloud technology offering a further 600% of total processing power for non-graphical components, freeing up the console hardware purely for graphical processing, you get:
  • 600Gb/s AI
  • 100Gb/s Graphics

A 25% increase in graphical capabilities and a thirty-fold increase (to 3,000% of what it was) in non-graphical capabilities with the cloud, then. Even if somewhat exaggerated, I don't see how anyone can fail to be excited about this stuff.
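
And just to show the working behind those figures (the 100Gb/s baseline and the 20%/600% splits are assumptions for argument's sake, not real specs):

```python
# Back-of-the-envelope check of the figures above. The 100Gb/s baseline and
# the 20%/600% splits are assumptions for argument's sake, not real specs.

box_total = 100.0              # hypothetical console throughput, Gb/s
old_ai = 0.20 * box_total      # 20 Gb/s on AI and other non-graphics work
old_gfx = box_total - old_ai   # 80 Gb/s left for graphics

new_ai = 6.0 * box_total       # "600% of the box" moved to the cloud
new_gfx = box_total            # the whole console freed up for graphics

print(f"Graphics: {new_gfx / old_gfx:.2f}x the old capability")   # 1.25x (+25%)
print(f"Non-graphics: {new_ai / old_ai:.0f}x the old capability") # 30x (3,000% of old)
```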
 
I said you make it sound trivial and you assume for some reason that the cloud will bring you AI. Good luck with that.
 
Here I've found the quotes from the Forza developer: link




The article claims that it's likely an exaggeration, but the numbers the guy's talking about are quite astonishing.

For argument's sake let's assume that a console can crunch in-game data at 100Gb/s:

Without cloud technology, and assuming that AI and other non-graphical components take 20% of that processing power, this gives us:
  • 20Gb/s AI
  • 80Gb/s Graphics
With cloud technology offering a further 600% of total processing power for non-graphical components, freeing up the console hardware purely for graphical processing, you get:
  • 600Gb/s AI
  • 100Gb/s Graphics
A 25% increase in graphical capabilities and a thirty-fold increase (to 3,000% of what it was) in non-graphical capabilities with the cloud, then. Even if somewhat exaggerated, I don't see how anyone can fail to be excited about this stuff.

It's a load of bollocks!
 
If Everquest Next is coming out for PS4, that is a better 'exclusive' killer app than anything the Xbone has shown so far. Looks fantastic, and a proper next-gen MMO.
 
Neural networks sadly don't scale linearly.

Generally speaking, the only thing impressive about the "cloud" is that it exists. You can do offline AI computation (heck, offline computation in general) without cloud computing, and people have been doing that for years. Heck, you can do cluster-based computing without Microsoft's cloud solution, and we've been doing that for decades (PS3s running Folding@Home is a "gaming" example). My guess is that Microsoft is going to let developers use Azure at a somewhat reduced rate.

This is simply about Microsoft trying to get the most out of its Azure infrastructure, since the more it is utilised, the more money it makes.

 
Are you being serious?

I need to ask before I laugh at you!

Yes I'm serious. I can understand why graphics need to be freshly rendered every 16ms, but I don't understand why AI needs to be updated as quickly.

Obviously a certain degree of non-graphical processing needs to be done in realtime in order to tell the console exactly what to render every 16ms for each frame, but for more complicated algorithms such as AI decision making I don't see why that can't be achieved even with a 60ms latency.
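
As a sketch of what I'm imagining, in Python: the console fires an AI request off to a server and keeps rendering at 60fps while it waits, falling back to dumb local AI if the answer is late. The "cloud_ai" service and the 60ms figure are made up for illustration:

```python
# Sketch of the idea, not any real API: request an AI decision from a remote
# server and keep rendering while waiting, with a dumb local fallback.
import concurrent.futures
import time

def cloud_ai(game_state):
    time.sleep(0.06)                              # pretend ~60ms round trip + think time
    return {"steer": -0.2, "throttle": 0.8}       # the "smart" decision

def local_ai(game_state):
    return {"steer": 0.0, "throttle": 1.0}        # dumb but instant fallback

executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
pending = None
decision = local_ai({})

for frame in range(120):                          # ~2 seconds at 60fps
    if pending is None:
        pending = executor.submit(cloud_ai, {"frame": frame})
    elif pending.done():
        decision = pending.result()               # adopt the smarter answer when it lands
        pending = None
    # render_frame(decision) would go here; it never blocks on the network
    time.sleep(0.016)

executor.shutdown()
```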

It's a load of bollocks!

How so?
 
Yes I'm serious. I can understand why graphics need to be freshly rendered every 16ms, but I don't understand why AI needs to be updated as quickly.

Obviously a certain degree of non-graphical processing needs to be done in realtime in order to tell the console exactly what to render every 16ms for each frame, but for more complicated algorithms such as AI decision making I don't see why that can't be achieved even with a 60ms latency.



How so?

As I've said before, it can help with things that are not happening on the screen; it cannot help with things happening in real-time, as the latency is just too great.

You seriously underestimate the human brain BTW.
 
Your TV can introduce as much as 150ms of latency alone, and it's plugged into the console with a 1ft cable. Processing takes time, information passing back and forth to servers takes time. It's not useable in real-time.

What would you estimate the full latency would be, in milliseconds, for the console to send the gamestate to the cloud server, and for the server to crunch the AI decision making and send said decisions back to the console?

Your assertion above made out that if this period of time was any greater than 16ms then it'd be unusable. Obviously for graphics rendering this would be the case if one wishes to achieve 60fps, but you haven't explained why other game components cannot possibly take longer than 16ms to process as you originally asserted.

Using Forza as an example: if I take a corner wide then I wouldn't expect any AI opponent to respond to that within 16ms as neither would I expect any real racing driver to respond so fast. Why can't such AI computations take a little longer then, just as would be the case in a biological brain?

I googled human reaction speed and it says between 150 and 300ms. That's instinctive reaction though, without decision making factored in.

Isn't it true that entire games can be played via the cloud using existing technology? Though they likely won't achieve 60fps this way, they can't be far off in terms of latency, otherwise the controls would be fecked and the game unplayable.
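
To illustrate the reaction time point above, here's a toy Python sketch where the AI only ever reacts to where the player was a dozen frames ago, which is roughly a human-like 200ms. Any network delay smaller than that window would be invisible. All the numbers are invented:

```python
# Toy version of the reaction-time point: the AI only ever sees the player's
# position from ~12 frames (~200ms) ago, so a sub-100ms network delay would
# sit comfortably inside that window. All numbers invented.
from collections import deque

REACTION_DELAY_FRAMES = 12

history = deque(maxlen=REACTION_DELAY_FRAMES)

def ai_reacts_to(player_position):
    history.append(player_position)
    if len(history) < REACTION_DELAY_FRAMES:
        return None                   # nothing "old enough" to react to yet
    return history[0]                 # the stale snapshot the AI acts on

for frame in range(16):
    print(frame, ai_reacts_to(frame)) # player "position" is just the frame number
```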
 
Here are some numbers from a study on services such as OnLive:

http://www.iis.sinica.edu.tw/~swc/onlive/onlive.html

To summarize, OnLive's overall streaming delay (i.e., the processing delay at the server plus the playout delay at the client) for the three games is between 135 and 240 ms, which is acceptable if the network delay is not significant. On the other hand, real-time encoding of 720p game frames seem to be a burden to SMG on an Intel i7-920 server because the streaming delay can be as long as 400-500 ms. Investigating whether the extended delay is due to design/implementation issues of SMG or it is an intrinsic limit of software-based cloud gaming platforms will be part of our future work.

TVs have higher latencies than monitors, too. Toss in another 50ms or so (which I think is towards the better end).

Bandwidth is another consideration. You had developers complaining about the RAM on various consoles this generation. The bandwidth of RAM is in the tens to hundreds of gigabytes per second. The average US broadband connection has a download speed of about 1 MB/s.

The fluctuation in these numbers will also be large in comparison to two pieces of metal in your console. This would posit a design that is somehow "additive" in nature in the sense that the console must be able to render a frame with varying levels of information.

For example, for simplicity, if there were 4 pixels to render, a naive design would be to hand off 1 pixel to the cloud, and 3 to the console itself. But that 1 pixel might not arrive in time. So what do you do? Render nothing for that pixel? You would actually need to render all 4 pixels on the console (possibly badly), enriching it with that extra information from the cloud if it arrives. This can be a pretty difficult operation because not all operations "add" together nicely (what does it mean to add two textures together?). The "addition" of information also takes processing power (processing power taken away from real-time processing).
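
As a rough sketch of that "additive" design (hypothetical names and values throughout): the console always produces a usable frame on its own, and the cloud-computed extras only get layered on if they beat the deadline.

```python
# Sketch of an "additive" frame: the console always renders a usable frame on
# its own, and cloud-computed extras are layered on only if they arrived before
# the deadline. Hypothetical names and values throughout.

def render_frame(base_lighting, cloud_lighting=None):
    # Base pass: guaranteed, computed locally inside the 16ms budget.
    frame = [p * base_lighting for p in (0.2, 0.5, 0.7, 0.9)]    # four "pixels"
    if cloud_lighting is not None:
        # Enrichment pass: only applied when the cloud data beat the deadline.
        frame = [min(1.0, p + cloud_lighting) for p in frame]
    return frame

print(render_frame(base_lighting=1.0))                       # cloud data missed the frame
print(render_frame(base_lighting=1.0, cloud_lighting=0.1))   # cloud data arrived in time
```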

On a slow connection, you can see the effects on sites like YouTube, where you sometimes hit the limit of the buffering and the audio/video stutters/blurs before it smooths itself out again - you definitely notice it. And that's not vector processing - it's just compressed video, which is pretty easy in comparison.
 
But but AI? You don't need it every time, it'll be calculated and thrown back and you'll have a virtual Michael Schumacher giving you the finger after throwing you off track!!1
 
Here are some numbers from a study on services such as OnLive:

http://www.iis.sinica.edu.tw/~swc/onlive/onlive.html



TVs have higher latencies than monitors, too. Toss in another 50ms or so (which I think is towards the better end).

Bandwidth is another consideration. You had developers complaining about the RAM on various consoles this generation. The bandwidth of RAM is in the tens to hundreds of gigabytes per second. The average US broadband connection has a download speed of about 1 MB/s.

The fluctuation in these numbers will also be large in comparison to two pieces of metal in your console. This would posit a design that is somehow "additive" in nature in the sense that the console must be able to render a frame with varying levels of information.

For example, for simplicity, if there were 4 pixels to render, a naive design would be to hand off 1 pixel to the cloud, and 3 to the console itself. But that 1 pixel might not arrive in time. So what do you do? Render nothing for that pixel? You would actually need to render all 4 pixels on the console (possibly badly), enriching it with that extra information from the cloud if it arrives. This can be a pretty difficult operation because not all operations "add" together nicely (what does it mean to add two textures together?). The "addition" of information also takes processing power (processing power taken away from real-time processing).

On a slow connection, you can see the effects on sites like YouTube, where you sometimes hit the limit of the buffering and the audio/video stutters/blurs before it smooths itself out again - you definitely notice it. And that's not vector processing - it's just compressed video, which is pretty easy in comparison.

That study is two years old, and despite all the obstacles reported, OnLive manages to work. It's an interesting piece, but not entirely relevant to the Xbox. Remove the crucial graphics component, as this would be processed by the console, and where does that leave you? You have to also acknowledge that such technology has been developed with a full 4G rollout in mind, so the 1MB/s average download speed restriction will soon be (already is, in fact) entirely ancient.
 
Cider, just give it up! Really, the spiel from Microsoft isn't what is going to happen. The reality is that it cannot happen, it's impossible. I'm a computer scientist telling you that it cannot happen!

If Sony was putting out this bullshit, I'd also say that they are talking bollocks.
 
Cider, just give it up! Really, the spiel from Microsoft isn't what is going to happen. The reality is that it cannot happen, it's impossible. I'm a computer scientist telling you that it cannot happen!

If Sony was putting out this bullshit, I'd also say that they are talking bollocks.

You don't seem able to say why it cannot happen though.

Many infinitely more prominent and accomplished computer scientists seem to think it rather important so forgive me if I don't take your word for it, man.
 
Despite all that OnLive manages to work. It's an interesting piece but not entirely relevant to the Xbox.

Remove the graphics component as this would be processed by the console and where does that leave you?

It is relevant in the sense that some processing is done off-site and even then, OnLive just barely works, with noticeable graphical tears, slowdown and jerkiness.

I don't see why you need to assume the graphical component will be done on the console, when things like textures and lighting have been explicitly mooted for cloud processing on the Xbox One.

Games are fine with our reaction speed - for now. Slap on another 150ms? That starts to hurt.

Offline processing? Perfectly fine. Turn-based games like Civilization? Perfectly fine. Real-time AI? Nope.

You have to also acknowledge that such technology has been developed with a full 4G rollout in mind.

What does 4G have to do with it? The Xbox One isn't a mobile console.

You don't seem able to say why it cannot happen though.

Many infinitely more prominent and accomplished computer scientists seem to think it rather important so forgive me if I don't take your word for it, man.

"Cloud" is a buzzword. Experts dislike that word in itself.

When it comes to cluster-based software and SaaS, then yes, various computer scientists think it is important. However, latency and security are the biggest concerns when it comes to these things. "Cloud" isn't a magic bullet, but a way of scaling well and improving reliability at the expense of latency. It's great for batch processing and massively-parallel workloads, but latency is the main performance concern. Little beats pieces of metal millimetres away from each other in that regard.
 
All of this stuff works when time dependence is not important. Take for example raytracing a scene: on your computer at home it might take several hours, yet if you farm it off to a massive server cluster it might take 20 seconds. The problem is that games are real-time systems; they need the data on a per-frame basis, which is, for example, every 16 milliseconds... and here we have a serious problem. If you can get the data out, processed, and back again faster by using the cloud, great, but it depends how time-dependent you are. The reason that Amiga games didn't look like PS3 games (they could, without a problem) is that they would run at one frame per hour.
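
A trivial way to put the same point, with invented numbers: the identical workload passes or fails purely on whether it has to fit inside the frame budget.

```python
# Trivial illustration of the time-dependence point, with invented numbers:
# the same job is fine offline but hopeless inside a 16ms frame budget.

FRAME_BUDGET_MS = 16

def fits_in_frame(work_ms, round_trip_ms=0):
    return work_ms + round_trip_ms <= FRAME_BUDGET_MS

print(fits_in_frame(work_ms=4))                      # True: cheap, local work
print(fits_in_frame(work_ms=4, round_trip_ms=100))   # False: cloud round trip blows the budget
print(fits_in_frame(work_ms=3_600_000))              # False: the "raytrace for an hour" case
```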
 
As said, partial cloud should be useful for some things, but anything actually affecting the user is either full cloud or not at all.

Besides, we are talking video games here. As I said earlier, you don't want 'real life' physics and A.I, because it just wouldn't be fun for most!

At this level you really wouldn't be able to tell the difference between adaptive A.I and clever pathfinding techniques anyway. And it's the latter that's mostly used, but that hasn't moved on at all. That's where the problem lies, not in a lack of NNs and cloud help.
 
Correct, the boxes themselves don't have the horsepower to use real AI techniques, yet AI needs to be done on a frame-by-frame basis, making server clusters useless. The AI used in games is therefore smoke and mirrors, as are graphics to a great extent; it's just that one has moved on and the other has not, mainly because people notice one far more than the other. People are not wowed by false AI techniques, yet they are by millions of particles flying all over the show.

Going back to the OnLive example, it would be far more beneficial to do it that way, as the same cluster would be doing the AI, running the game code and rendering the scenes. In terms of latency that would be much preferable to having your own box deal with the game code and graphics and then trying to farm out the AI and physics. You might be able to get down to 150ms in general cases with that model. Depending on the horsepower available, you might even be able to pull off real-time raytracing. How much does a Roadrunner per user cost, though?
 
The problem I see with it is what happens to a game that heavily uses cloud compute and your net connection goes down? Does the game stop working? Does it just downgrade what it does?

What happens say if you lose your job and you have to cancel your net connection?

Focus on getting a new job then worry about playing Xbox?
 
I just saw The Division isn't even released till Q4 2014 :( Not even excited about the launch of the consoles anymore.