Quantic Dream: Xbox Series X’s Advantage Could Lie in Its Machine Learning-Powered Shader Cores

Both consoles use the same CPU (clocked slightly faster on Xbox Series X). The Xbox GPU also seems more powerful, as it is 16% faster than the PS5 GPU, with memory bandwidth that is 25% higher. SSD transfer speed, however, is twice as fast on PS5.

The Xbox’s shader cores are also better suited to machine learning, which could be an advantage if Microsoft succeeds in implementing an equivalent to Nvidia’s DLSS (an AI-based image upscaling technique built on neural networks).

This is something I mentioned in the other thread, where it looks like the DirectML support has been added to the shaders and not, as I thought, the CPU:

With over 12 teraflops of FP32 compute, RDNA 2 also allows for double that with FP16 (yes, rapid-packed math is back). However, machine learning workloads often use much lower precision than that, so the RDNA 2 shaders were adapted still further.

“We knew that many inference algorithms need only 8-bit and 4-bit integer positions for weights and the math operations involving those weights comprise the bulk of the performance overhead for those algorithms,” says Andrew Goossen. “So we added special hardware support for this specific scenario. The result is that Series X offers 49 TOPS for 8-bit integer operations and 97 TOPS for 4-bit integer operations. Note that the weights are integers, so those are TOPS and not TFLOPs. The net result is that Series X offers unparalleled intelligence for machine learning.”
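As a quick sanity check, those figures line up with simple doubling at each precision step; a back-of-the-envelope sketch, assuming throughput doubles every time the operand width halves:

```python
# Back-of-the-envelope check on the quoted figures, assuming throughput
# doubles each time operand precision halves (FP32 -> FP16 -> INT8 -> INT4).
fp32_tflops = 52 * 64 * 2 * 1.825 / 1000  # 52 CUs x 64 ALUs x 2 ops/clock x 1.825 GHz

fp16_tflops = fp32_tflops * 2  # rapid packed math: ~24.3 TFLOPS
int8_tops = fp16_tflops * 2    # ~48.6, matching the quoted ~49 TOPS
int4_tops = int8_tops * 2      # ~97.2, matching the quoted ~97 TOPS

print(f"FP32 {fp32_tflops:.2f} TFLOPS, INT8 {int8_tops:.1f} TOPS, INT4 {int4_tops:.1f} TOPS")
```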

Interesting times ahead, especially if MS gets its own (or AMD’s) version of DLSS working, above all on Series S. It makes sense for MS to add that support to the Series X hardware, since it could also be used in Azure server blades, not just for streaming xCloud content but also for performing ML tasks in Azure workloads.

This was part of MS’s plan to increase the return on the investment made in creating the Series X hardware. That hardware was designed with input from the Azure team.
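For anyone curious what using DirectML actually looks like from the developer side, here’s a minimal sketch using the onnxruntime DirectML execution provider; the model file upscaler.onnx and its input shape are hypothetical placeholders, not anything MS has shipped:

```python
# Minimal sketch: running an ONNX model on the GPU via DirectML.
# Requires the onnxruntime-directml package (Windows). The model file
# "upscaler.onnx" and its input shape are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "upscaler.onnx",
    providers=["DmlExecutionProvider"],  # DirectML-backed execution
)

# Feed a dummy 1080p RGB frame; a super-resolution model would return
# an upscaled image (e.g. toward 4K) as its output tensor.
frame = np.random.rand(1, 3, 1080, 1920).astype(np.float32)
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: frame})
```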

7 Likes

It’s still unconfirmed, but by the looks of this it appears that even the RX 6000 series GPUs don’t have INT8 and INT4 support.

2 Likes

Which appears to add to the evidence that ML support was something MS added on top of the full RDNA 2 menu they ordered from AMD. No word yet on whether Sony did the same for PS5.

1 Like

Lol at people who keep saying the XSX GPU is only 16% faster. It’s 18% faster in the best possible scenario, when the PS5 GPU is running at 10.28 TF; it’s probably going to run at around 10 TF most of the time, which is roughly a 21% difference.
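The arithmetic, for anyone who wants to run it themselves:

```python
# GPU gap depending on what the PS5 actually sustains.
xsx_tf = 12.155

for ps5_tf in (10.28, 10.0):
    gap = (xsx_tf / ps5_tf - 1) * 100
    print(f"vs {ps5_tf} TF: ~{gap:.1f}% faster")
# 12.155 / 10.28 -> about 18% faster; 12.155 / 10.0 -> about 22% faster
```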

3 Likes

MS did say they had to request that it be added for XSX. It’s certainly strange, though. How did AMD and Sony both not think of it?

3 Likes

Andrew Goossen told us that first sentence there. What you outlined is 100% explicitly confirmed, factual info. :slight_smile:

Rosario Leonardi (Principal Graphics Engineer @ Sony) said PS5 had no ML stuff at all.

2 Likes

I keep hearing that PS5 is custom RDNA 2, but so far it appears to me that XSX beats PS5 in this ‘custom’ category as well.

But man, that SSD. I believe it will surely add at least 10 fps, given the hype it’s getting.

Lol

1 Like

Interesting that QD is saying this. I honestly can’t wait to see games making full use of ALL the tech it offers; it should be a sight to behold.

1 Like

Can you guys help me with the math here? I get the theoretical peak of 10.28 TF when the PS5 is pushing the GPU to its limits. But with SmartShift and Sony’s vague implementation of it, how can we truly measure the CPU or GPU, particularly in real-world performance, when Sony has only given the theoretical max of each, which (likely?) will never be hit concurrently?

Or are we taking Mr. Cerny at his word about a “few percent” drop and making estimates?

And that’s around 20% assuming performance on the overclocked PS5 GPU scales linearly with clock speed, which is not what usually happens, far from it. And Sony doesn’t even have the balls to say what the base clock of this thing is, just to make people think it really is a 10 TFLOPS machine, as everybody is reporting.
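Since Sony hasn’t published a base clock, the best anyone can do is plug candidate clocks into the standard TFLOPS formula; a quick sketch using the known CU counts:

```python
# Theoretical FP32 throughput: CUs x 64 ALUs x 2 ops/clock x clock.
def tflops(cus: int, clock_mhz: float) -> float:
    return cus * 64 * 2 * clock_mhz / 1e6

print(tflops(36, 2230))  # PS5 at max boost:              ~10.28 TF
print(tflops(36, 2180))  # a "few percent" clock drop:    ~10.05 TF
print(tflops(36, 2000))  # the 2 GHz fixed-clock figure:  ~9.22 TF
print(tflops(52, 1825))  # Series X, fixed clocks:        ~12.15 TF
```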

1 Like

Only devs know for sure, but my gut is telling me it’s a 10 TF machine. Cerny and his boost clock shenanigans are a contradiction: he said fixed clocks were overheating at 3 GHz for the CPU and 2000 MHz for the GPU, yet he claims that in the worst-case scenario the clocks only need a few percent reduction. So the PS5, according to Cerny, can run at around 3.4 GHz and 2180 MHz; well, I’m thinking, if it can run at those clocks all the time, why the hell was it overheating at 3 GHz + 2000 MHz?

Sorry if I’ve turned my answer to you into another question.

1 Like

You did! :rofl:

But it works to illustrate my point. The variability in the PS5 makes comparisons difficult. And from my stance as a consumer, the red flags coming from Sony raise my hackles. So much spin and dodginess. At work, I’d have dropped a vendor like a hot potato for marketing like this.

I really do want to dip into the shallow end of Sony’s IP for the games I couldn’t play on PS3 and missed by not buying a PS4. But this console raises too many questions, and I still don’t trust PlayStation engineering after the PS3 and PS4 (nor their back-compat situation!). I guess I should resign myself to waiting for a revision, if they do one.

I have been saying and wondering about this since the presentation. It’s really baffling.

1 Like

Until we have a good number of real-world examples proving otherwise, yes, it’s best to give Cerny’s claims the benefit of the doubt. Not that I trust him all that much, but the alternative is opening ourselves up to wild speculation from various “experts”, which is even worse. Give it some time, and we’ll find out the truth.

1 Like

Trust Cerny. Presume it’s a 12.155 vs 10.28 TF comparison.

I don’t trust someone who contradicts themselves.

1 Like

OK. But at this GPU performance, the CPU drops to what? That then has cascading effects on frame rate and more (and on like-for-like comparisons across platforms).

I’m not at “don’t trust” yet (outside of heat and acoustic management, based on the PS3 and PS4/Pro). But given what little I’ve digested of non-linear overclock performance (and some personal experimentation here), plus SmartShift, I’m skeptical.

Edit: I think @TavishHill clears up the math, though. Performance percentage differences always seem to be based on the GPU. So @LifeForms percentages can be put in perspective if we assume full theoretical performance at Sony’s high clock speeds.

That’s why you should read carefully what MS said recently…:wink:

Exactly! You are 100% right.

“All of these next-generation capabilities are available via hardware in both the Xbox Series X and Series S and we are excited for them to also come to PC, providing a common set of features that developers can rely on when developing their games across console and PC.”

“To deliver on this vision we wanted to leverage the full capabilities of RDNA 2 in hardware from day one. Through close collaboration and partnership between Xbox and AMD, not only have we delivered on this promise, we have gone even further introducing additional next-generation innovation such as hardware accelerated Machine Learning capabilities for better NPC intelligence, more lifelike animation, and improved visual quality via techniques such as ML powered super resolution.”

They clearly said it: they have gone further by adding next-gen innovation like hardware-accelerated machine learning.

By “hardware” it clearly means there is dedicated silicon support for this, but naysayers won’t accept it. This console is a beast.

1 Like

In this circumstance, Cage was referring to the GPU.

But you’re right, the GPU is only one part of the equation; the higher memory bandwidth is just as important. In PC GPUs, more bandwidth generally results in better performance.
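For reference, that bandwidth gap falls straight out of the bus widths (both consoles use 14 Gbps GDDR6):

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) x data rate in Gbps.
def peak_bandwidth(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(peak_bandwidth(320, 14))  # Series X, 10 GB fast pool: 560 GB/s
print(peak_bandwidth(256, 14))  # PS5: 448 GB/s
print(560 / 448 - 1)            # 0.25 -> the "25% faster" figure from the article
```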