Quantic Dream: Xbox Series X’s Advantage Could Lie in Its Machine Learning-Powered Shader Cores

Especially when he said 10.28 TF is peak performance. So what is the base clock then? Lol. I won’t trust him either. Too shady.

The PlayStation 5 GPU is 9.2 TF overclocked to 10.3 TF. If developers choose to prioritize the CPU over the GPU, then 9.2 TF will be the low point. I do expect the majority of developers to prioritize the GPU over the CPU, though.
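For reference, the TFLOPS figure falls straight out of the clock speed. A minimal sketch, assuming the widely reported 36 CUs with 64 ALUs each and 2 FLOPs per ALU per cycle (FMA):

```python
# Theoretical single-precision throughput: CUs * ALUs per CU * 2 FLOPs (FMA) * clock.
# 36 CUs and 64 ALUs/CU are the widely reported PS5 GPU figures, assumed here.
def tflops(cus, clock_ghz, alus_per_cu=64):
    return cus * alus_per_cu * 2 * clock_ghz / 1000

print(tflops(36, 2.23))  # ~10.28 TF at the 2.23 GHz peak clock
print(tflops(36, 2.00))  # ~9.2 TF at a hypothetical 2.0 GHz floor
```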

2 Likes

Another thing that no one considers when making such claims is that performance does not scale linearly with overclocking on PC. Quite the contrary: a 20-30% overclock often gives low single-digit performance gains.

So the assumption that performance will scale linearly even when it’s at 2.23 GHz is wrong. (And you can literally test that on a PC GPU with liquid cooling by locking the clocks static at 2.X+ GHz.)
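To put rough numbers on that: the theoretical uplift is just the clock ratio, but frame rates usually land well below it because memory bandwidth and fixed-function hardware don’t speed up with the cores. A quick sketch; the fps figures here are illustrative, not measurements:

```python
# Theoretical FLOPS scale linearly with core clock; real frame rates usually don't,
# since memory bandwidth, caches and fixed-function units stay at their own clocks.
base_clock, oc_clock = 1.9, 2.23          # GHz, illustrative RDNA-class clocks
print(f"theoretical: +{oc_clock / base_clock - 1:.1%}")  # ~ +17%

base_fps, oc_fps = 60.0, 63.0             # hypothetical fps, for illustration only
print(f"observed:    +{oc_fps / base_fps - 1:.1%}")      # ~ +5%, well below linear
```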

1 Like

Cerny says neither will drop much, so it’s best to ignore any tiny fluctuations, tbh.

Where did he contradict himself? o.0

Even at 10.28 TF peak performance, there are diminishing returns on overclocking the GPU. You can see this on a 5700 XT overclocked to 2.2 GHz: it gets at most 2-5 more fps. I know that’s RDNA 1, but it will be interesting to see the effect on RDNA 2.

1 Like

One example was using geometry processing to justify the higher clocks, as geometry processing is hard to parallelize due to the unified index buffer.

But he ignored all the advancements around that limitation, including the Primitive Shaders he himself announced, which, as shown by Epic in UE5, can also be used to bring the full CU processing power to bear on geometry and make it possible to drastically increase the polygon density of the scene with better framerates than the old pipeline.

1 Like

The UE5 example is maybe too specific to hold against him, imho.

I am more curious to find out what the CPU performance would be in a typical game when the GPU is at 10 TF. Cross-gen games will probably not stress the CPU much, since they need to run on gen-8 consoles.

1 Like

Not really. Epic was specifically calling out how modern GPUs have so much power and flexibility that they can finally handle complex geometry, and that was also a key talking point for DX12 and Nvidia GPUs even at last year’s GDC.

Basically, GPUs are now so flexible that you no longer need to feed your geometry in through one of the few expected fixed-function paths; you can actually take a “software approach” to processing it. And that is also one of the goals of the Geometry Engine he announced right after making that claim.
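As a concrete (if simplified) illustration of what “software” geometry processing means: instead of feeding a single index buffer through the fixed pipeline, a compute-style pass can chew through all triangles in parallel, cull the ones that can’t be visible, and emit a compacted index list. A minimal numpy sketch of that idea; the mesh data is made up:

```python
import numpy as np

# A compute-style geometry pass: cull back-facing triangles in bulk and compact
# the survivors into a new index buffer. On a real GPU this runs as a
# compute/primitive/mesh shader over many triangles in parallel.
verts = np.random.rand(1000, 3).astype(np.float32)    # hypothetical vertex buffer
tris = np.random.randint(0, 1000, size=(5000, 3))     # hypothetical index buffer
view_dir = np.array([0.0, 0.0, 1.0], dtype=np.float32)

v0, v1, v2 = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
normals = np.cross(v1 - v0, v2 - v0)       # per-triangle face normals
front_facing = normals @ view_dir < 0      # backface test, all triangles at once

compacted = tris[front_facing]             # compacted index buffer for the rasterizer
print(f"kept {len(compacted)} of {len(tris)} triangles")
```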

I don’t doubt that his Road to PS5 video was also influenced by others at Sony. PlayStation knows marketing; they know “power” and “tflops” are big marketing points, and they have used them many times in the past. Some of the language in Road to PS5 was presented and chosen to give the best impression possible. Hopefully we get more details on how the boost system works; as I mentioned in the post prior to the one you replied to, Cerny stated some info which contradicts itself or is not explained in any detail.

I really want this question answered: if only a few percent of downclocking is required, why not run the SoC locked at those slightly reduced clocks? Surely the advantage of a static clock would outweigh the benefit of a 100 MHz higher clock.

1 Like

As per Cerny, in the worst-case scenario the frequency on both CPU and GPU only drops by about 5% (something in line with the “couple of percent” of frequency Cerny mentioned).

This implies that, at that point, the CPU will be at 3.325 GHz and the GPU at 2.1185 GHz.
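The arithmetic, assuming peak clocks of 3.5 GHz (CPU) and 2.23 GHz (GPU) and taking the drop as a flat 5%:

```python
# Worst-case clocks under a flat 5% drop from the announced peaks.
cpu_peak, gpu_peak = 3.5, 2.23   # GHz
drop = 0.05
print(cpu_peak * (1 - drop))     # 3.325 GHz
print(gpu_peak * (1 - drop))     # 2.1185 GHz
```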

Now to the contradiction:

Statement 1: A CPU at 3 GHz and a GPU at 2 GHz was not possible on the PS5 SoC without SmartShift.

Statement 2: Now, with SmartShift, a 3.325 GHz CPU and a 2.1185 GHz GPU run fine.

S1 and S2 contradict each other.

Although I do have some understanding of how it is possible, the problem is that Cerny himself doesn’t clear it up, so that understanding is an assumption at this point. Without that assumption, these two statements remain contradictory.
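For what it’s worth, the usual reading that resolves S1 vs S2 is that the SoC is power-limited, not frequency-limited, and power depends on the workload as well as the clock. A toy model of that reading; the budget, coefficients and activity factors are pure assumptions (Sony has published none of them), just to show how shifting watts between the two blocks dissolves the apparent contradiction:

```python
# Toy model of the power-limited reading: the SoC has a fixed power budget, and
# power depends on workload ("activity") as well as clock, roughly k * f^3 * activity.
# Every constant here is invented for illustration; Sony has published none of them.
BUDGET = 140.0                        # watts, hypothetical shared SoC budget
K_CPU, K_GPU = 1.2, 14.0              # hypothetical per-block power coefficients

def power(k, f_ghz, activity):
    return k * f_ghz ** 3 * activity  # activity in [0, 1]: how hot the running code is

# S1: fixed clocks must be guaranteed for worst-case code (activity = 1.0 on both):
print(power(K_CPU, 3.0, 1.0) + power(K_GPU, 2.0, 1.0))        # ~144 W > BUDGET -> not possible

# S2: with SmartShift the split tracks the *actual* workload; typical activity is lower:
print(power(K_CPU, 3.325, 0.8) + power(K_GPU, 2.1185, 0.75))  # ~135 W <= BUDGET -> runs fine
```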

1 Like