Machine learning, and in particular Nvidia's DLSS, has shown amazing results, and hints at what we can expect as this tech matures further.
So what can we really expect to see from the XSX with regards to ML?
The XSX lacks the dedicated Tensor Cores that Nvidia's RTX cards have, and as a result it doesn't match the low-precision inference throughput (INT8/INT4 ops) of those RTX cards.
However, the XSX has hardware support for accelerated INT8 and INT4 math that allows it to perform ML calculations above and beyond the PS5 and RDNA 1 cards.
On top of this, there is more to ML than just having high INT throughput on the card itself. There is software required, and luckily MS has DirectML for this, as well as supercomputer banks that can do the heavy lifting (training the models) outside the console. And as we know, MS has as much supercomputer capacity as anyone.
Now, while it's true the XSX doesn't have dedicated Tensor Cores and all ML work will need to be done on the shaders themselves, generating an image through ML inference costs less than rendering it natively on those same shaders, so the XSX will still be more efficient using ML than traditional shader work.
For us that means the XSX could output a 4K image for roughly what it would normally cost to output, say, a 1440p image, freeing up GPU time for higher frame rates.
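To put rough numbers on that claim, here's a back-of-envelope sketch. The 1.5 ms upscale cost is an assumption for illustration, not a measured figure, and it assumes shading cost scales linearly with pixel count:

```python
# Back-of-envelope: render internally at 1440p, then ML-upscale to 4K,
# vs. rendering 4K natively. All costs are illustrative assumptions.

native_4k_pixels = 3840 * 2160   # 8,294,400 pixels
internal_pixels  = 2560 * 1440   # 3,686,400 pixels (about 44% of 4K)

frame_budget_ms  = 16.7          # 60 fps frame budget
native_cost_ms   = frame_budget_ms  # suppose native 4K exactly fills 60 fps

# Assume shading cost scales with pixel count, plus a hypothetical
# ~1.5 ms GPU cost for the ML upscale pass itself.
internal_cost_ms = native_cost_ms * internal_pixels / native_4k_pixels
upscale_cost_ms  = 1.5
total_ml_ms      = internal_cost_ms + upscale_cost_ms

print(f"native 4K:          {native_cost_ms:.1f} ms/frame")
print(f"1440p + ML upscale: {total_ml_ms:.1f} ms/frame")
```

Under those assumptions the ML path finishes a "4K" frame in roughly 8.9 ms instead of 16.7 ms, and the saved milliseconds can go toward higher frame rates or richer shading.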
Exactly how many devs will use this, I'm not sure, as very few have opted to use DLSS at this point either.
DLSS is great, but texture upscaling at runtime is even better, especially paired with VRR and VRS. XGS studios (well, we know of at least one for sure) are already using the ML hardware in the XSX to do texture upscaling at runtime as players move around a scene. Why is this a big deal?
You can then ship low-res textures in the final game build that people buy or download, making the file size MUCH smaller. Not only is the download smaller (and thus faster, and easier on your data caps), but these textures take up vastly less space on the SSD too (1/4 as much or less), so a lot more games can fit on the SSD if this becomes commonly used.

Better still, when streaming from SSD to RAM, the I/O needed for these textures is vastly reduced, which has the effect of multiplying the effective texture I/O by 4x (or more). Game textures make up more than half of modern game I/O needs, so this would make the XSX's effective I/O, already strong, much better than the competition's.

Once streamed into RAM, the textures are still tiny, so you can fit many more unique assets in RAM, which makes the RAM effectively bigger and means less need to stream from the SSD at all. Then some GPU resources are used to run the ML INT ops and infer the higher-res texture, and voila, the result on screen is identical to using high-res textures through the whole pipeline.
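A quick bit of arithmetic shows how the per-texture saving translates into overall effective I/O. The 64 MB texture size and the 60% texture share of streaming traffic are illustrative assumptions (the post above only claims "more than half"):

```python
# Shipping textures at half resolution per axis means 1/4 the texel data.
full_res_texture_mb = 64.0                       # hypothetical 4K texture
shipped_texture_mb  = full_res_texture_mb / 4    # quarter the data to store/stream

# Overall effective I/O multiplier if textures are, say, 60% of traffic:
# texture traffic shrinks 4x, the remaining 40% is unchanged.
texture_share = 0.6
new_traffic   = texture_share / 4 + (1 - texture_share)
effective_io_multiplier = 1 / new_traffic

print(f"shipped texture: {shipped_texture_mb:.0f} MB (was {full_res_texture_mb:.0f} MB)")
print(f"effective overall I/O multiplier: {effective_io_multiplier:.2f}x")
```

Note the nuance: the 4x multiplier applies to texture traffic specifically; with textures at 60% of total traffic, the *overall* effective I/O gain works out to roughly 1.8x, which is still substantial.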
I'd like to see this become the norm. MS has a team doing this already, as I noted, and it is said to have worked "scarily well" according to James Gwertzman.
I'd also like to see it applied to textures in older-gen titles. HDR reconstruction, texture up-resing, DLSS-style image upscaling, RT denoising, animation blending, frame-rate interpolation… all of these should be doable on the XSX.
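As a toy illustration of the frame-rate interpolation idea, here is the simplest possible baseline: a linear blend of two consecutive frames. Real ML interpolators also estimate motion between frames, so treat this as a sketch of the concept, not the technique itself:

```python
# Naive frame interpolation: linearly blend two frames at time t in [0, 1].
# Frames are represented as flat lists of pixel intensities for simplicity.

def interpolate_frame(frame_a, frame_b, t=0.5):
    """Return the frame blended a fraction t of the way from frame_a to frame_b."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

frame_a = [0, 100, 200]   # toy 3-pixel "frame" at time 0
frame_b = [50, 150, 250]  # toy 3-pixel "frame" at time 1
mid = interpolate_frame(frame_a, frame_b)
print(mid)  # [25.0, 125.0, 225.0]
```

A pure blend like this ghosts badly on fast motion, which is exactly why ML approaches that predict per-pixel motion are interesting for doubling frame rates.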