The only benefit I can think of is that most existing solutions are custom or part of an engine (like Unreal); a standard one like this that works across hardware could see wide adoption from studios that may not have the resources to roll their own? Not sure tbh
The tech where the resolution isn't native 4K at all but looks like it is, and they can use those resources for other things. Is that part of ML? Hopefully that tech is ready and being used for all the big ones like Starfield, Avowed, Fable and so on.
Because that could do wonders for these big open detailed worlds, which I'm sure they want to look the best they can, but also not have to worry about only having 30fps anymore.
DLSS uses ML, supposedly Microsoft has talked about upscaling that uses ML but I don't think we know much about it. Wouldn't count on it for Starfield, that's for sure.
It's worth noting that NVIDIA's cards have significantly more resources for this sort of thing than the XSX does.
Hmmm I see.
And on the subject of ML, this has nothing to do with RDNA2, does it?
Yes, but FSR doesn't use machine learning. It's also an upscaler, but unlike reconstruction methods like DLSS, it isn't constructing missing pixel info when it upscales. It just scales the image and then cleans it up using edge enhancement and adaptive contrast. PS5 can use FSR, but the performance ceiling is theoretically 2x, compared to around 4x for ML super-resolution.
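To make the "just scales it and then cleans the image up" distinction concrete, here's a minimal NumPy sketch of a spatial upscaler: a bilinear resize followed by an unsharp-mask sharpen. This is purely illustrative of the class of technique (FSR itself uses its own Lanczos-based EASU filter plus RCAS sharpening, not this exact math); note there is no learned model and no reconstruction of missing detail, which is the key difference from DLSS.

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Naive bilinear upscale of a 2D grayscale image by an integer factor."""
    h, w = img.shape
    H, W = h * scale, w * scale
    ys = (np.arange(H) + 0.5) / scale - 0.5
    xs = (np.arange(W) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def sharpen(img, amount=0.5):
    """Unsharp mask: boost each pixel's difference from a 3x3 box blur.
    This only exaggerates edges already present; it cannot add detail."""
    pad = np.pad(img, 1, mode="edge")
    blur = sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

# Usage: render at 1080p-equivalent, then upscale + sharpen for output.
low = np.random.default_rng(0).random((4, 4))
out = sharpen(bilinear_upscale(low, 2))
```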
Their "full RDNA2" news made it sound like something in addition to RDNA2, but I'm not sure.
Correct. It is stuff MS added on top of what AMD offered them with RDNA2. MS has discussed it a few times now (DF interview, HotChips conference). As @pg2g noted too, Nvidia cards have "dedicated" ML inference hw whereas XSX merely has inference acceleration hw.
Ok I see. I wonder when we're gonna start seeing it though, even if it isn't as good as Nvidia's.
I'm guessing they want to not only get the tech in place but, more importantly, get it into the GDK in a super easy-to-use way so it becomes ubiquitous on the platform.
Yeah, the CUs in Series X|S support half-precision FP operations or something like that, which is necessary for tensor calculations like matmul, convolve, etc. But RDNA2 GPUs do not have dedicated coprocessors like Nvidia's Tensor cores. DLSS 1.x used to run on shader cores, but since 2.0 they moved to dedicated Tensor cores.
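For a sense of what "half-precision FP operations for tensor calculations" means in practice, here's a toy sketch: one dense-layer matmul evaluated at FP32 and again at FP16. The float16 path stands in for what half-precision shader ALUs would compute; this is an illustration, not how DLSS or any console upscaler is actually implemented, and real upscalers run large stacks of such matmuls/convolutions per frame.

```python
import numpy as np

# Toy "inference" step: one dense layer, two precisions.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 64)).astype(np.float32)   # input activations
w = rng.standard_normal((64, 32)).astype(np.float32)  # layer weights

y32 = x @ w  # full-precision reference

# Same matmul with everything cast down to half precision.
y16 = (x.astype(np.float16) @ w.astype(np.float16)).astype(np.float32)

# Half precision loses some accuracy, but for perceptual tasks like
# image reconstruction the error is usually tolerable, and hw can run
# FP16 math at roughly twice the FP32 rate.
print("max abs error:", np.max(np.abs(y32 - y16)))
```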
Suppose you are willing to play the game at 1440p instead of 4K so that you can reach your desired frame rate. FSR will give you a better IQ compared to 1440p in this case, but its shortcoming is that it only works on edges. That's where it falls short of DLSS.
There are many differences between ML/AI upscaling and stuff like FSR.
But when it comes to the end result: FSR only works on edges, like anti-aliasing does. Whereas DLSS works on every pixel and the whole frame is upscaled (not just the edges).
FSR produces a result which is better than the resolution it's been upscaled "from", but closer to it.
DLSS produces a result which is closer to the resolution it's been scaled "up to", and sometimes even better than that.
But then FSR is available on last-gen consoles, while DLSS requires at least an RTX-branded card. So FSR is justified in its own case.
DLSS gives results similar to downscaling 8K to 4K.
ML is possible on PS5 or any other graphics card even without int8 or int4 support.
It's just that the lower the precision the ML can work with, the better the performance per unit of hardware.
If an ML inference workload runs on int8 then it can also run on FP16; it will just be about half as fast.
Between Series X and PS5… Series X could do twice the ML calculations using int8, and four times using int4.
X|S will use INT operations for most ML inference workloads, not FP. Nvidia uses INT8 for inference with DLSS 2.x, which X|S have (they also have INT4, an even lower precision). The distinction is that X|S do not have dedicated hw for ML. They do have hw acceleration, but it runs on the same CU resources as other rendering tasks, so there is contention there. On Nvidia cards, there are dedicated compute resources for ML workloads.
Wow, really impressive showing on both consoles. SX is nigh indistinguishable from PC aside from the resolution.
Asobo did an incredible job all around.