Games Analysis |OT| Time To Argue About Pixels And Frames!

The only benefit I can think of is that most existing solutions are custom or part of an engine (like Unreal); a standard one like this that works across hardware could see wide adoption from studios that may not have the resources to roll their own? Not sure tbh

1 Like

The tech where the resolution isn’t native 4K at all but looks like it is, so they can spend those resources on other things. Is that part of ML? Hopefully that tech is ready and being used for all the big ones like Starfield, Avowed, Fable and so on.

Because that could do wonders for these big, detailed open worlds, which I’m sure they want to look the best they can without having to settle for 30fps anymore.

DLSS uses ML, supposedly Microsoft has talked about upscaling that uses ML but I don’t think we know much about it. Wouldn’t count on it for Starfield, that’s for sure.

It’s worth noting that NVIDIA’s cards have significantly more resources for this sort of thing than XSX does.

1 Like

Hmmm I see.

And on the subject of ML, this has nothing to do with RDNA2, does it?

Yes, but FSR doesn’t use machine learning. It’s also an upscaler, but unlike reconstruction methods like DLSS, it isn’t constructing missing pixel info when it upscales. It just scales the image and then cleans it up with edge-adaptive upsampling and contrast-adaptive sharpening. PS5 can use FSR, but the performance ceiling is theoretically around 2x, compared to roughly 4x for ML-based super resolution.
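To make that distinction concrete, here’s a toy “scale, then sharpen” sketch in Python/numpy. To be clear, this is not AMD’s actual EASU/RCAS code, just an illustration of why a spatial upscaler can’t create pixel information that wasn’t in the source frame:

```python
# Toy "scale, then sharpen" pipeline in the spirit of a spatial upscaler.
# NOT AMD's EASU/RCAS -- just shows that no new pixel info appears.
import numpy as np

def naive_upscale(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Nearest-neighbour upscale; a real upscaler would use an edge-adaptive filter."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def sharpen(img: np.ndarray, amount: float = 0.5) -> np.ndarray:
    """Unsharp mask: boost local contrast so edges read as crisper."""
    # 3x3 box blur built from shifted copies (avoids a SciPy dependency)
    blur = sum(np.roll(np.roll(img, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

lo_res = np.random.rand(720, 1280)        # stand-in for the low-res frame
hi_res = sharpen(naive_upscale(lo_res))   # "4K" output: same info, crisper edges
```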

1 Like

Their “full RDNA2” news made it sound like something in addition to RDNA2 but not sure.

Correct. It is stuff MS added on top of what AMD offered them with RDNA2. MS has discussed it a few times now (DF interview, HotChips conference). As @pg2g noted too, Nvidia cards have ‘dedicated’ ML inference hw whereas XSX merely has inference acceleration hw.

1 Like

Ok I see. I wonder when we’re gonna start seeing it though, even if it isn’t as good as Nvidia.

1 Like

I’m guessing they want to not only get the tech in place but more importantly get it into the GDK in a super easy-to-use way so it becomes ubiquitous on the platform.

1 Like

Yeah, the CUs in Series X|S support half-precision FP operations or something like that, which is needed for tensor calculations like matmul, convolution, etc. But RDNA2 GPUs do not have dedicated coprocessors like Nvidia’s Tensor cores. The DLSS 1.9 variant ran on shader cores, but DLSS 2.0 moved back to dedicated Tensor cores.
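For anyone curious what half precision actually buys you, here’s a quick numpy sketch (my own toy example, nothing to do with actual DLSS kernels). FP16 halves the storage per value, which is why hardware with double-rate FP16 can push twice the ops, at the cost of some accuracy:

```python
import numpy as np

a = np.random.rand(256, 256).astype(np.float32)
b = np.random.rand(256, 256).astype(np.float32)

full = a @ b                                                      # FP32 matmul
half = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

# Inference tolerates this error well, which is why low precision is viable
print("max abs error from FP16:", np.abs(full - half).max())
```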

2 Likes

Suppose you are willing to play the game at 1440p instead of 4K so that you can reach your desired frame rate. FSR will give you better IQ than native 1440p in this case, but its shortcoming is that it only works on edges. That’s where it falls short of DLSS.

There are many differences between ML/AI upscaling and stuff like FSR.

But when it comes to the end result: FSR only works on edges, like anti-aliasing does. Whereas DLSS works on every pixel, and the whole frame is upscaled (not just the edges).

FSR produces a result which is better than the resolution it’s been upscaled “from”, but closer to it.

DLSS produces a result which is closer to the resolution it’s been scaled “up to”, and sometimes even better than that.

But then FSR is available on last-gen consoles, while DLSS requires at least an RTX-branded card. So FSR is justified in its own right.
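A hand-wavy way to picture the “edges only” point, as a numpy sketch (illustrative only; FSR’s real edge handling is a lot more involved than a gradient threshold):

```python
import numpy as np

def edge_mask(img: np.ndarray, thresh: float = 0.1) -> np.ndarray:
    """Mark pixels with a strong local gradient as 'edge' pixels."""
    gy, gx = np.gradient(img)
    return (np.hypot(gx, gy) > thresh).astype(img.dtype)

frame = np.random.rand(720, 1280)
mask = edge_mask(frame)
# Enhancement lands only on edge pixels; flat regions pass through,
# so detail there stays at whatever the source resolution provided.
enhanced = np.clip(frame + 0.3 * mask * (frame - 0.5), 0.0, 1.0)
```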

1 Like

DLSS gives results similar to downscaling 8K to 4K.

1 Like

ML is possible on PS5 or any other graphics card without int8 or int4 support.

It’s just that the lower the precision the model can work with, the better the performance you get out of the same hardware.

If an ML inference workload runs on INT8, it can also run on FP16; it will just have half the throughput.

Between Series X and PS5… Series X could do twice the ML calculations using INT8, and four times using INT4.
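The napkin math behind those multipliers, using the published 12.15 TFLOPS FP32 figure for Series X (the 2x/4x/8x packed-math scaling per precision halving is the assumption here):

```python
FP32_TFLOPS = 12.15  # Series X published FP32 rate

# Each halving of precision doubles peak rate, assuming packed math support
rates = {
    "FP32": FP32_TFLOPS * 1,   # 12.15 TFLOPS
    "FP16": FP32_TFLOPS * 2,   # ~24.3 TFLOPS
    "INT8": FP32_TFLOPS * 4,   # ~48.6 TOPS (MS quotes 49)
    "INT4": FP32_TFLOPS * 8,   # ~97.2 TOPS (MS quotes 97)
}
for precision, peak in rates.items():
    print(f"{precision}: ~{peak:.1f} trillion ops/sec peak")
```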

2 Likes

X|S will use INT operations for most ML inference workloads, not FP. Nvidia uses INT8 for inference with DLSS 2.x, which X|S also have (they have INT4 too, for even lower precision). The distinction is that X|S do not have dedicated hw for ML. They do have hw acceleration, but it runs on the same CU resources as other rendering tasks, so there is contention. On Nvidia cards, there are dedicated compute resources for ML workloads.
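For reference, this is roughly what INT8 inference means in practice: quantize FP weights/activations to int8, do the heavy math in integers, then scale back. A minimal sketch, not how DLSS or DirectML actually implement it:

```python
import numpy as np

def quantize(a: np.ndarray):
    """Map float values onto the int8 range with a per-tensor scale."""
    scale = np.abs(a).max() / 127.0
    return np.round(a / scale).astype(np.int8), scale

w = np.random.randn(64, 64).astype(np.float32)  # FP32 weights
x = np.random.randn(64).astype(np.float32)      # FP32 activations

wq, ws = quantize(w)
xq, xs = quantize(x)

# Accumulate in int32 (int8 dot products overflow instantly), then dequantize
y_quant = (wq.astype(np.int32) @ xq.astype(np.int32)).astype(np.float32) * (ws * xs)
y_exact = w @ x
print("max abs quantization error:", np.abs(y_quant - y_exact).max())
```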

1 Like

Flight Simulator | Xbox Series S|X vs PC | Graphics Comparison & Framerate Test

6 Likes

Wow, really impressive showing on both consoles. SX is nigh indistinguishable from PC aside from the resolution.

Asobo did an incredible job all around.

2 Likes