AMD FSR: Xbox Series S secret sauce?

Not secret sauce, as FSR is available across AMD's whole range of GPU architectures, including Polaris, Vega, RDNA, and RDNA 2. But I assume the quality will be best on RDNA 2.

No, it’s not. It’s more like a new way of doing anti-aliasing, maybe.

Exactly what my question was. Devs who are tech gurus can use their own solution; devs who aren’t would be using a middleware solution anyway, and those all have better ‘super resolution’ options built in already. I think the idea here is to have an open-source baseline so that it improves significantly over time. But boy, does it have a long way to go.

This is open source so I fully expect PS5 and every other piece of modern HW to support this.

Nah. DirectML will be; this AMD technique seems to be a cheap answer to DLSS, but without the deep learning it will never be the same thing. FSR seems to be an evolution of upscaling and checkerboarding.

Absolutely. No secret sauce at all.

Though my wish would be that Xbox implements FSR at a system level, so BC games in particular could make use of it, especially those that had to reduce resolution for FPS Boost. Needing no patch would be a nice feat.


Do you think they’ll combine FSR and DirectML for upscaling next gen games or these two techniques are completely different things used for different purposes?

I would hope they develop a solution similar to what DLSS does. For me, FSR and ML-based SR are two different technologies with the same objective, though. As said, I see FSR as a quick and dirty solution that could very well be a system-level option for games, so they wouldn’t need a patch of any sort (all BC games, for instance). On the other hand, an ML-based super resolution would require game-related learning. That method I would only use for real next-gen games. As a result, there is still time to develop it, as we have not seen a real next-gen title yet (on any platform).


Why do some of you assume this FSR solution from AMD is not machine learning? There were patents floating around about this or a similar topic from AMD, and I’m pretty sure they used ML techniques.

I think you nailed it.

It’ll be great for smaller to mid-sized devs.

While the quality isn’t exactly there, it’s not too bad for the gains you get. They did claim that it’ll be highly optimized on their own cards and it’s safe to say that’ll be the case with the new consoles.

Despite the quality drop resulting in a bit blurrier image, I think this is exciting for those that want to hit a high framerate target like 60fps or 120fps with raytracing, high detail models, and a ton of effects (particles and so on).

Dirt 5 accomplished 120fps by aggressively changing LOD on models, reducing the shadow draw distance, and, from what I gather, reducing the quality of the texture filtering (could be wrong, I’m getting this from an article that simply called the textures blurrier), all while running at a lower resolution. Oh, and removing a lot of trackside detail.

If a game can keep all of these things but have a “blurry” 4K output (which ultimately is still better than a 1080p output, it’s just not a sharp 4K) to get 120fps, that would be great!

Dirt 5 on console would be a great test for this, actually.

The other thing that would be interesting to see is whether one could use this to super-sample a high target output for 1080p TVs. That is, internally render at 900p, upscale with this to 1200p, then output on a 1080p screen. Would this result in a clean-looking image?
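The supersampling idea above is easy to sketch at the shape level. The snippet below is purely illustrative: FSR's actual filter is unpublished, so `resize_nearest` is a hypothetical stand-in, and the tiny arrays (90/120/108 rows) stand in for 900p/1200p/1080p only to keep the demo fast.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    # Nearest-neighbour resize: just index-mapping, enough to show
    # the shape of the render -> upscale -> downsample pipeline.
    ys = np.arange(out_h) * img.shape[0] // out_h
    xs = np.arange(out_w) * img.shape[1] // out_w
    return img[ys][:, xs]

frame = np.random.default_rng(1).random((90, 90))  # stand-in "900p" render
up = resize_nearest(frame, 120, 120)               # "1200p" upscale
out = resize_nearest(up, 108, 108)                 # supersampled "1080p" output

print(frame.shape, up.shape, out.shape)
```

Whether the final image actually looks clean depends entirely on the upscaler's quality, not on this plumbing; the point is only that the output stage sees more samples than the internal render produced.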

A lot of uncharted territory here, it’ll be fascinating to see what developers do and their experiments with this, ML upscaling, and so on.

Look, Returnal rendered at 1080p, upscaled to 1440p, and then used checkerboarding on that to hit 4K. A lot of creative and unique solutions are already implemented in released products. Ratchet & Clank apparently has a performance mode with RT, who’s to say they aren’t implementing AMD’s solution? Raytracing at high framerates but a slightly blurry 1440p output?

I’ve said this from the start. They should have kept it at 1080p and left room for the FPS goodness.

According to AMD, FidelityFX Super Resolution is a spatial upscaling technique, which generates a “super resolution” image from every input frame. In other words, it does not rely on history buffers or motion vectors. Neither does it require any per-game training.


FSR differs from DLSS in other tangible ways as well. Nvidia’s DLSS relies on machine learning and temporal upsampling to drive its performance-boosting feature. The new Temporal Super Resolution coming to Epic’s Unreal Engine 5 also revolves around temporal upsampling (hence the name). FidelityFX Super Resolution utilizes spatial upsampling instead. Herkelman didn’t go into specific technical details about how FSR works under the hood, but typically, spatial upsampling has a GPU create a frame at a lower resolution, then renders it onscreen at a higher resolution, using interpolation techniques to fill in the blank pixels.
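To make "interpolation techniques to fill in the blank pixels" concrete: below is a minimal bilinear spatial upscaler in NumPy. This is not FSR's algorithm (which hasn't been published yet), just a generic example of a spatial upscaler that uses nothing but the current frame — no motion vectors, no history buffers, no per-game training.

```python
import numpy as np

def bilinear_upscale(img, out_h, out_w):
    """Spatial upscaler: every output pixel is interpolated from the
    four nearest pixels of the *current* low-resolution frame only."""
    in_h, in_w = img.shape
    # Map each output pixel centre back into input coordinates.
    ys = (np.arange(out_h) + 0.5) * in_h / out_h - 0.5
    xs = (np.arange(out_w) + 0.5) * in_w / out_w - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, in_h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, in_w - 1)
    y1 = np.clip(y0 + 1, 0, in_h - 1)
    x1 = np.clip(x0 + 1, 0, in_w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]   # vertical blend weights
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]   # horizontal blend weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# Upscale a tiny 2x2 "frame" to 4x4.
frame = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
hi = bilinear_upscale(frame, 4, 4)
print(hi.shape)  # (4, 4)
```

Plain bilinear like this looks soft, which is why reports describe FSR as a more sophisticated spatial filter (plus sharpening) rather than naive interpolation.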


That’s not a quote from AMD.

No, those quotes are from publications that asked AMD for details and reported what AMD said to them in response. That’s the reason I provided the links to the sources.

Exactly, this was not said by AMD. Do these publications understand the topic completely? I have serious doubts.

Well, we will see the details on June 22 on GitHub.

It is 100% not machine learning, because it can run on older hardware, whereas DLSS needs specific tensor cores to do the ML ops. It’s more like a souped-up traditional reconstruction technique.

Machine learning has very little to do with tensor cores. Those hardware blocks just accelerate matrix multiplications so that it is practical to run DLSS in real time at usable resolutions. You could do ML ops on an old Amiga.
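The point that an "ML op" is essentially a matrix multiplication any general-purpose hardware can execute (tensor cores only make it fast) fits in a few lines. Here is a toy fully connected layer in plain NumPy; the sizes and weights are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))   # "learned" weights (toy size)
b = np.zeros(4)                   # "learned" bias

def dense_relu(x, W, b):
    # One fully connected layer: y = max(0, xW + b).
    # The matmul is the "ML op"; tensor cores merely accelerate it.
    return np.maximum(x @ W + b, 0.0)

x = rng.standard_normal((1, 8))   # a toy input vector
y = dense_relu(x, W, b)
print(y.shape)  # (1, 4)
```

Nothing here needs special hardware; the question is only whether a given chip can do enough of these multiplications per frame to be useful at game resolutions.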

But the old Amiga can’t do a lot of it; the throughput would be so minute it’s practically not even worth it. RDNA 2 GPUs are capable of INT4 and INT8 ops, which are very much needed for machine learning. If FSR is machine learning, then the results are rather underwhelming.

INT4 and INT8 are not a requirement, but they are very beneficial for performance in machine learning and AI tasks (both training and inference). That’s the reason those data types exist in GPUs now. The internet is full of articles about it.


ML SR doesn’t necessarily need game-specific learning. In fact, there should be nothing stopping it from being used in BC, since the system would have access to the frame buffers. Alex noted this on a DF Weekly recently. It could also be leveraged for upres’ing textures (not at runtime for BC games, but rather on the disc before the game even runs).

FSR looks really rough atm, but maybe, since it is open source, it will eventually get improved a lot.