Are DLSS and ML upscaling becoming redundant?

OK, hear me out. Over the last couple of years we have been amazed by DLSS and the performance benefits of using it. However, I have recently come across a few things that make me wonder if the ML component is really that important in the actual upscaling part. I was watching a video about DLSS, and the presenter mentioned that DLSS has both an ML component and a temporal reconstruction component, and that the temporal reconstruction part does the vast majority of the work. I was also watching some videos comparing FSR, DLSS and TSR (Temporal Super Resolution) upscaling in Ghostwire. For the most part TSR was just as good as DLSS, and in some areas better, without the need for machine learning.

I was wondering why AMD didn't jump on ML in a bigger way and add tensor-like cores to do it, and instead went with what was a pretty poor software upscaler in FSR 1.0. With FSR 2.0 coming out, now being a temporal reconstruction model, and the far better results associated with it, does this mean the use of ML for upscaling is going to die away? If the results with FSR 2.0 and TSR are even 90% as good as DLSS, why would devs invest in the DLSS side of it?

DLSS and FSR 2.0 both require basically the same input data (colour, depth and motion vectors, plus the camera jitter), so if you support one you can easily support the other. That's why AMD claims it only takes 2-3 days to integrate FSR 2.0 into titles that already have DLSS.
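To make that concrete, here's a rough sketch of the shared contract. All the names below are placeholders rather than actual SDK types; the point is just that one set of per-frame inputs can feed DLSS, FSR 2.0, or an engine TSR pass.

```cpp
#include <cstdint>

using TextureHandle = std::uint64_t;  // stand-in for a D3D12/Vulkan resource

// The per-frame data a temporal upscaler consumes, whichever one it is.
struct UpscalerInputs {
    TextureHandle colour;          // low-resolution, jittered colour buffer
    TextureHandle depth;           // depth buffer
    TextureHandle motionVectors;   // per-pixel motion vectors
    float jitterX, jitterY;        // sub-pixel camera jitter for this frame
    std::uint32_t renderWidth, renderHeight;   // internal resolution
    std::uint32_t outputWidth, outputHeight;   // display resolution
    bool resetHistory;             // e.g. after a camera cut
};

enum class Upscaler { DLSS, FSR2, TSR };

// In a real integration this would forward the same inputs to whichever
// SDK or engine plugin is active; here it is only a stub showing that the
// contract is identical either way.
void dispatchUpscale(Upscaler which, const UpscalerInputs& in) {
    switch (which) {
        case Upscaler::DLSS: /* NGX evaluate call would go here */ break;
        case Upscaler::FSR2: /* FidelityFX dispatch would go here */ break;
        case Upscaler::TSR:  /* engine TSR pass would go here */ break;
    }
    (void)in;
}
```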

With the AI component you get better IQ in the finer details, as shown by the Ghostwire Tokyo DF video. I could be wrong, but I also think it helps reduce artifacts when the input resolution is very low, which is why you can see Death Stranding run acceptably from just 240p. I don't know if TSR or FSR could produce similar quality from that little input data.

Additionally, there's tech like the recent DLDSR, which is like the inverse of DLSS: it supersamples above the output resolution and downscales back to it, resulting in very good image quality at acceptable frame rates.
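For anyone curious about the numbers: the DLDSR factors Nvidia exposes (1.78x and 2.25x) apply to the total pixel count, so the per-axis scale is the square root of the factor. A quick sketch of the maths:

```cpp
#include <cmath>
#include <cstdio>

// DLDSR factors are quoted against total pixel count, so each axis scales
// by sqrt(factor).
void printDldsrResolution(int outW, int outH, double factor) {
    double axisScale = std::sqrt(factor);
    int renderW = static_cast<int>(std::lround(outW * axisScale));
    int renderH = static_cast<int>(std::lround(outH * axisScale));
    std::printf("%dx%d output, %.2fx DLDSR -> render at %dx%d\n",
                outW, outH, factor, renderW, renderH);
}

int main() {
    printDldsrResolution(2560, 1440, 1.78);  // ~3415x1921
    printDldsrResolution(2560, 1440, 2.25);  // 3840x2160
}
```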

These techniques could probably be done without AI, yes, but the ML is just an extra layer of computation that doesn't seem to have much of a downside.


I'm thinking that devs, by their nature, are not only slow to adopt new technology (due in part to it requiring engine work and the long dev cycles for games nowadays) but also look for the easiest of multiple choices. There is obviously an element of time and cost associated with the ML training required for DLSS, and now Intel's GPUs and the Xbox's ML capabilities. With the addition of excellent temporal upscaling solutions like FSR 2.0 and TSR, and with the improvements that will come to those algorithms to reduce the inconsistencies you mentioned, why would a majority of devs invest in what is a small payoff? Remember, no consoles or AMD cards support DLSS, and neither does the majority of Nvidia cards in the wild. So why spend the effort for the 5% of gamers with DLSS, for them to maybe get a 5% improvement over temporal upscaling, which everyone would have access to?

But no dev has to do the machine learning themselves; that's Nvidia's part.

Temporal antialiasing has lots of artifacts, and you have two options to clean them up: invent algorithms that do it by hand, which is time- and labour-consuming, or train a neural network to clean it up, which is what DLSS does. The latter is cheaper to develop, but needs hardware support to run efficiently.
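For reference, here's roughly what the hand-written heuristic route looks like for a single pixel: reproject history along the motion vector, clamp it to the current frame's local neighbourhood so stale data can't ghost, then blend. This is only the textbook version, not what any particular upscaler actually ships:

```cpp
#include <algorithm>

struct Colour { float r, g, b; };

// Clamp the reprojected history to the current frame's neighbourhood range
// so that out-of-date colour (ghosting) gets pulled back towards plausible
// values. Real TAA/TSR/FSR 2 passes add much more: variance clipping,
// depth/velocity rejection, sharpening, etc.
static Colour clampToNeighbourhood(Colour history, Colour nMin, Colour nMax) {
    return { std::clamp(history.r, nMin.r, nMax.r),
             std::clamp(history.g, nMin.g, nMax.g),
             std::clamp(history.b, nMin.b, nMax.b) };
}

Colour resolveTemporalPixel(Colour current,       // this frame's jittered sample
                            Colour reprojected,   // history fetched along the motion vector
                            Colour neighbourMin,  // min of the 3x3 neighbourhood this frame
                            Colour neighbourMax,  // max of the 3x3 neighbourhood this frame
                            float blend = 0.1f)   // weight given to the new sample
{
    Colour history = clampToNeighbourhood(reprojected, neighbourMin, neighbourMax);
    return { history.r + (current.r - history.r) * blend,
             history.g + (current.g - history.g) * blend,
             history.b + (current.b - history.b) * blend };
}
```

The artifacts come from cases where these heuristics guess wrong (disocclusion, transparency, thin geometry), which is exactly the part a trained network can handle better than hand-tuned rules.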


There is no ML training required for DLSS or XeSS on the developer side; that hasn't been the case since DLSS 2.0, I believe. It's got a plug-in for UE, just like FSR 2.0 will. Custom integration doesn't take much more effort than existing custom TAA solutions, since, again, they rely on the same motion vector inputs to work.

Having a widely supported, open-source option like FSR is great, and AMD should be commended. But supporting native/better solutions for your customers (i.e. your gamers with Nvidia RTX cards) will only lead to more sales, and I can guarantee the ROI is worth dropping in a plugin or spending a few days on integration. Plus you can be spotlighted or marketed by Nvidia for using it, which always helps smaller teams.

AI/ML will only continue to grow, and while this gen of consoles isn't necessarily well equipped for it, you can probably expect the next gen to have dedicated silicon for ML, like the tensor cores on RTX cards.


If the timings AMD gave for FSR 2 hold true, it will actually be computationally cheaper than DLSS.

If that's the case and it's still inferior, then in theory they have headroom to spend more compute on improving it and closing the quality gap even further.

Not sure about that. Tensor cores occupy a relatively big area. That's not a problem for Nvidia and their enormous dies (and even so, on Turing it meant the ALU count didn't increase by all that much).

AMD went with just RPM (rapid packed math) for int types because MS asked them to. MS's defence for choosing this is that it's enough to support ML workloads without sacrificing die area. (They made the argument that dedicated ML hardware versus general compute becomes a vertex/pixel shaders versus unified shaders situation.)
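For context, the kind of work those int-type packed-math paths are aimed at is low-precision multiply-accumulate, the core op of inference. Here's a scalar stand-in for a 4x int8 dot product; this just shows the arithmetic, whereas on the GPU it would be a single packed instruction running on the ordinary shader ALUs rather than on a dedicated tensor unit:

```cpp
#include <cstdint>

// Four int8 multiply-accumulates packed into one 32-bit word per operand,
// accumulated into a 32-bit integer. This is the building block of
// low-precision neural-network inference.
std::int32_t dot4_i8(std::uint32_t packedA, std::uint32_t packedB, std::int32_t acc) {
    for (int lane = 0; lane < 4; ++lane) {
        auto a = static_cast<std::int8_t>((packedA >> (8 * lane)) & 0xFF);
        auto b = static_cast<std::int8_t>((packedB >> (8 * lane)) & 0xFF);
        acc += static_cast<std::int32_t>(a) * static_cast<std::int32_t>(b);
    }
    return acc;
}
```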

MS is also the reason AMD's RT is handled more by compute than Nvidia's. Die space was a big reason for that, but it also offers more flexibility with data formats (remember that the decision was made years ago, without knowing how the future would go, and knowing that they were planning on moving the geometry pipeline entirely to compute, which could make some new geometry formats practical to use), and they didn't want another Xenos tessellator situation.