How much of the RDNA 2 menu are upgraded games feasting on?

Glad to see DF reiterating what was said in that interview so more people see it. It will indeed be interesting to see what AMD has cooking and how it could potentially piggyback off of MS’s ML tech.

They have two things with similar names. One is just a smart sharpening filter, and it’s already available. But they also said they are working with MS on super resolution using DirectML, and once ML super resolution is ready, they will provide an easy-to-use implementation in their FidelityFX suite.

Edit: just saw the above video. So MS is working independently here. Strange how it was worded in the RDNA 2 presser, then.


You are not remembering incorrectly.

It is machine learning, but it isn’t tied to the GPU’s int4/int8 ML capabilities.

They have trained the video processor.

This is the best explanation.


If it were MS tech, you can bet they would be the ones branding it with their own label. It’s one of MS’s fave things to do! :wink:


I think you are still wrong here. What you described still requires computations to be run in real time somewhere on the console. ML trained the algorithm, but something has to run inference with that algorithm on the actual game’s colors.

It’s entirely inside the video display controller.


Can you find me some info on this? The other video didn’t say that.

Nah. I have mentioned it before. All the computation happens in the video processor for Auto HDR. It takes the SDR signal and converts it to an HDR signal in real time.

It is ML/AI trained, but the GPU doesn’t do any work here.

Anyways, looks like we have reached an impasse here.

Maybe someday we both can meet Jason in person at the same time and get it clarified.

Can you find more info? The other video didn’t offer the important details. There has to be something doing a computation to apply the algorithm. I’m happy to be wrong, but I want a concrete understanding.

I’ll describe this the way I understood these folks to explain it, but I’m not speaking from experience or direct knowledge. It’s much like hardware compression or hardware-based audio effects (think of digital effects pedals): hardware built to perform a certain task in real time.

Data goes into a circuit and comes out the other end looking a little bit different with minimal processing required, if any.

Non-real-time ML is used to come up with very efficient algorithms that translate what comes from the video buffer into what goes to the TV. These can then be encoded into hardware so the calculation is effectively instant, with little to no general-purpose processing involved. There are practical reasons why you’d still want a bit of processing, but with the metal doing most of the work it should be minimal.
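As a purely illustrative sketch (nothing Microsoft or AMD has published; `trained_model` and the curve inside it are invented), this is the general shape of “bake the ML result into a table the hardware can apply instantly”:

```python
import numpy as np

# Hypothetical sketch: evaluate an offline-trained mapping once for
# every possible input and bake it into a lookup table (LUT), the kind
# of structure that's trivial to etch into fixed-function hardware.

def trained_model(value):
    # Stand-in for whatever the offline ML training produced;
    # the curve here is invented purely for illustration.
    return value ** 0.8

# Offline step: tabulate the model for all 256 possible 8-bit inputs.
lut = np.array([trained_model(v / 255.0) * 255.0 for v in range(256)],
               dtype=np.uint8)

# "Runtime" step: applying the mapping is now a pure table lookup,
# with no general-purpose processing left to do.
frame = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)
translated = lut[frame]  # NumPy looks up every pixel at once
```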

You may have seen the GDC talk from one of Ubisoft’s developers about how they are using non-real-time ML to create highly efficient, incredibly detailed physics interactions. Doing such detailed physics in real time would take way too much processing. What the machine learning does is look at all of the variables and inputs and simplify them down to an easy-to-calculate algorithm that can be run in real time. In that case the game still performs the real-time calculations across the CPU and GPU, because there are still many variables and possible outputs. But if we had a physics problem that was common and used in many games, we could actually create hardware that performs an instant calculation on that specific physics problem, as long as we constrain it to known inputs and known outputs. We have seen this in the past.
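I haven’t seen Ubisoft’s exact technique in code, so here’s a generic sketch of the idea under my own assumptions: `expensive_sim` stands in for a slow, detailed solver, and the “learning” is just a polynomial fit.

```python
import numpy as np

def expensive_sim(x):
    # Placeholder for a costly, detailed physics computation
    # (imagine an iterative solver far too slow for a frame budget).
    return np.sin(3.0 * x) * np.exp(-x)

# Offline "training": sample the expensive solver densely and fit a
# cheap surrogate that approximates it over the known input range.
samples = np.linspace(0.0, 2.0, 1000)
coeffs = np.polyfit(samples, expensive_sim(samples), deg=7)
surrogate = np.poly1d(coeffs)

# Runtime: the game evaluates only the surrogate, a handful of
# multiply-adds that the CPU or GPU can run every frame.
x = 0.73
print(expensive_sim(x), surrogate(x))  # nearly identical outputs
```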

In this case, where you’ve trained an AI on a ton of footage for only one purpose, you can conceivably create hardware that takes the results of the AI and codifies its algorithm into silicon. It takes an image as an input, and the output is an image.

I used to have a Sound Blaster card back in the day that had MP3 hardware compression. Because the inputs and outputs were known, Creative could create an algorithm and implement it in hardware, where it could be applied to any audio with no need for any processing.

With Auto HDR, you have a similarly well-understood input and output, and all it’s doing is taking frame buffer data and making some modifications to it as the data passes through the hardware.
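Microsoft hasn’t published Auto HDR’s actual transform, so take this as a minimal sketch of the shape of the operation only; the gamma and expansion curve below are my own invented stand-ins. SDR frame in, slightly modified HDR frame out, with all the “smarts” precomputed:

```python
import numpy as np

# Offline: precompute a 256-entry table mapping 8-bit SDR code values
# to 10-bit HDR code values. The 2.2 gamma and 0.9 "expansion" below
# are assumptions for illustration, not Auto HDR's real curve.
sdr = np.arange(256) / 255.0
linear = sdr ** 2.2                          # undo SDR gamma (assumed)
expanded = linear ** 0.9                     # invented highlight boost
hdr_lut = np.clip(expanded * 1023.0, 0.0, 1023.0).astype(np.uint16)

# "Hardware" pass: every pixel of every frame becomes one table lookup
# as the frame buffer data streams through.
def sdr_frame_to_hdr(frame_rgb8):
    return hdr_lut[frame_rgb8]               # same shape, now uint16

frame = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
hdr_frame = sdr_frame_to_hdr(frame)          # 10-bit values, 0..1023
```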

I’m sure I’m going over some obvious details that you’re already familiar with; I’m just covering the basics so that anyone else reading this knows what I’m talking about.

The obvious concern would be flexibility: since we’re dealing with hardware, you can’t release a patch that changes the hardware. Some aspects of the hardware could still allow for flexibility, perhaps through certain exposed parameters. This is why, in a solution like this, I would imagine they want a little bit of processing (which, if done in under a microsecond, isn’t a big deal).
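To illustrate what I mean by flexibility through parameters (the names `gain` and `knee` are invented for this example): the pipeline’s shape is frozen, but a couple of register-like values could still be patched after the fact.

```python
# Sketch of a fixed pipeline with tweakable, register-like parameters.
# The curve's shape is hard-wired; only gain and knee can be changed.
def fixed_tone_curve(value, gain=1.0, knee=0.8):
    v = value * gain
    if v <= knee:
        return v                        # linear below the knee
    return knee + (v - knee) * 0.5      # compressed above it

print(fixed_tone_curve(0.9))            # shipped defaults
print(fixed_tone_curve(0.9, gain=1.2))  # post-launch "patch"
```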

So when people claim that Auto HDR is performed in hardware, this is what I take it to mean. I originally thought these games were being emulated, and the extra horsepower on the machine was using real-time ML to perform real-time HDR calculations, as well as frame rate boosts through some sort of temporal frame-boost algorithm. Now I understand it to be AI trained offline to come up with algorithms that, in the case of HDR, are used to create hardware that facilitates it.

But in regards to how they’re doubling the frame rate on older games, or how the Xbox Series S is upscaling to 4K… I have to wonder if they’re doing the same thing here. If the Series S is using a hardware upscaler, that would be pretty nice.
