What:
AMD will officially unveil their new line of Radeon RX 6000 video cards featuring the RDNA2 architecture. The 25-minute YouTube premiere will showcase games from third-party developers.
The big one: +50% perf/watt over RDNA1 and +30% bump in clocks.
All-new RDNA2 architecture with no leftover GCN.
Infinity Cache. Still 7nm. Of course PCI-E 4.0.
RDNA2 should be competitive with Ampere, maybe better in rasterization, with Turing levels (or better) of RT.
RDNA 2 Architecture
DirectX Ray Tracing
DirectStorage API Support
Sampler Feedback Streaming
Mesh Shaders
Variable Rate Shading
DLSS alternative is Super Resolution, but won’t be ready for launch
Love that they are creating tons of ready-to-use implementations of these features, including RT implementations that fall back on other techniques to reduce performance impact and increase visual quality.
Will be a huge help on consoles.
That they are also working with Microsoft on DirectML super resolution is great news.
It’s actually more than 50%. That was the goal for RDNA 2, but the 6800 and 6800 XT represent a 54% improvement, and I think I saw either 64 or 65% better perf/watt claimed for the 6900 XT.
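To make the perf/watt comparison concrete, here is a minimal sketch of the arithmetic. The performance and power figures are placeholders chosen for illustration, not AMD's measured numbers:

```python
# Perf/watt is just relative performance divided by board power.
# All numbers below are hypothetical, for illustration only.
def perf_per_watt(relative_perf: float, power_watts: float) -> float:
    return relative_perf / power_watts

rdna1 = perf_per_watt(1.0, 225)   # baseline: 1.0x perf at 225 W (placeholder)
rdna2 = perf_per_watt(2.0, 300)   # hypothetical: 2.0x perf at 300 W

improvement = rdna2 / rdna1 - 1
print(f"perf/watt improvement: {improvement:.0%}")  # → 50%
```

Note that a card can draw more total power and still show a big perf/watt gain, as long as performance scales faster than power, which is why the quoted percentages vary between the 6800, 6800 XT, and 6900 XT.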
Yeah, I'm still going to be using an Nvidia GPU in my PC build: AMD processor with Nvidia GPU. I am very interested to see the performance of the next-generation consoles with RDNA2, the PS5 and Series X.
AMD really has closed the gap (for the moment) to Nvidia.
Let’s see if Nvidia just reacts with another “RTX 2xxx Super/Ti” move.
I assume availability will be much better for AMD than it was for Nvidia too.
I’m actually very surprised by what AMD was able to achieve compared to last gen. Hell, even if the 6900 is a few percentage points off of their benchmarks, they’re still going to be in 3090 territory for 66% of the cost and likely with far better availability. People underestimate how much production scale matters, and AMD has increased capacity considerably to meet rising demand beyond consumer hardware (supercomputing and data centers, especially).
Apparently the ray tracing performance isn’t as good as on Nvidia’s cards, from what is coming out now. Not much of a surprise given the dedicated RT cores on the 3080 and such. It will be interesting to see what happens in the long run there…
It’s interesting that the Series X is clocked like the base 6800 card while the PS5 is clocked like the XT cards (both the 6800 XT and 6900 XT). I’m speaking of the base clock, not any boost.
Is there a reason why the XSX is clocked at 1.8 GHz and not 2 GHz? Does it come down to cost? Cooling? Power usage?
The performance is basically the same as a 3070’s; some will favour the 3070 while others will favour the 6800. It will be interesting to see if the VRAM makes a difference later down the road.
With the way the consoles are designed, VRAM and RAM may become less important on PC, while I/O and SSD speed become more important.
Indeed, but the 3070 will likely have more longevity regarding ray tracing, and then there’s DLSS.
I’m just not seeing the extra $80 of value in the 6800, but hey, I could be wrong.