Basically, it's this. There have been two main paradigms of rendering: rasterization and ray tracing (along with its more thorough cousin, path tracing).
In simple terms, rasterization is a heavily stripped-down version of rendering. Unlike real life, it isn't shooting light rays and calculating bounces, so it struggles with reflections and dynamic changes in lighting. That's why it needs approximations and workarounds (light probes being the most common) to get a fairly realistic look.
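To make "no rays involved" concrete, here's a toy sketch of the core rasterization step: projecting 3D geometry onto the 2D screen with a perspective divide, after which the triangle's pixels just get filled in. The function name and focal-length setup are illustrative, not from any real API.

```python
# Minimal sketch of rasterization's core step: project 3D vertices to
# 2D screen space, then fill the triangle. No rays are ever shot, which
# is why lighting has to come from approximations layered on top.

def project(point, focal_length=1.0):
    """Perspective-project a 3D point (x, y, z) onto the image plane."""
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

# A triangle 5 units in front of the camera...
triangle = [(0.0, 1.0, 5.0), (-1.0, -1.0, 5.0), (1.0, -1.0, 5.0)]
screen = [project(v) for v in triangle]
# ...lands at these 2D screen coordinates:
print(screen)  # [(0.0, 0.2), (-0.2, -0.2), (0.2, -0.2)]
```

Note there's nothing in here about light at all; shading is a separate pass, which is exactly where the probe/baking tricks come in.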
But rasterization isn't built for accuracy; it's built for speed. This is why games and real-time applications have always been built on rasterization and will continue to be. You don't care as much about accuracy as you do about hitting 30 fps, or 60, or 120.
On the other side, there is ray tracing, where, as the name suggests, a bunch of light rays are actually shot into the scene and bounced around, which is much closer to what happens in real life when you cast light onto something. This makes it more realistic: shadowed areas aren't completely dark, because bounced light fills them in to a degree. But because it's aimed at accuracy, it's incredibly time consuming. This is why ray tracing is the primary form of rendering in VFX, archviz and other industries that want accurate light bounces (also referred to as offline rendering in those industries), while real-time applications stick to rasterization (online rendering).
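The "shoot a ray, see what it hits" step can be sketched in a few lines. This is the crudest possible version, one ray against one sphere; a real renderer does this per pixel, recurses on bounces, and shades the hit point. All names here are illustrative.

```python
import math

# Toy ray tracing step: shoot a single ray and test it against a sphere.
# Real renderers fire millions of these and recurse on bounces, which is
# where both the realism and the enormous cost come from.

def ray_sphere_hit(origin, direction, center, radius):
    """Return distance t to the nearest hit, or None if the ray misses.

    Assumes `direction` is normalized (so the quadratic's a term is 1).
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# Camera at the origin looking down -z, unit sphere centered at (0, 0, -3):
t = ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -3), 1.0)
print(t)  # 2.0 -- the ray hits the near side of the sphere
```

Multiply that by every pixel, every bounce, and every light sample and it's clear why this lived in offline rendering for so long.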
Now, why the sudden interest in ray tracing for games?
Real-time global illumination has been one of the main goals of game rendering, and ray tracing is one of the most prominent ways to achieve it.
Because ray tracing has always prioritized accuracy over performance, games historically just avoided it. Then a few years ago, Nvidia released its RTX cards, which have dedicated hardware specifically for ray tracing tasks, boosting performance enough to do the tracing in real time. But because the hardware can only afford to calculate so many rays, Nvidia also had to develop several algorithms to counteract the problems that come with sparse ray tracing, the most common being noise.
Despite the marketing, RTX doesn't kick out rasterization in favor of ray tracing. Instead, it's a hybrid: rasterization still does the majority of the rendering work, and the RT cores add raytraced reflections and some lighting perks on top to make the game look more realistic. Because so few rays are traced per pixel, the result is very noisy, so Nvidia developed clever denoising algorithms to clean up the mess those sparse rays create.
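To see why denoising is even possible, here's the crudest version of the idea: average each noisy pixel with its neighbors (a box blur over a 1D "scanline" of brightness values). The real denoisers are vastly smarter, feeding in normals, depth, and previous frames, but the principle of borrowing information from nearby pixels is the same. Everything here is a made-up illustration, not Nvidia's actual algorithm.

```python
# Crudest-possible denoiser sketch: replace each pixel with the mean of
# its neighborhood. Speckle (alternating bright/dark pixels from too few
# ray samples) gets flattened toward the true average.

def box_denoise(pixels, radius=1):
    """Box-filter a 1D list of brightness values."""
    out = []
    for i in range(len(pixels)):
        lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
        window = pixels[lo:hi]
        out.append(sum(window) / len(window))
    return out

noisy = [0.0, 1.0, 0.0, 1.0, 0.0]  # speckled: should really be ~0.4 flat
smooth = box_denoise(noisy)
# smooth is now roughly [0.5, 0.33, 0.67, 0.33, 0.5] -- far less speckled
```

The obvious cost is blur, which is exactly why production denoisers are so much more sophisticated than this.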
Ray tracing was bound to come to games. It's the natural evolution of developers wanting to leave behind the excruciating optimization work mentioned above: light probes, baking, and all the other stuff rasterization has always required. Rasterization techniques will still have their place, but with ray tracing, real-time global illumination is easier to do, and more dynamic.
Now with all that said, The Medium is going to use DXR (DirectX Raytracing), Microsoft's ray tracing API built into DirectX, much like OptiX is Nvidia's own ray tracing API.