DLSS, short for Deep Learning Super Sampling, is an Nvidia technology that requires an Nvidia 20 series or newer graphics card with tensor cores. DLSS is designed to increase performance by running the game at a lower resolution than normal, then using a neural network to upscale the image back to the target resolution. The upscaling runs on the tensor cores, which are otherwise unused in the rendering process.
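The render-at-lower-resolution idea can be illustrated with a small sketch. The function name and the 0.5 scale factor are illustrative assumptions, not part of any Nvidia API:

```python
def render_resolution(output_w, output_h, scale):
    """Internal render size before the network upscales back to the output size.

    `scale` is the per-axis fraction of the output resolution the game
    actually renders at (hypothetical value for illustration).
    """
    return int(output_w * scale), int(output_h * scale)

# A 4K output rendered at half resolution per axis:
print(render_resolution(3840, 2160, 0.5))  # (1920, 1080)
```

Rendering at half resolution per axis means only a quarter of the pixels are shaded, which is where the performance gain comes from.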
GPU architecture
A GPU is primarily designed to render graphics for applications such as video games. However, Nvidia's 20 series cards include two additional kinds of cores alongside the standard shader hardware: RT cores, which accelerate ray tracing, and tensor cores, which are designed to perform machine learning tasks.
DLSS
With the original implementation of DLSS, developers had to explicitly enable support for it in their game. Additionally, Nvidia needed to train its neural network for each game on a supercomputer. The training process took a set of lower resolution images and compared them to a single “perfect frame” generated through traditional supersampling methods. The supercomputer then trained the neural network to transform the lower resolution images to match the larger perfect frame. Once training was complete, the resulting network was included in the next graphics driver. This training process had to be repeated for every new game, a design that was only sustainable because so few games implemented DLSS.
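The training loop described above, minimizing the difference between the network's upscaled output and a supersampled "perfect frame", can be sketched in miniature. This is a deliberately toy version: the "network" is a single learnable scalar applied to a nearest-neighbour upscale, and the frames are random stand-ins, not real game captures:

```python
import random

random.seed(0)
# Hypothetical stand-ins for training data: 8 low-res frames of 16 pixels each.
low = [[random.random() for _ in range(16)] for _ in range(8)]
# "Perfect frames": here simply each low-res pixel duplicated, so the ideal
# network output is an exact nearest-neighbour upscale.
target = [[p for p in frame for _ in range(2)] for frame in low]

# One learnable scalar stands in for the network's weights:
# predicted high-res pixel = w * nearest-neighbour-upscaled pixel.
w, lr = 0.1, 0.5
for _ in range(200):
    grad, n = 0.0, 0
    for frame, perfect in zip(low, target):
        up = [p for p in frame for _ in range(2)]
        for u, t in zip(up, perfect):
            grad += 2 * (w * u - t) * u  # d(squared error)/dw
            n += 1
    w -= lr * grad / n  # gradient descent step toward the perfect frames

print(round(w, 3))  # converges to 1.0: output matches the perfect frames
```

Real DLSS training followed the same pattern at vastly larger scale: adjust the network's weights until its upscaled output matches the supersampled reference.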
DLSS 2.0
DLSS 2.0 improved on the process by removing the requirement to train the neural network separately for each game. It also added three quality modes: Performance, Balanced, and Quality. These modes let the user choose how much of a performance boost they wanted and how much of a graphical hit they were willing to take for it. This gave the user far more choice than the single mode of the original DLSS implementation, which users often reported as sacrificing too much quality.
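The three modes differ mainly in how far below the output resolution the game renders. The per-axis scale factors below are commonly reported figures for DLSS 2.0, not values taken from this article, and exact behaviour can vary by title and version:

```python
# Assumed per-axis render scales for the DLSS 2.0 modes (widely reported
# defaults; treated here as illustrative, not authoritative).
MODE_SCALE = {"quality": 2 / 3, "balanced": 0.58, "performance": 0.5}

def internal_resolution(out_w, out_h, mode):
    """Estimate the internal render resolution for a given output and mode."""
    s = MODE_SCALE[mode]
    return round(out_w * s), round(out_h * s)

print(internal_resolution(3840, 2160, "performance"))  # (1920, 1080)
print(internal_resolution(3840, 2160, "quality"))      # (2560, 1440)
```

Performance mode renders the fewest pixels for the biggest frame-rate gain, while Quality mode keeps the internal resolution closer to the output for better image fidelity.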