One of the key features Nvidia announced with the release of its 10-series graphics cards in 2016 was Simultaneous Multi-Projection, or SMP. At its heart, SMP is a technique that reduces image stretching on multi-monitor configurations, but it also has advantages for VR gaming.
What does simultaneous multi-projection do?
When playing games across multiple monitors, most gamers angle their screens so that it's easier to see the outer edges. In a triple-monitor setup, this generally means the two outside monitors are angled inwards. Game engines, however, are unaware of the angle of each screen.
This lack of awareness means that games treat a multi-monitor configuration as one single, large, flat monitor. As a result, the image appears increasingly stretched the further you get from the centre of the configuration. SMP makes the game aware of the angle of each monitor and adjusts the output so that it appears far less distorted, matching what you would actually see if you were physically present in the game world.
It can help to visualise each monitor as a transparent window into the 3D game world. If the layout of your screens matches the layout the game is expecting, there is minimal distortion. But if you angled one of those windows and the view through it stayed the same, it would look wrong and distorted; what should really happen is that the view out of the window changes to match the new angle. This distortion is most obvious when the camera is panning from side to side or when straight lines cross from one monitor to the next.
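To put a rough number on that stretching, here is a small illustrative sketch, not anything from Nvidia's drivers, that assumes a hypothetical 135-degree view spread across three monitors treated as one flat plane. It shows how much wider the same object appears on screen as it moves towards the edge of the image.

```python
import math

# Illustrative sketch, not Nvidia code: how strongly a single flat projection
# stretches content toward the edges of a very wide view. Assumes a
# hypothetical 135-degree horizontal field of view spread across three
# monitors that the game treats as one flat surface.
def horizontal_stretch(angle_rad):
    # A flat projection maps a viewing angle a to tan(a) on screen, so the
    # local stretch factor is the derivative of tan, i.e. 1 / cos^2(a).
    return 1.0 / math.cos(angle_rad) ** 2

for deg in (0.0, 22.5, 45.0, 67.5):
    factor = horizontal_stretch(math.radians(deg))
    print(f"{deg:5.1f} degrees from centre -> {factor:.1f}x wider than at the centre")
```

At the centre the factor is 1.0x, but at the outer edge of the example view it climbs to nearly 7x, which is exactly the smearing you see on angled side monitors without SMP.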
SMP does this by creating multiple viewports and angling them to match each monitor. Creating these extra viewports carries minimal performance cost compared with a traditional three-screen setup, despite the larger effective field of view, because almost the entire scene is still rendered in a single pass: all geometry, lighting, and shading work is performed once rather than the three times that would normally be required. The only step that has to be repeated for each viewport is rasterization, the process of mapping the scene onto the individual pixels of each monitor.
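The sketch below is only a toy illustration of that idea, not the VRWorks API: a couple of vertices are transformed once, and the only per-monitor work is re-aiming each viewport and projecting. The 30-degree side-monitor angle is an assumed example value.

```python
import numpy as np

# Illustrative sketch, not the VRWorks API: one set of view-space geometry is
# re-projected through several viewports, each rotated to match a monitor.
# The 30-degree side-monitor angle is an assumed example value.
def rotation_y(angle_deg):
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

# The expensive work happens once: the scene's vertices end up in view space.
vertices_view = np.array([[0.0, 0.0, -5.0],
                          [3.0, 1.0, -4.0]])        # a toy "scene", processed once

# The cheap per-viewport step: re-aim each viewport and do the projection.
monitor_angles = {"left": 30.0, "centre": 0.0, "right": -30.0}
for name, angle in monitor_angles.items():
    rotated = vertices_view @ rotation_y(angle).T   # rotate into the monitor's frame
    projected = rotated[:, :2] / -rotated[:, 2:3]   # simple perspective divide
    print(name, np.round(projected, 2))
```

The loop at the end is the only work that scales with the number of monitors, which is why the extra viewports cost so little.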
How does it apply to VR?
SMP can also be used in VR through two variants called Single Pass Stereo and Lens Matched Shading. Single Pass Stereo creates a second viewport, but instead of angling it differently, it shifts the viewport along the x-axis so that each viewport lines up with the position of one of your eyes. This gives the same speed-up as Simultaneous Multi-Projection on monitors, because most of the graphically intensive work, such as geometry processing and lighting, is performed once rather than twice.
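As a rough illustration of that idea, again not the actual VRWorks interface, the sketch below shares all of the per-vertex work between the two eyes and applies only a small horizontal offset per eye; the 64 mm eye separation is just a typical example value.

```python
import numpy as np

# Illustrative sketch, not the VRWorks API: the two eye views share all of the
# per-vertex work and differ only by a small horizontal offset. The 0.064 m
# eye separation below is a typical example value, not taken from any headset.
ipd = 0.064

# Expensive work done once: the vertex is already transformed into view space.
vertex_view = np.array([0.2, 0.1, -2.0])

# Cheap per-eye step: shift along the x-axis by half the eye separation,
# then apply the perspective divide for that eye.
for eye, offset in (("left", ipd / 2), ("right", -ipd / 2)):
    shifted = vertex_view + np.array([offset, 0.0, 0.0])
    projected = shifted[:2] / -shifted[2]
    print(eye, np.round(projected, 3))
```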
Due to the shape of the display and the lenses that VR headsets rely on, a fair number of pixels are rendered that are never actually shown on screen. Lens Matched Shading uses the same multi-viewport concept to split the rendered view into a grid of 4, 9, or 16 viewports. These viewports are shaped so as to shrink the area of the scene that has to be rendered but never reaches the display. By cutting down on the number of wasted pixels, Lens Matched Shading further reduces the performance cost of VR.
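The sketch below gives a back-of-the-envelope feel for those savings. The buffer size, the circular visible region, and the octagonal approximation of the trimmed area are all assumptions made for the example, not figures from any real headset or from Nvidia.

```python
import math

# Illustrative sketch, not the VRWorks API: a rough estimate of how many
# rendered pixels in a square per-eye buffer fall outside the roughly circular
# area the headset lens can actually show. The buffer size, the circular
# visible region, and the octagonal trim are all assumptions for the example.
width = height = 1200                         # hypothetical per-eye render target
square_area = width * height

# Visible area approximated as the largest circle that fits in the buffer.
radius = width / 2
visible_circle = math.pi * radius ** 2

# With four tilted viewports per eye, the trimmed region is roughly a regular
# octagon wrapped around that circle (area = 8 * r^2 * tan(pi / 8)).
octagon = 8 * radius ** 2 * math.tan(math.pi / 8)

print(f"pixels wasted without trimming: {100 * (1 - visible_circle / square_area):.0f}%")
print(f"pixels wasted with the octagonal trim: {100 * (1 - visible_circle / octagon):.0f}%")
print(f"pixels shaded with the trim, relative to before: {100 * octagon / square_area:.0f}%")
```

Under these example assumptions, roughly a fifth of a square buffer would be shaded and then thrown away, while the trimmed viewports waste only a few percent.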
Lack of implementation
Unfortunately, SMP is a feature that has to be built into a game by its developer. Because only a small fraction of players use triple-monitor configurations, and because of concerns about the competitive advantage an ultrawide field of view might give, the vast majority of developers do not implement SMP in their games. In fact, only a single game, the racing simulator iRacing, is known to have implemented the feature.
Implementation in VR games is more widespread, thanks to the inclusion of, and ongoing updates to, both Single Pass Stereo and Lens Matched Shading in Nvidia's VRWorks package.