DLSS Image Quality: they have been testing it wrong!
Now that the transformer DLSS model is out, I used DLSS Swapper to test it on Talos Principle 2, which is the game I have been playing lately and which is notorious for its blurry Unreal Engine 5 presentation. The transformer model made everything look much clearer in motion, unlike the CNN model. The latter felt like motion blur was applied whenever I moved the camera, only to clear up a few milliseconds after I stopped the mouse movement.
Image quality under motion is something I have never seen discussed or demonstrated properly in upscaler comparison videos, even in the really well-made ones by Digital Foundry. Then I had this idea… Let me introduce you to what I call “the Stabilized Motion Test”.
The idea is to stabilize the video in such a way that the camera orientation remains unchanged. This way, the pixels of each frame align with those of the next one, making the comparison obvious.
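To make the idea concrete, here is a minimal sketch of how such per-frame alignment can work in principle. This is not the tool I used for the footage; it is an illustrative example that estimates a purely translational camera shift between frames with phase correlation (an FFT-based technique) and shifts each frame back so it lines up with the first one. Real camera rotation needs a proper stabilizer, but the alignment principle is the same.

```python
import numpy as np

def estimate_shift(ref, cur):
    """Estimate the integer (dy, dx) shift to apply to `cur` so it
    aligns with `ref`, via phase correlation: the normalized FFT
    cross-power spectrum peaks at the relative translation."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(cur))
    cross /= np.abs(cross) + 1e-12           # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Wrap shifts larger than half the frame into negative offsets
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def stabilize(frames):
    """Align every frame (2D grayscale array) to the first frame."""
    ref = frames[0]
    out = [ref]
    for f in frames[1:]:
        dy, dx = estimate_shift(ref, f)
        out.append(np.roll(f, (dy, dx), axis=(0, 1)))
    return out
```

Once the frames are aligned like this, static geometry occupies the same pixels from frame to frame, so any remaining blur or shimmer must come from the upscaler, not from the camera motion itself.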
So, I captured some footage with both DLSS models, processed it, and uploaded it (along with the raw footage) to this Google Drive link. You can also see the comparison in the 1080p YouTube video below, but the quality will suffer from compression.
You can see the CNN model getting blurry the moment the camera starts moving (this is when the white dot starts moving). The statue and the tree branches lose resolution. There is also more visual noise and less temporal stability on the stone road. The transformer model, on the other hand, remains crisp. If you pay close attention, you can see it losing the ability to reconstruct the smaller tree branches, but what it can reconstruct remains sharp. The moment the camera stops, the CNN model gets progressively sharper.
Now, you might argue: if it takes all this effort to demonstrate the difference, is it even observable while actually gaming? The answer is that your eyes and brain will tell you the difference. The transformer model does not tire you: even though you cannot see at pixel level while moving the camera like that, your brain can actually reconstruct the geometry edges much better, without giving you a headache.
I also took some screenshots when standing still, when moving the camera, and when strafing, and uploaded them to imgsli: 1080p look, 1440p look, 1440p strafe. As usual, please open the images at native resolution.
As you can see, the DLSS transformer model has much better clarity in motion. The CNN model struggles on the pebble stone road: it gets blurry and unstable. The tree branches and the statue in the center lose definition, and it is like seeing the actual native low resolution behind the upscaler.
I hope this topic gets more attention from the review community. Disocclusion fizzle, trails, and artifacts are important, but motion clarity is how clear the game actually feels while playing. We do not move the camera at a slow pace in “real” life.