The extent of the backlash to Nvidia’s unveiling of its DLSS 5 technology earlier this week has been quite extraordinary. The games industry is no stranger to angry responses to various announcements, but it’s not always good at recognising when such a response is simply a knee-jerk overreaction and when it’s a much deeper and more justified rejection.
In this case, Nvidia and a handful of advocates for the technology have tried to paint the response as the former – a knee-jerk reaction that will dissipate once people get used to the tech. That claim, however, is hard to square with the breadth of the criticism of DLSS 5, which is coming not only from gamers aghast at the (frankly horrible) examples the company showed off, but also from a large cross-section of developers who see the tech as a major step in the wrong direction.
Both of those critiques are quite fair. The examples of DLSS 5 which Nvidia chose to show off were terrible, very clearly subverting the artistic intentions of the original creators by applying filters that notably changed characters’ features and presentation. The resulting visuals resembled the characters in the ads for AI slop pornography that have recently infested social media. That these images went past countless pairs of eyes at Nvidia without anyone realising that people were going to hate them is honestly astounding.
I’ve seen the argument made that the problem was in the presentation of the technology – that applying this filtered look to existing game characters was a horrible idea that invited backlash. Sean Hollister over at The Verge made the most convincing version of that argument, and I do agree that the response would have been less aggressive if Nvidia had shown off a demo of something original tuned to use DLSS 5 in an impressive way, rather than yassifying existing games.
However, the real problem is more deep-rooted than the company’s poor choice of examples for the technology. Developers have been every bit as firm as consumers in their rejection of DLSS 5, but often on quite different grounds. Developers certainly don’t like the idea of a layer of AI making visual changes to character models – but their criticisms also point out that the examples shown of DLSS 5 aren’t just slopping up the characters’ faces; they’re actually changing, and in some cases completely breaking, the lighting model of the whole game scene.
The focus on facial changes overlooks the fact that improved, naturalistic lighting is actually the big promise Nvidia is making with DLSS 5 – and people defending the technology tend to argue that developers will have the control required to stop it from messing with things like characters’ faces, while retaining the benefits to scene lighting.
That would be good if it were true; but in almost all of the examples Nvidia showed, the lighting is pretty terrible. Smoother and more naturalistic in some regards, perhaps; but compared to the original scenes without DLSS 5 enabled, the lighting is generally brighter and more washed out, scenes are flat and lacking in contrast, and characters often appear to be lit entirely differently to the scenery around them.
Now, maybe this is also down to the awful decision to show off the technology on existing games rather than in a custom-made demo built to utilise it correctly. It feels, however, like a symptom of a deeper problem with the whole technological approach DLSS 5 represents.
A deep learning model trained on a massive amount of image data is a pretty useful thing in many ways. The most obvious use is for upsampling low-resolution images in ways that predict new detail to add to the higher-resolution image – it’s not always perfect, but generally speaking it does a pretty incredible job, and it has become an invaluable part of the toolkit for modern games.
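To make the core idea concrete, here’s a deliberately toy sketch in Python (using numpy – nothing to do with Nvidia’s actual implementation) of what “learned” upscaling means: fit a model on pairs of low- and high-resolution signals, then use it to predict the detail that naive upscaling simply can’t recover.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training data: high-res 1-D "signals" and their 2x-downsampled versions.
def make_signal(n=32):
    t = np.linspace(0, 1, n)
    return np.sin(2 * np.pi * rng.uniform(1, 4) * t + rng.uniform(0, 2 * np.pi))

hi = np.stack([make_signal() for _ in range(500)])  # (500, 32) high-res targets
lo = hi[:, ::2]                                     # (500, 16) low-res inputs

# "Train" an upscaler: a least-squares linear map from low-res to high-res.
W, *_ = np.linalg.lstsq(lo, hi, rcond=None)

# Compare against naive nearest-neighbour upscaling on a fresh signal.
test_hi = make_signal()
test_lo = test_hi[::2]
naive = np.repeat(test_lo, 2)
learned = test_lo @ W

print(f"nearest-neighbour error: {np.abs(naive - test_hi).mean():.3f}")
print(f"learned upscaler error:  {np.abs(learned - test_hi).mean():.3f}")
```

The learned map wins because it has seen what signals in this family look like at full resolution – which is both the strength of the approach and, as we’ll see below, its weakness.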
Extending that functionality to actually change the underlying image – altering lighting or the rendering of certain aspects of characters – is quite a different story. It may work very well when applied to specific features that traditional rendering technologies struggle with (hair, for example, seems like an obvious thing to focus on), but giving it free rein over scene or character lighting runs into a major problem. These models work by averaging over reasonably close matches in their training data, which makes them very sensitive to what those data sets actually contain – and it means they will always tend to push scenes towards the “average”: flatter, brighter, less distinctive.
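That averaging effect is easy to demonstrate. Here’s another toy numpy sketch (again, purely illustrative – not a claim about how DLSS 5 is trained): when the training data mixes many distinctive lighting styles and the input doesn’t pin down which one is correct, the error-minimising prediction is the mean of all of them – a flat, uniform wash that matches none of the originals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "scenes": 1-D brightness profiles, each lit from a different direction.
n_scenes, n_pixels = 1000, 64
x = np.linspace(0, 2 * np.pi, n_pixels)
phase = rng.uniform(0, 2 * np.pi, size=(n_scenes, 1))
targets = 0.5 + 0.5 * np.cos(x + phase)  # distinctive, high-contrast lighting

# If the input is ambiguous about which style is correct, the prediction that
# minimises average error is simply the mean over all consistent targets.
prediction = targets.mean(axis=0)

per_scene_contrast = (targets.max(axis=1) - targets.min(axis=1)).mean()
predicted_contrast = prediction.max() - prediction.min()

print(f"average per-scene contrast: {per_scene_contrast:.2f}")  # ~1.00
print(f"predicted contrast:         {predicted_contrast:.2f}")  # near zero: a flat mid-grey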
That’s exactly what we see in the images Nvidia showed, and it’s a major part of why so many developers are being quite open in their loathing for the technology. Building great lighting models that allow games to be atmospheric and distinctive has been one of the most important challenges in video game graphics for more than 30 years – not least because, just as humans are very good at detecting unnatural aspects in facial models (the uncanny valley effect), we’re also very good at realising when something is off with a lighting model.
That Nvidia, a company built around creating the chipsets and tools that allowed for so much of that progress in game lighting, is now pushing an AI tool that seems to miss that mark by a mile as the next evolution of its gaming offering… Well, you can see why many developers, regardless of their stance on AI more generally, view that as a slap in the face for their efforts.
Of course, a major part of what’s happening here is related to the changes at Nvidia. The company has historically been a creator of 3D chipsets for gaming, but that hasn’t been Nvidia’s main business for some time. It has grown to be one of the world’s largest companies off the back of the AI boom, and is one of the biggest players in the suspiciously circular financial investments that pass between the major companies in that field. Gaming is a very small part of Nvidia’s business; the lion’s share is completely dependent on growing the demand for AI products and services.
It’s no surprise that a company primarily focused on AI would insist that the future of game graphics lies in AI generation tools and technologies. It’s also no surprise that it would react very strongly to the seeming rejection of those technologies by consumers and developers alike.
Nvidia CEO Jensen Huang came out swinging, telling a press Q&A audience that gamers are “completely wrong” about DLSS 5. That’s not a great way to defuse a consumer backlash against your new tech, but of course, from Huang’s perspective gamers are only a very marginal set of consumers.
His real customers are AI data centres ordering thousands of Blackwell AI chips costing tens of thousands of dollars each. DLSS 5 is irrelevant to that business – but the implication that consumers might really be firming up a dislike and rejection of generative AI technology in media is very relevant indeed. Viewed from that perspective, it’s no wonder he wasn’t striking a conciliatory tone with consumers who didn’t like what they saw.
It’s worth noting that this isn’t the first misfire for Nvidia’s attempts to introduce more of these types of models into games graphics pipelines. While resolution upscaling has generally been a success, frame generation – increasing a game’s framerate by creating AI frames to interpolate between “real” frames – has been far less popular. It creates a strange feeling of input lag and often looks really bad, especially in fast-moving scenes.
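The input lag isn’t a mystery, either – it falls out of simple arithmetic. Here’s an illustrative back-of-the-envelope sketch (assumed figures, not measured DLSS numbers): to display a frame between real frames N and N+1, the pipeline has to finish rendering N+1 first, so the newest real frame is held back while the synthetic one is shown.

```python
# Simplified latency model for interpolated frame generation.
# All figures are illustrative assumptions, not measured DLSS numbers.

real_fps = 30                     # assumed native render rate
real_frame_ms = 1000 / real_fps   # ~33.3 ms between real frames

displayed_fps = real_fps * 2      # interpolation doubles on-screen framerate
extra_latency_ms = real_frame_ms  # newest real frame is ~one interval stale

print(f"{displayed_fps} fps on screen, but ~{extra_latency_ms:.0f} ms of added input lag")
# The game *looks* like 60fps while *feeling* slower than native 30fps.
```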
Frame generation can, at least, be turned off; it’s not entirely clear to what extent consumers are expected to have that level of control over DLSS 5 features, since Huang insists that they’re integrated into the game at a much deeper level than just being a post-processing filter. If they can be turned off, it’s easy to see this feature simply becoming like the horrible motion smoothing setting that’s enabled by default on most televisions (and which notably elicits a similar response to DLSS 5 in that it’s disliked by savvy consumers and absolutely loathed by actual directors and creatives in the film and TV industries).
In a sense, the real tragedy here is that this is the best Nvidia has to offer as a next step forward for gaming graphics. We’ve been hoping for a major step up for years now; ray tracing has had far less impact than most people hoped, and graphical improvements have become slow and incremental over the past decade or so.
For all the advances being made with the silicon, Nvidia’s insistence that turning that power over to generative AI technologies is the best path forward for game visuals is an idea that feels born out of the company’s strategic priorities rather than out of actual consultation with developers. If this is the major focus for future Nvidia chipsets, we may be stuck on a technological plateau for quite some time.