Dr Tommy Thompson is the founder and director of AI and Games, and was formerly an AI researcher and university lecturer in game development. Tommy co-founded Game AI Events CIC in 2024: a non-profit organisation hosting events for game AI professionals.
This article is part of AI Week.
Trying to explain where I sit in the space of artificial intelligence for video games is never easy. I’ve been in and around AI for over 20 years, and things have never been more complicated. I used to hypothesise in research papers that AI could have a real impact on the games industry. Sadly, I’ve been proven right for all the wrong reasons.
I now start every introduction with: “I work in AI. Oh, but not like that.”
Artificial intelligence and video game development are two fields that benefit from each other, while being poorly understood by those outside them. It’s this confusion and ambiguity that has driven much of my work at AI and Games over the past decade: advocating for interesting innovations, educating on emerging best practice, and holding us to account.
We have seen tremendous growth in useful, practical and meaningful adoption of AI in the games industry in the past 15–20 years. But of course, the headlines have focused on just about everything else.
The boom (and bust?)
Since GPT-4’s launch in 2023, our news feeds have been filled with all sorts of pontificating about the “unrealised opportunities” generative AI presents. AI will be the great equaliser, a tool to make games faster and cheaper. Games will react to users in more immersive and engaging ways, crafting bespoke storylines, with characters that make the universal personal, the finite infinite.
You can immediately see how this appeals to an industry obsessed with cutting costs and increasing recurrent user spending. These sunny uplands are but a handful of prompts away, while the very real problems with much of generative AI are swept under the rug: violations of intellectual property rights, inconsistent and often fabricated outputs, energy consumption rivalling that of major cities, lawsuits, and the cascades of slop making the internet even worse than it already was.
I suspect you are sick of hearing it. Yet it is inescapable. New AI models dominate the news, while investors ask studios and publishers about their plans for “embracing” AI. More recently, the DRAM shortage brought about by the rush to build AI data centres has driven up costs and delayed GPUs, and pretty soon consoles will bear the brunt of it as well. Does anyone think that the new Steam Machine will launch under $1,000? At this point, I’m doubtful.
I can’t and won’t sit here and defend generative AI as an industry: that is, the collection of big tech companies combined with start-ups such as OpenAI and Anthropic that shovel AI to the masses. This is a desperate attempt to make a mishmash of unreliable tools into a profitable commodity. With over half a trillion dollars invested so far, there is little to no reliable path to a return on investment (ROI).
Wearing both my “former AI researcher” and “games industry professional” hats, I can see this work from both sides. Google DeepMind’s Genie 3 – the AI system that recently tanked stock prices of games companies – is a significant innovation in what we call “world models,” a breakthrough thought impossible only a few years back. Yet donning my game developer hat, I can see that it’s largely useless. These models are astronomically expensive, struggle with consistency, and are incredibly limited in their fidelity. There is potential to some degree (see Microsoft’s Muse, which is a much more subdued effort with good intentions), but once again we’re told this will fundamentally change how games are made for the better, by people with little to no grasp of how games are actually built.
Vapid PR statements, corporate greed and desperation, combined with an increasingly vitriolic response on social media, have meant there’s no meaningful discourse on AI. It robs us of having a real discussion about the true breakthroughs, and what a meaningful path looks like for AI in games.
It’s time the games industry regained control of the narrative.
Finding meaning in the noise
We have a long and storied history with AI, dating back to the very first non-player characters (NPCs) and procedural generation techniques in the 1980s. The golden age of “game AI” in the late 1990s and early 2000s brought us games like Quake III Arena, Half-Life, The Sims, Halo 2, and F.E.A.R., which standardised many of the techniques we rely on for NPC design to this day. Meanwhile, in the past 20 years, we’ve learned how machine learning (ML) can be embraced throughout production.
Historically, ML had little impact on gameplay, with only a handful of games such as Creatures and Black & White lingering in the gaming consciousness. But ML has proven far more valuable elsewhere, such as in Xbox Live’s TrueSkill matchmaking algorithms, which launched with Halo 2.
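The idea behind TrueSkill is compact enough to sketch: each player’s skill is modelled as a Gaussian (a mean and an uncertainty), and every match outcome nudges both. The snippet below is a toy two-player, no-draw version of that Bayesian update, not Microsoft’s production system (which handles teams, draws, and skill drift over time); the noise constant is illustrative.

```python
import math

BETA = 4.166  # assumed per-match performance noise (illustrative value)

def normal_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def trueskill_update(winner, loser):
    """Simplified TrueSkill-style update for a two-player, no-draw match.

    Each player is a (mu, sigma) pair: believed skill, and how unsure we are.
    Returns the two updated (mu, sigma) pairs, winner first.
    """
    mu_w, sig_w = winner
    mu_l, sig_l = loser
    c = math.sqrt(2 * BETA**2 + sig_w**2 + sig_l**2)
    t = (mu_w - mu_l) / c
    v = normal_pdf(t) / normal_cdf(t)  # how surprising the win was
    w = v * (v + t)                    # how much to shrink uncertainty
    new_w = (mu_w + (sig_w**2 / c) * v,
             sig_w * math.sqrt(max(1 - (sig_w**2 / c**2) * w, 1e-9)))
    new_l = (mu_l - (sig_l**2 / c) * v,
             sig_l * math.sqrt(max(1 - (sig_l**2 / c**2) * w, 1e-9)))
    return new_w, new_l

# An upset (lower-rated player wins) moves ratings a lot; both players'
# uncertainties shrink because the match told us something about them.
underdog, favourite = (20.0, 8.0), (30.0, 8.0)
underdog, favourite = trueskill_update(underdog, favourite)
```

The practical appeal for matchmaking is that sigma gives the system an honest measure of how little it knows about a new player, which a single Elo number cannot express.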
AI-driven animation was pioneered through Motion Matching, first in Hitman: Absolution, and is now a staple in Unreal Engine 5. We use player analytics across the biggest titles in mobile and AAA. Candy Crush Saga developer King has created cutting-edge ML tools in content generation, automated testing, and player profiling – only for them to be used as a convenient excuse for layoffs a decade after they started building them. Cheat and toxicity detection helps protect our online communities by observing text chat, voice chat, and even gameplay.
Ubisoft uses ML-powered bots to stress test updates to Rainbow Six: Siege and For Honor. EA trains ML bots for QA tasks in Battlefield, NHL 26, and EA Sports FC. Square Enix tests game balancing at massive scales with ML for its RPGs. The Snowdrop engine even has its own ML-powered GPU profiler that supports cross-platform development, fundamentally changing how games are ported to consoles.
AI is not coming to the games industry: we are one of the leading sectors in embracing and applying this technology. So it’s disheartening to see how the AI industry has started dictating policy.
It’s not black and white
We’re at a crucial inflection point in the games industry: the market is volatile, thousands of developers are losing their jobs, and as a generation of talent is phasing out, new talent is struggling to get in.
So what better time for generative AI – a sub-discipline of ML that crafts “content” from statistical inference – to arise? It promises to be a silver bullet that can deliver almost instantly, replacing the need to hire talent, cutting costs on production, saving studios the need to scale up, bringing product to market faster, and reacting to and engaging with players dynamically.
Unsurprisingly, the truth is far more complicated. It’s neither the balm for all of our ills, nor is it (when used properly) a never-ending plagiarism engine. Much like ML before it, generative AI is not inherently evil: it’s about the intent and purpose of how it is designed, built, trained, and deployed. After all, I can train an LLM on my laptop using ethically sourced data if I know what it is I’m trying to do, and fine-tune it to my needs if the problem itself is precise and specific.
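To make that last claim concrete: the basic mechanics of training a small language model on data you control fit in a few lines. The toy below is a character-level bigram model, orders of magnitude simpler than any LLM, but the principle carries over: the model is only ever a statistical summary of the corpus you chose to feed it, which is exactly why sourcing that corpus ethically matters.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count character-pair frequencies in a corpus we hold the rights to."""
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def next_char(model, ch):
    """Most likely character to follow `ch`, according to the training data."""
    if ch not in model:
        return None
    return model[ch].most_common(1)[0][0]

# "Ethically sourced" here simply means text you wrote or licensed yourself.
corpus = "the sims shipped with ai long before the hype"
model = train_bigram_model(corpus)
print(next_char(model, "t"))  # prints 'h': 't' is always followed by 'h' here
```

Real fine-tuning swaps the frequency table for gradient updates on a neural network, but the same constraint holds: a narrow, well-defined problem and a clean dataset are what make a small, locally trained model viable.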
We already have a number of examples of good practice for generative AI in games if you know where to look. We see the likes of EA’s AgentMerge for consolidating redundant JIRA tickets, or Infinity Ward’s multimodal search engine for assets. Upscaling technologies such as DLSS are derived from generative techniques, enabling high-quality ports of Cyberpunk 2077 and Star Wars Outlaws to the Switch 2, or in the case of God of War: Ragnarok, reducing texture sizes on the PlayStation 5. In fact, the dev team behind Mass Effect Legendary Edition used a more primitive version of generative AI to do a first pass at upscaling textures before artists cleaned them up by hand. Meanwhile, Twitch is using LLMs for improved toxicity detection. New studios such as BitPart, Lingotion, Meaning Machine, Raw Power Labs, and Studio Atelico are exploring how to craft new gameplay experiences using ethically developed and fine-tuned generative models that typically run on-device, rather than in the cloud.
“Generative AI will never achieve the same level of quality as a human in any artistic medium”
As much as I agree with the incredibly negative – and justifiably angry – sentiment towards generative AI, we need to get past this two-sided debate, and as an industry start having a more meaningful discussion of where this technology is appropriate, if at all. Generative AI will never achieve the same level of quality as a human in any artistic medium, period. As far as I’m concerned, this is not open to debate. But like other AI approaches, there is value when we recognise the problems our people are trying to solve, approach them with the right intent, and face them without compromising ourselves in the process.
It’s about showing resistance to what is often a desperate attempt to flog generative AI without purpose. At a GDC roundtable in 2024, when discussing the explosion of AI companies promising the Earth, I stated that “I know a hammer in desperate search of a nail when I see one” – a line that later resurfaced in Rez Graham’s fantastic GDC 2025 talk. I said it because I’ve seen all of this before, albeit at a smaller scale.
AI advocates – be they researchers, developers, investors, or business people – are highly prone to overestimating the capabilities of these technologies in real-world scenarios, even with the best of intentions. I would know, because I used to think the same way.
For every gain, where is the loss?
We need to have a more serious discussion across the sector of good practice in AI adoption. This should take the form of better education at developer, publisher, and trade body levels, sharing success stories and failures, and recognising where this tech simply does not work when it faces harsh production reality – a lesson being learned the world over right now in almost every industry.
Plus, for every gain proffered by generative AI, we must consider the losses incurred as a result. What does it say about us as an industry if every artist, designer, writer, voice actor, performer, community manager, and even programmer is worried that they’re going to be replaced by subpar automation?
It’s not going to be an easy process, given many of the big AI players also have significant control over games studios and publishers. But neither can we sit here and have another year of nothingburger PR statements, slop creeping into AAA products (having somehow slipped through review, again), and a chipping away at the collective morale and livelihoods of an industry that just wants to get on with making great games.
“We need to take the narrative back, and start showing a path forward that highlights how we can use AI responsibly”
It’s one of several reasons my colleagues and I founded our own conference in 2024. We need to take the narrative back, and start showing a path forward that highlights how we can use AI responsibly, before it comes back to hurt us – be it through legislation, or in the eyes of consumers.
Almost two years ago, I took to the stage at the London Developer Conference and argued that players are the true litmus test for generative AI’s future in our products, and that failing to show its value to consumers would fuel anti-AI sentiment. I predicted we’d see players rally behind “AI-free” games, much as shoppers seek out organic vegetables. Thus far, we have failed that test spectacularly, and while some in the audience thought I was a bit mad, the indie scene has proven me right. That negative sentiment cuts across demographics, and will only do more harm than good in the long run.
Ultimately, we collectively need to be working harder to distinguish the good from the bad, to educate and communicate where and how we use these technologies, and how to do so without compromising ourselves in the long run. But equally, we need to take a long, hard look at the situation we’ve got ourselves into as an industry, if generative AI is being offered as the solution.