The question of whether, where, and how to use generative AI in game development is one of the most controversial issues of recent years.
Engaging with the topic has the feeling of pressing your hands against a stove you already know to be scalding hot. There’s no position you can take that won’t attract the ire of those who consider AI to be an ethically and morally bankrupt scam, those who are burning with FOMO terror of being left behind by a genuine technical revolution, or both of the above.
Lewis Packwood, of this parish, noted after Gamescom that the use of AI in various aspects of development is already widespread across the industry, albeit often kept quiet to avoid a furious consumer backlash. As he pointed out, however – and Bryant Francis at Game Developer explored in more depth this week – the question of whether AI actually speeds up development, and to what extent, remains up in the air.
The morality of AI, and of the immense amount of stolen work on which it is trained, is both a personal question for each individual and a much larger question for courts and legislators. The practical question of whether it even works as claimed, though, is an important one for those running businesses, especially those whose FOMO pangs are keeping them awake at night.
In an environment where many studios really don’t want to risk their use of AI becoming public knowledge, however, there’s a stark lack of comparative case studies or emerging best practices – an information blackout in which, I fear, some snake oil salesmen are gleefully setting up shop.
The problem AI purports to assist with is, after all, a truly existential issue for many studios. Lots of companies are struggling with development cycles that are growing out of control – an immense problem given that the industry’s business model generally means you don’t make any money until you launch (Early Access models aside). That’s a hell of a tough thing to handle financially if your development cycles are growing past the five-year mark (and in some cases heading towards the decade line).
Finding any way to wrangle those timeframes back under control is a key focus for a lot of studio heads. It’s only natural for them to be receptive to a technology promising to massively boost productivity across the board for everything a studio does – code, art, animation, sound; you name it, AI companies claim they can speed it up.
The problem is that while it’s clear that AI can be very useful in limited, narrow use cases, as a tool supervised by a human expert, those cases are a long way from the ideal being sold by the AI companies themselves, of autonomous agent tools delivering gigantic productivity boosts.
Consider the programming side of things. Code is arguably the ideal field for generative AI: while it’s a highly skilled, knowledge-driven discipline, many of the tasks involved are inherently repetitive. Much of a coder’s time is spent either repeating patterns from their own prior work or seeking out solutions to problems other people have already tackled.
It’s unsurprising that generative AI, which is essentially a huge pattern matching system that figures out what’s likely to come next based on what it sees before, has many genuine uses here.
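To make that pattern-matching idea concrete, here’s a deliberately toy sketch (my own illustration, nothing like how a production model actually works): a bigram counter that “predicts” the next code token purely from which token most often followed it before. Scale that intuition up by many orders of magnitude and you have, loosely, the mechanism being described.

```python
from collections import Counter, defaultdict

# Toy next-token prediction: count which token tends to follow which
# in some "training" text, then predict the most frequent successor.
# Real models are vastly larger and subtler, but the core idea --
# pattern-match on what came before -- is the same.
training_tokens = (
    "for i in range ( n ) : total += i "
    "for j in range ( m ) : total += j"
).split()

successors: dict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(training_tokens, training_tokens[1:]):
    successors[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequently observed token following `token`."""
    return successors[token].most_common(1)[0][0]

print(predict_next("in"))     # "range" -- repetition makes this easy
print(predict_next("range"))  # "("
```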
A skilled programmer who’s already expert in their field can absolutely use AI judiciously to speed up their output, essentially treating it as a glorified autocomplete that’s just smart enough to be able to save a lot of repetition and boilerplate typing, as well as generating reasonably good function documentation, among other things.
This is how most skilled developers are using AI today. It’s not reliable enough to be let loose to do things on its own, but it can save time and let you iterate faster (especially in the prototype phase, in which some bugs or inefficiencies aren’t a total showstopper), as long as a skilled coder carefully supervises its output.
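As a purely illustrative sketch of that “glorified autocomplete” workflow (the types and function names here are invented, not drawn from any real project or tool): once a developer has written the first case of a repetitive serialisation pattern, an assistant can plausibly complete the mirror-image cases and draft the documentation, while the human stays responsible for reviewing the details.

```python
from dataclasses import dataclass

@dataclass
class PlayerState:
    name: str
    health: int
    position: tuple[float, float]

def serialize_player(state: PlayerState) -> dict:
    """Convert a PlayerState into a JSON-friendly dict."""
    # A developer writes the first field by hand...
    return {
        "name": state.name,
        # ...and an autocomplete-style assistant can reliably fill in
        # the rest, because the pattern is already established.
        "health": state.health,
        "position": list(state.position),
    }

def deserialize_player(data: dict) -> PlayerState:
    """Reconstruct a PlayerState from a serialize_player() dict."""
    # The mirror-image function is almost pure boilerplate -- exactly
    # the repetition described above, and exactly the kind of output
    # that still needs human review (note the tuple/list round trip).
    return PlayerState(
        name=data["name"],
        health=data["health"],
        position=tuple(data["position"]),
    )

if __name__ == "__main__":
    original = PlayerState("Kara", 87, (12.5, -3.0))
    assert deserialize_player(serialize_player(original)) == original
```

The supervision point lands in exactly that kind of detail – the tuple-to-list round trip is the sort of thing an unreviewed completion can quietly get wrong.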
That’s clearly a useful thing. But it’s on an entirely different planet from the promises being made by AI companies to try to convince studios to make AI central to their workflows. Agentic AI being given free rein over entire codebases and completing tasks that used to need a human in the loop does not seem like a realistic paradigm for game development (or, honestly, for any kind of development beyond hobbyist projects, which are the only field in which this kind of “vibe coding” has any actual value).
So, while skilled coders might increase their productivity by a moderate amount (how much is debatable; some recent research suggests that while coders feel their output increasing, their measurable productivity gains are actually negative, thanks to the time spent squabbling with the AI’s weirder and less helpful impulses), the actual bottleneck many studios face remains in place – they still need to hire skilled, experienced coders, who are always expensive and often hard to find.
The same essentially holds true for artwork and other creative fields. Generative AI might find genuine uses in prototyping, speeding up that part of the process – which is where most studios seem to be experimenting with it now, churning out rough, AI-generated assets and placeholders.
This isn’t nothing, since a lot of projects sit in a development hell loop of endless prototyping for years, and being able to jazz up the quality of your demos and prototypes can help a lot in seeking funding or partnerships. However, the consensus seems to be that the assets produced by AI just aren’t consistent enough or of a high enough quality to be included in shipped games.
Again, hobbyists (“vibe artists”, I guess?) are making things a bit more confusing. They turn out individual pieces of high-quality-looking art, which is enough to convince non-experts that AI is capable of replacing actual 2D and 3D artists.
But for a studio trying to ship a high-quality game, it’s just not acceptable if your character’s number of fingers or teeth fluctuates wildly from image to image, if a generated animation oops-forgets the existence of arm bones for a couple of frames in a walk cycle, or if your generated 3D model collapses into a mess as soon as you try to apply level-of-detail calculations to its weird, janky mesh.
As with the code situation, the productivity benefits here are really debatable, not least because of the impact of trying to turn your artists into emergency fixers for broken AI-generated art instead of, well, actual artists. Understandably, that’s a task they find far less motivating and interesting than actually making things themselves.
Generative AI doesn’t fix labour shortages here either – studios still need to hire and pay skilled artists, because fixing broken assets is hard, and often harder than making the asset from scratch.
The problem that studios want to fix is simple – the skills required to make modern games are extremely valuable, and it’s hard to hire for these roles.
Skills shortages have been part and parcel of the industry for at least as long as I’ve been around; initiatives to try to solve skills gaps in the UK games industry were one of the first topics I wrote about when I started working on a trade paper all the way back in 2001. Even after the thousands of layoffs across the industry in recent years, skills shortages are still being felt keenly in many areas, often due to mismatches between the skills and locations of those on the job market, and the needs and locations of the companies hiring.
That’s what makes the GenAI pitch so appealing to studios. It glibly promises to solve the skills problem at last, and non-experts can even see it working on a small scale – autocompleting a code template, or turning out an impressive-looking bit of concept art.
Combine that with the expansive yet dubious promises of companies whose entire existence is predicated on showing enough growth to attract fresh billions in funding, with little consideration for customer satisfaction in the medium or long term, and it’s enough to convince many people.
Failure to understand how poorly this technology scales to larger, more complex problems requiring high levels of consistency and understanding will, I fear, prove fatal for some early adopters – sinking some projects, and possibly even some studios.
Consumer backlash is the lesser of the risk factors in these cases, because if you’re determined to believe in the alleged miracles AI is promising (just one more data centre bro, I swear, just one more petabyte of stolen intellectual property, we’ll get working agents for sure bro), there’s a good chance you’ll be very deep into the rabbit hole before it becomes clear that you need to bring in expensive, hard-to-hire experts to fix the mess that’s been made of your codebase and asset catalogue.
The genie isn’t going back into the bottle, and AI is going to find a place in development – but it will be more limited and of lower impact than its evangelists wish to claim, and some studios are going to have to learn that the very, very hard way.
After decades of wrestling with skill shortages, I can sympathise with those who don’t want to look this gift horse in the mouth – but they’d do well to remember that there’s another saying about things that seem too good to be true, and how those generally turn out.