This article is part of AI Week.
Daniel Griliopoulos (left in the above picture) is a journalist turned games writer, who has worked on titles like Total War: Warhammer 3, Rimworld: Anomaly, and Tides of Tomorrow, as well as co-authoring the book Ten Things Video Games Can Teach Us About Life, Philosophy and Everything.
Thomas Keane (on the right above) is the co-founder of Meaning Machine, which is developing the AI-powered game Dead Meat, where the player interrogates a murder suspect by asking them anything they like. Previously, he developed the voice-controlled adventure game Unknown Number, and has worked with the immersive theatre company Punchdrunk.
GamesIndustry.biz paired them together to debate what kind of role AI should play in game narrative – and the results were fascinating, leading to a deep discussion over how AI can be ethical, where the lines should be drawn, and how individuals within the games industry need to engage with AI use in order to drive its direction.
What’s your current thinking about AI?
Daniel Griliopoulos: I think AI in games has, like most technologies, real ethical problems. Power, plagiarism, and job losses. I think that there are ways of moving past each of those three, but most of the tools that we use nowadays don’t.
There probably is a way towards an ethical version of an AI, but I need to see someone actually try. And the people who tend to be enthusiastic early adopters of this stuff don’t tend to be very concerned with morality.
I think the reality of the situation, and I’m sure Thomas will agree with me, is that it is being used out there: the vast majority of people you speak to outside our space are already using it.
Thomas Keane: Anything bad you say about AI, I almost certainly agree with you. But at Meaning Machine, our work is about trying to lean in and find an ethical and creatively valid solution for this. As you say, it’s here. We can’t just back away from it and go, “I don’t want any part of it.” As a creative community, we can define what we think is good, what we think needs to happen, and lead that conversation, take it away from those who don’t understand what we’re doing, and carve that new reality for ourselves.
One of the biggest concerns I have is we’re abdicating responsibility around this topic. AI is here, so how are we going to use it in the right way?
Let’s go straight in with the big question: What role should AI have in narrative?
TK: It’s very clear that AI, when left to its own devices, creates soulless, incoherent slop. That is true with images, that is true with story, that is true with everything. The only thing that can drive and create quality is the human hand.
What everyone at Meaning Machine is especially dedicated to is how we can use this technology in a way that enables the human hand to express itself in new ways. And as a result, we don’t use AI in production. This is not about shortcutting, making things faster or cheaper. We use AI as a runtime technology that powers new mechanics and new types of storytelling. And it’s a new type of storytelling that is characterized by unprecedented flexibility and adaptability, but also by unprecedented player freedom and self-expression.
Do you agree with that, Daniel?
DG: Do I think it enables new experiences? I think generally, design enables new experiences. Do I think the human mind is the only way of creating valid experiences? Probably not.
I don’t necessarily think that human vision is any better than AI vision in terms of plagiarism. I mean, the very language we use is a plagiarized language. We learn it from context, we learn it from other people. All of our art is built on the shoulders of giants. We’re kind of copying machines.
To me, what Thomas is doing has elements of coherency and elements of incoherency. If you refuse to use AI in voice production but are happy to use it in text generation, the same ethical problems of power and plagiarism are still there. The models are still trained on plagiarized material.
AI opens up new possibilities. At the moment, I’m not sure that they’re much more interesting than curated content or hand-designed content. If you want to have a realistic open world, you could populate it with millions of NPCs. But we don’t do that, because we want to have a curated pathway through an experience for players, and we mostly hand-create that, because there’s no other reliable way of doing it. Not because it’s necessarily more ethical, but because we end up with a better result for the players.
What do you think, Thomas? Would you agree with that?
TK: I fully agree that if you ask an AI to write a story, the story is deeply worse. I mean, our whole thing is death to AI slop. This technology is interesting because it is very adaptive and mutable, and can understand human language, which is one of the most flexible ways of interacting. But at the moment, it’s a world away from where it needs to be in terms of quality of art.
That’s what Meaning Machine is focused on: how do we close that gap? By infusing everything that’s generated with handcrafted content – not content that is copyrighted.
If you ask an AI, “What do you think of the weather today?” it will give you the most generic response. But our system is designed to bully AI into doing something interesting, by infusing it with a hyper-specific bit of handwritten content that’s been created by our writing team.
So, for example, when you’re asking about the weather, you’re linking it to a specific memory about, I don’t know, the time when as a child you sat across a table from your mother as she got steadily drunker. There is this collision between something very flexible, and something very specific that is human authored, and that’s where I think you start getting something really interesting.
In that context, we should talk about constraints. We’ve seen examples of AI NPCs that are able to talk about anything, but is that necessarily a good thing?
DG: The problems at the moment, and they may be temporary, are that the quality still isn’t good or reliable enough. The characters can be prompted to break the frame, to start talking about something they shouldn’t know about. So constraining them reliably is difficult, and the quality is just not there when compared to hand-curated content.
It sounds like Thomas’s stuff does that better. The question is, is it good enough yet to compete with a human who’s done this for 20 or 30 years, who can be quick and right first time?
You’re giving people the agency to go and ask questions of the AI, to have it produce a unique story for them, but that still isn’t as good as a handcrafted story made by somebody else who knows what they want to tell you.
So Thomas, is it that the quality of AI is just not good enough?
TK: I think AI by itself is not good enough, but I would say through our system, it’s as good as the handcrafted content, plus it has flexibility. We’re only interested in this technology when you can infuse emergence with handcrafted content.
I think what really matters is, do players like it? Are they interested in it? Do they want to play it more? Do they get something out of it that is unique? But the other thing to point out is there is no part of me that is suggesting this replaces old forms of storytelling. That would be absurd.
I think what characterizes games is that there is lots of opportunity for new formats and new types of experience. And I think just as much as someone might enjoy a really linear experience, people also enjoy the tabletop RPG experience, where a group of players can express themselves in any way they want. Neither is better, they’re just different.
We are long-term partners with the University of Bristol, and over the last three years they have been systematically interrogating our demos, our technology, independently. And what they have found is that players respond very well to them: they get a sense of creative freedom, narrative immersion, and personal gratification.
Can you get those in other ways? Sure. Does it invalidate other things? Absolutely not, but it shows there is value here.
Let’s look ahead to the future, because I’m interested in how AI will find its way into writing and narrative within games. Will we see, for example, a hybrid mix where a writer does the main content and then an AI is brought in to write barks or incidental dialogue?
DG: There are games like Caves of Qud, which have that handcrafted core storyline, and they already have procedural content around the outside. Obviously that’s not generative AI, it’s generated by humans, but procedural content is something that’s amenable to being expanded by AI.
I would say that at the moment, it’s too toxic in the main game space to do that. So the side spaces where people could get away with it right now are VR, mobile games, and I imagine some MMOs – places where there are fewer ethical constraints on what people do. But I think in the indie space, it’s going to be a while before anyone starts experimenting with this, probably because it’s so strongly condemned. That’s not me condemning or condoning, that’s just me saying, “This seems to be the standard at the moment.”
How about you, Thomas? What do you think?
TK: Well, I strongly think that video games are characterized by aggressive reinvention, and that’s what makes it such an exciting space. And I basically see AI as playing a role in another spur of radical reinvention: not eradication of anything, but reinvention of what games look like and what they feel like.
How much AI gets infused with mainstream games is one of those things that we’re finding out: I think inevitably it will be infused at a certain level. Whether it becomes the dominant force in a game is going to be dependent on the game.
But I just wanted to touch on something that Dan said about the indie space. We’re one of the few people in the space that go, “We’re doing AI and we’re trying to do it right,” so we’re a bit of a magnet for people who secretly want to do the same thing, but are afraid of saying it. There’s a host of people across the board, reputable names that are really interested in this as a space, but right now are experimenting behind closed doors. There’s this big period of experimentation that will inevitably bubble through when people start to feel that they have hit the quality bar.
DG: Yeah, I think bigger companies feel like they can get away with it more easily. And we’ve seen things like this before: in-app purchases, loot boxes, and free-to-play were once abhorrent and only on mobile, and then over time they came to PC, and they came to console, and people were eventually OK with that.
It is notable that the core gaming space tends to try and stay away from those things. Those people still want to buy a game at full price. They still want to know that it’s hand curated. They still want it to be pretty and shiny and not have mobile game-type compulsive loops in it. So there’s a space that’s going to stay with that kind of older, traditional style of making games, and there’s a space that will be outside of that.
I guess the elephant in the room is that AI is cheap, right? It’s cheaper than hiring a human.
DG: Well, I’d argue it’s a hidden cost: the same way that it’s cheap to drive a petrol car compared to an electric car, but then there’s a negative externality that’s destroying the environment. There are savings, but somebody else is paying the price.
“We actually see the birth of AI and video games as not the death of the writer, we see it as the era of the writer”
Thomas Keane
TK: I fully agree with that. The supply of AI is a whole other thing. We don’t need models to get better: we already have a level of intelligence now in models that can transform video games for the next 50 years. We don’t need to keep going, we need to learn how to use them correctly, and to do it on device, so that we are using the same amount of power as high-end graphics.
Also, in terms of cheap, that’s not our experience. On a practical level, when you develop a game which is about bullying AI with handwritten content, you need writers to write a lot of content. We actually see the birth of AI and video games as not the death of the writer, we see it as the era of the writer, because the written word is the deciding factor in what makes something good or not good. A volume of high quality content that is true to an authorial vision is the fundamental competitive advantage. And you need to pay good humans to do that. So we would never propose using AI as a way to make things cheaper, because that’s not our experience.
Well, I was going to ask your viewpoint on that, Daniel, because what Thomas is doing is getting people to write stuff behind the scenes, but of course that then gets transformed by the AI. How would you feel about your writing being used that way?
DG: There are different elements. In games we now differentiate between narrative design and writing: it sounds like we’ll still be getting rid of writers, because writers do all of the end-product stuff, but narrative designers will be hanging around, or will become more important. It feels like we’re getting towards a point where stuff is authored, but not written.
“I don’t want to get Midjourney to make me a piece of art, I want to paint the painting”
Daniel Griliopoulos
I still prefer to do the boring bits. I love writing barks, I love writing item descriptions, I enjoy the process of writing and the individual moments of problem solving. So it’s not a thing you would get most authentic writers to do, because we love writing. We don’t want to curate a machine to do the writing for us. We want to make that end product ourselves.
I don’t want to get Midjourney to make me a piece of art, I want to paint the painting. That aesthetic, artistic motivation is a bit different from the motivation to make this end product, to make an IP, and there’s a corporate side that will be much more interested in that, though they won’t be so interested in it costing more. So there’s the horns of a dilemma: If you want to make these new experiences, it sounds like from Thomas’s description, it’s going to cost you more than just hiring a writer to make you a linear experience.
TK: It’s not costing more. It’s the same cost in different ways, and it’s a different type of experience. And I think there are studios who do find that interesting, in the same way that the leap to 3D cost a lot in terms of upskilling, but it did actually enable a broad reach of new types of experience. And over time, those experiences get streamlined and made more efficient.
Let’s have some final thoughts. We touched a little bit earlier on ethics. Can AI in games, and specifically narrative, ever be ethical? And if so, how?
DG: I think there is a route, but we aren’t there yet. If there’s reform to copyright, that may shift it. There may just be legislation which says this is OK, or it could be that companies go back and retrain AIs only on non-copyrighted stuff.
That addresses the copyright problem. With the theft problem, it could be that it pays people: some developers already pay voice actors for models of their voices for their games. It’s like your copyright being bought out as an actor or a musician.
In terms of power usage, it’s only creeping up at the moment. But as Thomas says, if we can work out ways of making models that work on device, models that work locally, that don’t have this constant network traffic, great.
The final one is loss of jobs. All technology shifts come with loss of jobs. I’m one of the people most likely to be affected by the thing, and I’m seeing lots of copywriters getting out of work, because writing stuff that matches a generic template is something AIs do better than people.
Unfortunately, that’s not an industry thing, that’s more of a government thing, helping people get through job transitions. I would rather we didn’t have that, but that is a normal technological change that’s happened so many times.
If we can get through all of these things, if we can get through the transition, if we can get ethical power usage, ethical use of copyright or mitigation of copyright concerns, then there is a model there, and it could be useful, and there will be new game designs that come out of that.
How about you, Thomas? How do you feel about that?
TK: I 100% agree with everything that was just said. Basically, I think legislation has to be radically rethought. People need to be fairly paid. Compensation is a critical question. Power consumption is a critical question. I think with power, on device is the answer: this arms race to train and train and train is, as we know, highly destructive.
I think for me, the stuff that I can control – so beyond legislation, beyond power consumption – is about trying to carve out a role for humans in a future where AI is a serious force. And on an ethical level, that is what we’re doing: we want to lean forward and show the value of the creator, even when AI is fully enabled.
I think if we do that, we create the most powerful and most compelling case against the elephant in the room, which is that there are some boardrooms that would like to eradicate their workforce. And that isn’t an outcome that any of us are interested in, and it is not going to benefit any of us. We can either go, “Well, I don’t like it, I’m out,” or we can lean in and go, “No, we do have a role, and this is how we add value.” And that’s really what we’re here to do.
DG: I agree. There’s an ethical imperative to engage, because if you don’t, the worst people in the world will.
This interview has been edited for length and clarity.