In some regards, the past couple of weeks have felt rather reassuring.
We’ve just seen a hugely successful launch for a new Nintendo console, replete with long queues for midnight sales events. Over the past few days, the various summer events and showcases that have sprouted amongst the scattered bones of E3 have generated waves of interest and hype for a host of new games.
It all feels like old times. It’s enough to make you imagine that while change is the only constant, at least we’re facing change that’s fairly well understood, change in the form of faster, cheaper silicon, or bigger, more ambitious games.
If only the winds that blow through this industry all came from such well-defined points on the compass. Nestled in amongst the week’s headlines, though, was something that’s likely to have profound but much harder to understand impacts on this industry and many others over the coming years – a lawsuit being brought by Disney and NBC Universal against Midjourney, operators of the eponymous generative AI image creation tool.
In some regards, the lawsuit looks fairly straightforward; the arguments made and considered in reaching its outcome, though, may have a profound impact on both the ability of creatives and media companies (including game studios and publishers) to protect their IP rights from a very new kind of threat, and the ways in which a promising but highly controversial and risky new set of development and creative tools can be used commercially.
I say the lawsuit looks straightforward from some angles, but honestly overall it looks fairly open and shut – the media giants accuse Midjourney of replicating their copyrighted characters and material, and of essentially building a machine for churning out limitless copyright violations.
The evidence submitted includes screenshot after screenshot of Midjourney generating pages of images of famous copyrighted and trademarked characters ranging from Yoda to Homer Simpson, so “no we didn’t” isn’t going to be much of a defence strategy here.
A more likely tack on Midjourney’s side will be the argument that they are not responsible for what their customers create with the tool. You don’t sue the manufacturers of oil paints or canvases when artists use them to paint something copyright-infringing, nor does Microsoft get sued when someone writes something libellous in Word, and Midjourney may try to argue that its software belongs in that tool category, with users alone being ultimately responsible for how they use it.
If that argument prevails and survives appeals and challenges, it would be a major triumph for the nascent generative AI industry and a hugely damaging blow to IP holders and creatives, since it would seriously undermine their argument that AI companies shouldn’t be able to include copyrighted material in training data sets without licensing or compensation.
The reason Disney and NBCU are going after Midjourney specifically seems to be partially down to Midjourney being especially reluctant to negotiate with them about licensing fees and prompt restrictions; other generative AI firms have started talking, at least, about paying for content licenses for training data, and have imposed various limitations on their software to prevent the most egregious and obvious forms of copyright violation (at least for famous characters belonging to rich companies; if you’re an individual or a smaller company, it’s entirely the Wild West out there as regards your IP rights).
In the process, though, they’re essentially risking a court showdown over a set of not-quite-clear legal questions at the heart of this dispute, and if Midjourney were to prevail in that argument, other AI companies would likely back off from engaging with IP holders on this topic.
To be clear, though, it seems highly unlikely that Midjourney will win that argument, at least not in the medium to long term. Yet depending on how this case moves forward, losing the argument could have equally dramatic consequences – especially if the courts find themselves compelled to consider the question of how, exactly, a generative AI system reproduces a copyrighted character with such precision without storing copyright-infringing data in some manner.
AI advocates have been trying to handwave around this notion from the outset, but at some point a court is going to have to sit down and confront the fact that the precision with which these systems can replicate copyrighted characters, scenes, and other materials requires that they must have stored that infringing material in some form.
That it’s stored as a scattered mesh of probabilities across the vertices of a high-dimensional vector array, rather than a straightforward, monolithic media file, is clearly important but may ultimately be considered moot. If the data is in the system and can be replicated on request, how that differs from Napster or The Pirate Bay is arguably just a matter of technical obfuscation.
Not having to defend that technical argument in court thus far has been a huge boon to the generative AI field; if it is knocked over in that venue, it will have knock-on effects on every company in the sector and on every business that uses their products.
Nobody can be quite sure which of the various rocks and pebbles being kicked on this slope is going to set off the landslide, but there seems to be an increasing consensus that a legal and regulatory reckoning is coming for generative AI.
Consequently, a lot of what’s happening in that market right now has the feel of companies desperately trying to establish products and lock in revenue streams before that happens, because it’ll be harder to regulate a technology that’s genuinely integrated into the world’s economic systems than it is to impose limits on one that’s currently only clocking up relatively paltry sales and revenues.
Keeping an eye on this is crucial for any industry that’s started experimenting with AI in its workflows – none more so than a creative industry like video games, where various forms of AI usage have been posited, although the enthusiasm and buzz so far massively outweigh any tangible benefits from the technology.
Regardless of what happens in legal and regulatory contexts, AI is already a double-edged sword for any creative industry.
Used judiciously, it might help to speed up development processes and reduce overheads. Applied in a slapdash or thoughtless manner, it can and will end up wreaking havoc on development timelines, filling up storefronts with endless waves of vaguely copyright-infringing slop, and potentially turning creative firms, from the industry’s biggest companies to its smallest indie developers, into victims of impossibly large-scale copyright infringement rather than beneficiaries of a new wave of technology-fuelled productivity.
The legal threat now hanging over the sector isn’t new, merely amplified. We’ve known for a long time that AI-generated artwork, code, and text has significant problems from the perspective of intellectual property rights (you can infringe someone else’s copyright with it, but generally can’t assert your own copyright over its creations, leaving careless companies at risk of key assets in their games being technically public domain and impossible to protect).
Even if you’re not using AI yourself, however, and even if you’re vehemently opposed to it on moral and ethical grounds (which is entirely valid given the highly dubious land-grab these companies have carried out for their training data), the Midjourney judgement and its fallout may well impact the creative work you produce yourself and how it ends up being used and abused by these products in future.
This all has huge ramifications for the games business and will shape everything from how games are created to how IP can be protected for many years to come – a wind of change that’s very different and vastly more unpredictable than those we’re accustomed to. It’s a reminder of just how much of the industry’s future is currently being shaped not in development studios and semiconductor labs, but rather in courtrooms and parliamentary committees.
The ways in which generative AI can be used and how copyright can persist in the face of it will be fundamentally shaped in courts and parliaments, but it’s far from the only crucially important topic being hashed out in those venues.
The ongoing legal turmoil over the opening up of mobile app ecosystems, too, will have huge impacts on the games industry. Meanwhile, the debates over loot boxes, gambling, and various consumer protection aspects related to free-to-play models continue to rumble on in the background.
Because the industry moves fast while governments move slowly, it’s easy to forget that this is still an active topic as far as governments are concerned, and hammers may come down at any time.
Regulation by governments, whether through the passage of new legislation or the interpretation of existing laws in the courts, has always loomed in the background of any major industry, especially one with strong cultural relevance. The games industry is no stranger to that being part of the background heartbeat of the business.
The 2020s, however, are turning out to be the decade in which many key regulatory issues come to a head all at once, whether it’s AI and copyright, app stores and walled gardens, or loot boxes and IAP-based business models.
Rulings on those topics in various different global markets will create a complex new landscape that will shape the winds that blow through the business, and how things look in the 2030s and beyond will be fundamentally impacted by those decisions.