This article is part of AI Week.
Niki Head is people and development director for Stellar Entertainment Software in Guildford in the UK, which was founded by ex-Criterion staff and is currently working on an unnamed AAA arcade racing game.
Head discussed how AI is reshaping work culture at last year’s GamesIndustry.biz HR Summit, and here, she sheds more light on exactly what impacts AI is having on her role, from an improved ability to dissect employee reviews to receiving ChatGPT-authored complaints stuffed with legalese. Perhaps most importantly, she discusses the role of HR in listening to employees’ concerns about AI, particularly their fears of being replaced – and how several aspects of HR itself have already been taken over by AI.
What is the biggest issue you find with AI in your job?
I think the biggest one we're seeing an increase in is people using ChatGPT to support writing certain things out. I've not had it in our studio, but I mentor somebody who's seen an increase in employees using it to write things like grievances or complaints. And you can usually tell, because the formatting is all off, but it also quotes a lot of out-of-date legislation, or legislation that's not relevant.
The worst one I heard was somebody had written a grievance that was about 30 pages long, and it was because it had been padded out by ChatGPT. So it took a really long time to unpick what the issue actually was. When you took out all the filler and fluff, it was actually a very short complaint, but it took the HR department ages to try and figure it out.
Is that making your job more difficult?
It does, because people are using ChatGPT as gospel, so when they’re submitting it, they’re submitting it thinking that this is a legal document that has legal definitions or employment law references. But actually the employment law references that they’re quoting aren’t relevant to what they’re complaining about.
Are there any positives you’ve seen from AI adoption?
A lot of efficiencies. Obviously, AI can read data faster than you can, it can process massive amounts of data. We’re about to go into our annual reviews, and we use HiBob as our HR system, and they’ve invested quite heavily in AI tools for HR, which has been quite handy.
What kind of tools?
It will take the answers that have been given, and it will give strengths, weaknesses, and opportunities. It does a sort of analysis for you, almost a TL;DR of what you talked about with that employee. So we can take that as a top-line view and go, "OK, so the most common theme is this person needs to improve in this, and they're really, really good at this."
It doesn’t stop us from reading the whole report, but it does pick out some underlying themes that you could then take and say that person could do with training and development in this area. That’s been really helpful for our managers, especially if they’ve got a lot of people that they do reviews for.
It does have other tools as well, such as a sort of traffic light system. It's trying to predict what your risks are of someone potentially leaving.
How does it work that out? Does it read employees’ emails, for example?
It doesn’t do anything with email, it’s purely going on the data that we’ve put into our HR system. How long have they been in the same job? Do they manage other people? How much time off have they had?
Now, some of it you have to take with a pinch of salt, because time off can be for numerous things. But it will tell me, for example, that if someone’s been in the same role for two to three years, that can potentially come as a risk that they might be stagnant in what they’re doing. But what that doesn’t take into account is whether that person’s happy doing that.
So it’s a signpost, but that’s where you see the limits of AI: it doesn’t know the person, whereas you would know the person. It’s good as an indicator, but I don’t believe it to be the sole proof that somebody’s going to leave or not. It’s much more multifaceted than that.
What about the studio itself? How are employees adjusting to AI tools, and do you get pushback against that?
I think from the general conversations we've had around AI, our engineers are quite for it, in that they find it efficient: they can understand the logic. Artists are much more emotive, and I understand that, because it is a case where they think that AI is trying to take over, and it's trying to cut down on opportunities and jobs that they potentially would have.
But definitely, we’ve got a couple of artists who are very anti-AI.
How do you deal with that? If the company wanted to introduce AI tools, how would you approach that conflict?
We first try to understand the issue. Is it a fear thing? Because if it’s fear that [they think they’re being replaced], we reassure them that’s not the case at all. We need artists. We love artists. We celebrate art a lot. But technology is moving, and we have to at least investigate these things. We have to have a look into it, because they might find it useful. Even if it’s just using it and throwing it away, that’s fine, but you can’t bin the tool until you’ve at least tried it.
So we set tasks. So one in particular, we set a task where they could explore it a little bit and give us feedback. They may come back and give you the exact same view, but at least you’ve given them the opportunity to learn it.
If it is needed for their job and it is a tool that we really want to use as a company, then yes, at the end of the day, that's what we want them to do. But we'll try and approach it with "how can we get you on board", rather than dictating that they use it.
Do you ever worry that your HR role will be replaced by AI?
I think it’s hard to say that we would be replaced. I think generally there may be companies that decide that they think that AI tools can replace HR. But the thing that sometimes gets missed by larger corporations is the definition of what HR actually is. We get a misrepresentation of being the people that hire and fire the people, that make those decisions, and are all about the company and never about the people. And it’s not true. We are the people in the middle that are trying to make sure that people are acting legally and ethically, but also making sure that the employees are having a really great experience at the same time.
We’re kind of the Switzerland in the middle of both. We’re there to advise, we’re there to give expertise, but we are making recommendations at the end of the day, unless someone’s doing something utterly illegal.
HR has been using AI tools for a while. There's a really great company that makes AI tools for lower-level things, just signposting people to a certain policy and giving managers a quick answer. Those tools have been around for a while, so we haven't been replaced fully yet. But the other side of it is that HR have an expertise that's gained over time, and especially around people.
You can’t predict people’s behaviour. AI can give you little pointers on attrition and things like that, but when you are sitting in a room, AI can’t predict that someone’s going to get up and throw a chair at you.
Has that ever happened?
Yeah, I’ve had someone throw a chair at me. I’ve had someone follow me home because they felt I was the person that fired them. I was just in the room, and I had nothing to do with the decision to terminate their employment, but they followed me to a train station.
So you can’t predict people’s behaviour. AI can’t help a manager in a situation when an employee is crying or they have to have empathy, they have to be able to communicate effectively, and HR are those people that can help train and support them in those situations.
I noted that Microsoft AI CEO Mustafa Suleyman recently predicted that most white-collar work will be fully automated by an AI within the next 12 to 18 months…
What that’s saying is they don’t value the input of what a person will bring. And it is a shame, because AI could replace certain functionality, it could support certain functionality, but I’d be really interested to see them do that and then see how effective that really is. If a person wants to raise a complaint and it goes to an AI bot, what happens? Because that bot can’t make a decision, it can’t tell what the person needs. It’s just going to give them an answer based on a bunch of parameters that somebody’s put in.
But having said that, you said earlier that you do have some queries that are answered by AI, so you are kind of on that path already.
I think on the lower level. So say, for example, somebody said, “How do I claim expenses?” The bot is really good for that. It goes, “Here’s the expenses policy.” But if it was something like, “I’ve got an employee who wants to raise a grievance,” there are certain triggers in the bot that are what they call “hard levels”, so it would automatically go to somebody in HR.
If you catch something like that early, you can usually talk about it informally, and you can usually avoid a dispute. If that went fully through a bot, it could get missed, and then it ends up being a bigger thing than it should have been. But also there’s no empathy in that conversation. The bot might go, “Oh, that’s a shame, sorry to hear that,” but it doesn’t mean it. Whereas a person on the other side can go, “I hear what you’re saying, let’s try and help you,” a bot is going to be very cold. And when you are dealing with people issues, they’re very emotive, they’re very personal.
With the actual tool, you are configuring it from the back end: the company has bought the product, but you are actually telling it what your rules are. So a larger company might say, "Well, for even higher-level things, we're going to allow the bot to answer," but then you would have to have a look at your rate of success with that.
You said you’re in the middle between management and employees. In that sense, you might find yourself having to implement policies that you might not necessarily agree with, such as AI policies. Does that happen?
There’s times where you do have to implement stuff you don’t agree with, but our job is to advise on the risk of what could happen and if it’s the right thing for the company. And sometimes you have to be like, “OK, I’ve given my advice, it is up to you whether you do that.” And yeah, there’s been times where I’ve not agreed with something and I’ve put forward an argument for it and we’ve decided not to do it. It’s very rare that I disagree with something that goes out.
What if the top brass turned around and said, “OK, well all the art’s going to be AI generated from now on.” What would you do in that situation?
I mean, I would be absolutely stunned. I don’t think Paul [Ross] would ever say that. But playing devil’s advocate, if they were to do that, my job would be to sit and say, have you thought about the human impact? And long-term, how does that affect you?
If people are being laid off because they believe they’re being replaced by AI, in the long term, the public damage that does… If you’ve got that wrong, how do you then claw that back without losing credibility? Because you’re damaging a brand.
If the collateral is people, it’s hard to take it back.
Given all the positives and negatives we’ve discussed, would you say AI has been a benefit to you in HR, or is it generating more difficulties?
I think it has freed up more time in certain areas to be able to deal with people more, because that is fundamentally our job, and the data side of it is part of that. Larger corporations will have people specifically in data areas, but when you’re a smaller outfit, you are the person singing the theme tune and writing the theme tune.
The time that I get back on just being able to have it give me some insights, that side of it has been great for me, because what could usually take me an entire week to go through a lot of data, I can now condense that down into a couple of days. That time is being invested back into being able to spend it with people and find out how they’re getting on. It just gives us time to do the fun stuff.
So I would say overall, from an HR perspective, it's been really helpful. But yes, with that comes humans being human, and they're going to use these tools to make their lives easier. So absolutely they're going to use them to write things like grievances or complaints. And I would imagine from a legal perspective, lawyers are seeing similar things. Tribunals will probably have to start spotting AI-generated content.
This interview has been edited for length and clarity.