We have an AI problem
And it might not be what you think
We have this one acronym to describe an onion with a million layers. We’ve made it monolithic. We’ve made it into a huge blanket that covers every aspect of technology, and in particular, game dev. Nuance around the subject has been beaten and left for dead. The arguments have turned into a binary ‘all or nothing’ proposition.
We use it to describe everything from Tesla autopilot to auto-correct on your phone. We’ve lost the plot.
Let’s put it in gaming terms. The stuff that helps a grunt in Halo avoid your grenade in 2001? AI. The computer-controlled racers you compete against in Wipeout XL on your PlayStation? AI. The bots used to fill your Fortnite match because you kinda stink? AI.
And then there’s the stuff we call ‘AI’ that is varying degrees of problematic. Machine-generated art and assets, artificial voiceover, story content, to name a few.
There’s an obvious distinction, right? Yet we’ve not figured out how to differentiate this stuff amidst the volatile mix of proponents and opponents around the term.
Am I here today to propose a new term for the GenAI stuff? Not really. (Hell, you could argue “GenAI” is already a pretty good way of saying it.) I am here to suggest that maybe we use our critical thinking abilities as gamers for better results.
You know, the ones that make us good at games. :)
I believe there are some serious moral and ethical issues around GenAI, in particular how the data is currently farmed, trained on, and attributed. At the exact same time, there is some crazy potential for pushing our favorite medium forward.
Here’s an example of where I think we are getting it wrong, and some of this may be controversial. (What about it isn’t?) Anyway…
Example: You are making a first-person shooter, built in Unreal Engine 5. You decide you need some early play-testing within the dev team, but the art guys haven’t even started on character work yet.
What do you do? You pull in a prefab nondescript Unreal “dummy” character. Looks like a robot. Has basic animations. Good enough to shoot at from across the room to test your cover system.
Later in development, that placeholder character will be swapped for a badass looking model with great animation and exquisite texture work by the team.
No one bats an eye.
Now, imagine the exact same scenario, only instead of the prefab placeholder character model, you generate one with a prompt. Same purpose: to be a stand-in for testing.
EVERYONE bats an eye. You are suddenly being shunned by the entire game-o-sphere. You and your team are branded thieves and imposters.
Is there a difference? In both cases you used something that you didn’t build in order to test your game. In both cases it was meant as a temporary measure.
So why the drastically different reaction? Your answers will surely vary, but at the least, I think it’s important to acknowledge the role of emotions and biases in the mix.
When I hear the first scenario I say to myself, “makes sense!” When I hear scenario two, I say to myself, “oh shit, they are gonna catch hell for that one!”
Feels like my own reaction is tainted by broadly applied assumptions that might not be applicable in this case.
I guess I’m kinda just saying that—like a lot of things in life—it’s complicated. (And yes, I just purposely used appropriate em dashes, and more than a few of you suddenly wondered if this article was written by an LLM.)
There has to be a better way to distinguish the baby from the bathwater: knowing the difference between “Game AI” (pathfinding, procedural generation, enemy behavior routines, etc.) and “GenAI” (prompt-derived art, dialogue, terrain, vocals, etc.).
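For what it’s worth, the “Game AI” side of that split is often surprisingly simple under the hood. As a minimal, purely illustrative sketch (not from any real engine, and all the names and numbers are made up), here’s roughly what an old-school enemy behavior routine like the grenade-dodging grunt amounts to: hand-written deterministic rules, no training data, no prompts.

```python
import math

# Hypothetical blast radius in world units -- an assumption for illustration.
DANGER_RADIUS = 5.0

def avoid_grenade(npc_pos, grenade_pos, step=2.0):
    """Return the NPC's next (x, y) position: flee if the grenade is close."""
    dx = npc_pos[0] - grenade_pos[0]
    dy = npc_pos[1] - grenade_pos[1]
    dist = math.hypot(dx, dy)
    if dist >= DANGER_RADIUS:
        return npc_pos  # safe: carry on with whatever it was doing
    if dist == 0:
        # Grenade landed right on us; pick an arbitrary flee direction.
        dx, dy, dist = 1.0, 0.0, 1.0
    # Normalize the away-vector and step along it, directly away from danger.
    return (npc_pos[0] + dx / dist * step, npc_pos[1] + dy / dist * step)

# A grunt at (1, 0) with a grenade at (0, 0) dives away along +x.
print(avoid_grenade((1.0, 0.0), (0.0, 0.0)))  # → (3.0, 0.0)
```

That’s the whole trick: a few lines of geometry and an if-statement. Nobody’s artwork or voice was scraped to make the grunt dive away from your grenade.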
It might simply come down to a combination of game devs/publishers stating clearly where GenAI was used in the making of the game—everything from placeholders to what shipped at launch—and players tempering their expectations and definitions of what crosses the line. The problem remains that the line in question is all over the place at the moment, free of consensus.
Admittedly, this article is not attempting to tackle other major issues around the AI boom, in particular the massive energy and manufacturing costs, painful economic shifts, job insecurity, environmental impact, and more. But as things relate to the game dev cycle and industry, I think there’s a way to find some light at the end of the tunnel.
And maybe, just maybe, we end up being the canary in the coal mine. Games are often the litmus test for new technologies. The “standards” and “practices” could come from us.
As always, your thoughts and comments are welcome. I’m very interested in what you think!
- Scott



