This Is Not a Game: Microsoft's "AI Version" of Quake 2
What looks like a game and kind of moves like a game? An AI model trained to mimic videos of people playing games, of course.
Sorry for the late newsletter this week, I’ve been a bit under the weather for the stupidest possible reason. I won’t get into it here, but I discussed it under separate cover. All I can say is, in this season of extreme tree pollen, please be careful when you’re sneezing.

This is what Quake 2 actually looks like. More than 25 years after release, it will run on every computer you own.
I was planning on writing about the importance of default settings this week, and I’ll come back to it in the not-too-distant future, but I got sidetracked by this Copilot AI-generated Quake 2 demo, and I can’t stop thinking about what an enormous disconnect this whole thing represents.
And once again, Microsoft is desperately looking for a use case that people actually engage with to justify their multi-billion dollar investment in AI. And once again, they’re not finding it.
What’s the demo?
First, you should probably go try the demo, then I’ll tell you what’s happening here. (No, I don’t have any info about the cost of running this thing, but it’s safe to assume that it’s the rough equivalent of chucking a few cute woodland animals into a woodchipper every couple of minutes. ☹️)
What’s going on here? First off, the thing you just tried out isn’t really a game. It looks like a game and sounds like, well, it didn’t make sounds for me. At best, it’s an interactive video, almost devoid of game. Sure, it looks like a game, maybe even one that you’re familiar with. But it lacks the complicated interconnected systems that make play possible and without those systems, players have very little in the way of agency or direction.
For example, look at the world geometry. The space is aggressively non-Euclidean. What does that mean? Well, the structure of the world you’re experiencing has trouble maintaining coherency, even when you’re just moving forwards and backwards down a hallway. Walk forwards down a hallway, and there’s a window on your right. Back up and the window is gone. Walk forward, and it’s there again. Walk up to a door, open it, and see one area. Let the door close and reopen, and a completely different area is on the other side. Worse, when you walk into an area that’s dark enough to black out the screen, like pretty much any corner in Quake 2, you’re not going to exit that darkness in the same spot, whether you rotate 180 degrees or just back away. Every texture or dark area is a potential portal to any visually similar area of the same level.
The non-Euclidean geometry isn’t a problem on its own. There are loads of great games that utilize non-Euclidean geometry intentionally. While those games take place in worlds that don’t follow the rules of 3D space that our brains are accustomed to, the rest of these games have well-defined structure and direction, which helps the player avoid getting lost and frustrated. (Check out Antichamber, The Stanley Parable, and Superliminal if you want to see games that use non-Euclidean spaces for novel gameplay.)
And that’s the crux of the problem. I generally find it impossible to define what is (or isn’t) a game, but one of the few consistent features of every game I’ve ever played, from Go Fish to Caves of Qud to Super Mario Bros, is that there are systemic rules that players can understand and predict outcomes from. And, as Austin Walker said on Bluesky yesterday, exploring the edge cases in those interactions and outcomes is a large part of what makes games fun. Games combine simple rules to create emergent or unexpected behavior constantly. It’s fun!
As an example, in Quake, there are several straightforward rules that govern damage done by weapons.
- Taking damage imparts momentum on enemies, which is used to show a stagger effect when you shoot an enemy.
- The momentum imparted on an enemy scales with the damage applied. More damage equals more momentum.
- Damage and momentum are additive and can come from multiple sources simultaneously.
- The same damage rules that apply to enemies also apply to players, because Quake 2 is also a multiplayer game.
- Some weapons, like the rocket launcher, do splash damage in an area of effect around their impact spot.
Players created an intersection between all these rules by simultaneously jumping and firing a high-powered weapon (like the rocket launcher) at their feet. You impart the momentum from the rocket explosion and the jump and you go much higher in the air than if you just jump alone, if you survive the explosion, at least. This is the origin of the rocket jump. (As a bonus, if you time it perfectly and stand on an exploding grenade while you rocket jump, you’ll go even higher. If you survive.)
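The interaction is simple enough to sketch in a few lines. To be clear, the constants and function names below are made up for illustration; they are not id Software’s actual values or code, just the shape of the rules described above:

```python
# Toy sketch of the rocket-jump interaction.
# All constants are invented for illustration -- not Quake 2's real numbers.

JUMP_VELOCITY = 270.0        # upward speed from a normal jump (hypothetical)
KNOCKBACK_PER_DAMAGE = 10.0  # momentum scales with damage (rule 2, hypothetical)
GRAVITY = 800.0              # constant downward acceleration (hypothetical)

def explosion_knockback(damage: float) -> float:
    """Momentum imparted by taking damage (rules 1 and 2)."""
    return damage * KNOCKBACK_PER_DAMAGE

def jump_height(vertical_velocity: float) -> float:
    """Peak height reached for a given launch velocity under constant gravity."""
    return vertical_velocity ** 2 / (2 * GRAVITY)

# A normal jump:
normal = jump_height(JUMP_VELOCITY)

# A rocket jump: jump velocity plus knockback from a rocket at your feet.
# Rule 3 says momentum is additive across sources, so the two simply sum.
rocket = jump_height(JUMP_VELOCITY + explosion_knockback(60.0))

assert rocket > normal  # you go much higher -- if you survive the splash damage
```

None of these rules mentions rocket jumping; the technique falls out of how they compose, which is exactly the kind of emergent behavior a frame-predicting model can’t produce.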
Needless to say, there’s no rocket jumping in Microsoft’s AI version of Quake 2. Not only is there no rocket launcher, but there’s no gameplay framework to allow that emergent behavior to happen.
Their AI model is just generating images that it knows are related to each other very quickly and sequentially based on input from the player. It was trained on video of players playing the same Quake 2 level over and over again. There’s no underlying logic, it just knows that when the arrangement of pixels that you recognize as a health pickup disappears under the bottom edge of the screen, a certain UI element changes. When an arrangement of pixels that you recognize as an enemy’s bullet traverses the screen in a specific way, the screen flashes and the player’s health number on the UI goes down.
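In other words, the model is a next-frame predictor: it sees a short window of recent frames plus your controller input and guesses the next image. Here’s a minimal sketch of that loop; the `model` interface and every name in it are my own invention, not Microsoft’s actual code:

```python
# Hypothetical sketch of the autoregressive loop a frame-predicting
# world model runs. Nothing here is Microsoft's real implementation.

from collections import deque

def play(model, get_player_input, render, context_len=10, steps=1000):
    # The model only sees a short window of recent frames and inputs.
    # Anything that scrolls out of this window is forgotten -- which is
    # why a window or doorway can vanish when you back up and return.
    history = deque(maxlen=context_len)
    frame = model.initial_frame()
    for _ in range(steps):
        action = get_player_input()  # current keyboard/mouse state
        history.append((frame, action))
        # Predict the next image purely from pixels and inputs.
        # There is no game state here: no entities, no geometry, no rules.
        frame = model.predict_next_frame(list(history))
        render(frame)
```

Note what’s absent: there’s no world, no entity list, no physics step. The only “memory” is the rolling frame buffer, so coherence can’t outlast the context window.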
In the MS demo, there are occasionally situations that predictably mimic real-life game rules. For example, almost every time I walked down a long hallway with a left turn at the end, the model drew a monster at exactly the same spot, jumping out at me in exactly the same way, presumably because that happened every time players in the training dataset traversed a similar hallway. It happens even if I walk back and forth down the same hallway. While this is predictable behavior, it also breaks the rules that players expect from this kind of game around enemy spawning and behavior.
What does the training dataset look like for a generative model that can do this? Microsoft didn’t release specific training data on this model, but in an earlier paper, they stated that it took 7 person-years of gameplay to generate a model using an unreleased game from Ninja Theory. To give some reference, Quake 2 was built in about a year by 10 or so people.
After playing this a few times, I really didn’t understand why Microsoft would release such an obviously flawed demo. The game constantly breaks, there’s no player progression, and the same geometry pops up again and again. And then I went out and read what people were writing about it. The tech press was neutral to cautiously positive, and the games press had mostly the same negative response that they did to the AI-generated Minecraft demo from last year.
How does this happen?
The bit where I talk about my old insecurities
To figure this out, I had to interrogate my own experience as a tech journalist. I spent the first decade of my career reviewing and testing gaming hardware. I never really talked about it, but I always felt like a second-class citizen because I focused on the hardware and software that normal people use on home PCs, instead of the kind of computers that Fortune 500 companies buy. Almost every PC hardware and software event I’d attend was focused on businesses first and games sixth, if at all.
I learned about computers in the first place because the PC was the place to play the most incredible games people were making in the mid 90s. Wolfenstein and X-Wing were only on PC, and strategy games like X-Com didn’t exist anywhere else. I learned how to make computers work because it was the only way I could get games to run on my 25MHz computer with 4MB of RAM and a 25MB hard drive. When I started working at Ars Technica, during the early days of 3D acceleration and the explosion of PC gaming that came with DirectX and Windows 95/98, it just made sense to me to dig into gaming hardware.
The trick is that games have always been at the bleeding edge of computing. And if you really wanted to learn what you could do with computers—if you wanted a glimpse of the future of computing, games held it.
When I started covering Microsoft and Windows for MaximumPC, it became clear that most of the folks who worked in that space, in the press and at Microsoft itself, looked at gaming as an interesting novelty that didn’t ever really meet the bar for serious consideration. After all, Fortune 500 companies weren’t buying 3D accelerators for their fleet PCs and before Steam grew into a sales juggernaut, PC game sales were tiny compared to consoles. Hell, even in meetings with MS executives talking about the pre-release prospects for Windows XP, they downplayed the importance of the home gaming market as a niche of a niche.
Fast forward 20 years. The PC gaming market is enormous, the AI models that are driving billions of dollars of investment in datacenters are running on tech originally designed for playing games, and the gaming consoles are essentially PCs with lightly-customized processors and vendor-specific operating systems. Gaming has eaten the tech world.
And the tech press and the folks at Microsoft and OpenAI making this AI software are still woefully under-equipped to understand games. I’m sure they know a ton about the business of games, but have no idea why people actually play games. They don’t get why someone would choose to play Control instead of watching a baseball game or a reality TV show. Or why my daughter is just as captivated with Super Mario Bros as I was when I was her age. They don’t get why games are fun.
At a quick glance, this Quake 2 AI demo looks like a game. Someone who doesn’t play games could look at it and think “that’s a game”. When I look at it, I see something that looks game-ish but doesn’t behave like a game. No one is going to spend more than a few minutes checking this out as a novelty.
And yet, Microsoft is spending billions of dollars on this nonsense. The people driving these product decisions seem to have all of the unearned confidence of a middle-aged MBA who watched a few YouTube videos about making video games and never heard of the Dunning-Kruger Effect.
(FWIW, there seem to be quite a few people at Microsoft and other companies who precisely understand this problem. They send me messages on Signal every time I write a column about this. Feel free to share your side of the story if you work at companies using AI for stupid stuff and want to let me know how it’s going. I’m happy to keep your identity secret.)
The TLDR is that this AI-generated Quake 2 thing is a gimmick. It’s interactive video, and yeah, I guess that’s impressive, but it doesn’t capture the magic of interlocking systems that still makes playing Quake 2 fun in 2025, despite the fact that it’s almost 30 years old and dates to the earliest days of 3D graphics.
What’s Next?
This week’s actionable takeaways are a little more ephemeral than normal. Interrogate the folks delivering your news and talking about AI in the press. If they’re promising stuff that sounds too good to be true, consider that it probably is too good to be true.
In the case of this Quake 2 AI demo, I don’t understand why anyone at Microsoft thought it was a good idea to release it as a playable demo. While it’s possible to get a carefully controlled 90 second video out of it that almost looks like a real video game, the moment you actually control it yourself, its flaws are obvious.
One of the stated use cases for this model, in the Microsoft Research paper that was published in Nature, is to allow developers to build prototypes faster and more efficiently than traditional game dev makes possible. This sounds really good, right?
If you don’t know, most publishers today want to see some sort of working prototype as well as a fully fleshed-out example of the eventual art direction before they’ll write a check for any game. Combined, these typically represent many person-months of work for some subset of your team, and as a result they can cost tens or hundreds of thousands of dollars to create.
As someone who has pitched a bunch of games to publishers, content partners, and venture capitalists over the last decade, I’m enough of an expert to say that this dog won’t hunt. If I treated this demo as a serious project and pitched it to anyone I’ve sold projects to in the past, they’d stop taking my calls after seeing this. It’s laughable.
This week’s shoutout goes to this fantastic post, which lists free, open, or at least pay-once options for key Adobe Creative Cloud software. So if you want to get clear of annual subscriptions with questionable AI policies, you probably have more options than you knew about. I’ve been using Affinity Photo and Designer for years now, and like them quite a bit, and I have a handful of new audio editors to test out after checking out this list.
Thanks for supporting the newsletter! I love getting feedback from subscribers, so please don’t hesitate to drop me an email, leave a comment, or send me a message on Bluesky or Signal.
Thanks for reading this far! If you enjoy the newsletter, sign up here and I’ll deliver one a week to your inbox. As always, What’s Next is reader-supported, so if you enjoy my work and think I should be paid for it, I’d really appreciate it if you chuck me a few bucks here.