Playing StarCraft? You can now compete with an undercover AI

Players can opt-in to face AlphaStar, the strategy AI from Google’s DeepMind

Earlier this year, an AI played StarCraft, a two-player real-time strategy game, against highly ranked professional players in a dramatic exhibition match — and won 10 out of 11 games. It was an impressive demonstration of how far AI has come.

Now, the StarCraft AI, created by Google’s DeepMind, the leading AI lab, is playing regular human players online. Players have to opt in — and can opt out any time — but if you opt in, some of your competitors just might be an AI. (The AI won’t announce itself as an AI, because, a DeepMind spokesperson told me, “We’re playing these games anonymously to create controlled testing conditions.”)

If a StarCraft-playing AI can already beat top professionals, what do we learn from having it play regular humans? This version plays under significantly tighter restrictions, giving the rest of us a fighting chance.

Not only is the AI playing against humans under new constraints, it has learned to do new things. That might have introduced weaknesses that even non-pro players can exploit — or maybe it’ll crush all its opponents. Either way, it’s the latest moment in the fast, fascinating rise of a world-changing technology.

Competitive war strategy games are the latest AI triumph

A few years ago, no AI could competently play a game like StarCraft. Unlike chess or Go, games where the whole board is visible and the players take one turn at a time, StarCraft is a fast-paced, real-time strategy game.

StarCraft has different game modes, but competitive StarCraft is a two-player game. Each player starts on their own base with some basic resources. They build up their base, send out scouts, and — when they’re ready — send out armies to attack the enemy base. The winner is whoever destroys all of the enemy buildings first (though it’s typical to concede the game once the outcome is obvious).

In January, I watched 11 matches between DeepMind’s AI, AlphaStar, and pro players. AlphaStar showed good instincts for when to attack and when to retreat from a fight, and had the ability to attack on multiple fronts, press an advantage, and plan ahead.

But while AlphaStar was a clear step forward, there were some obvious limitations on display. Sometimes, the AI made mistakes that were obvious even to a human. It didn’t handle harassment of its units well, sending its whole army parading off after minor distractions. It got more information than humans — it didn’t have to use the camera like human players do — and when it was forced to use the camera, it performed notably worse.

And critics reviewing the event pointed out that, while the AI was supposed to be throttled so it didn’t have a huge advantage over humans in reaction time and actions per minute (APM), it still used far more actions per minute than any human during critical bursts of combat, which often gave it a decisive advantage.

DeepMind, in consultation with pros, changed that for the version that’s now playing against humans. “These caps, including the agents’ peak APM, are more restrictive than DeepMind’s demonstration matches back in January, and have been applied in consultation with pro players,” the announcement says.

The original AlphaStar played only one of StarCraft’s three races — Protoss — and faced only opponents who were also playing Protoss. Now, AlphaStar can play all three races and can face human opponents playing any of the three. Each race has different abilities, options, and strategic implications, making for a very different game (pro players typically specialize in just one).

It remains to be seen whether the added limitations are sufficient to give humans a fighting chance against AlphaStar.

The last decade has transformed AI — and now AI is transforming the world

Modern advances in artificial intelligence are powered by a technique called deep learning. The same basic technique is used to train AIs to play strategy games, write stories and poetry, generate images, and translate text.

Ten years ago, no one had successfully done much of note with deep learning. That’s because it requires lots and lots of computation to be successful, and our computers simply weren’t powerful enough to get good results. As computers got more powerful, it became obvious that deep learning techniques had wide-ranging applications, and generalized well. More than that, whatever deep learning could do, it did better with more computing time and more data. That suggests we can keep producing improved versions for a while.

The fast growth of the field of AI has brought us really cool things, from new forms of art and poetry to new insights into protein folding and medicine. It has also introduced new challenges — from convincing fake images and videos to easy tools for generating fake Amazon product reviews to concerns about the power of facial recognition and surveillance for propping up authoritarian states.

And experts warn that as AI systems get more powerful, new complications will join the host of existing ones: we’ll have to figure out how to ensure that we’ve properly specified the goals we want the systems to pursue, and we’ll need to patch vulnerabilities that — thanks to the nature of deep learning systems — are hard to identify and understand.

It’s incredible to watch an AI defeat humans at war games, but it might also give us pause. At least opting out is as easy as pressing a button — for now.
