[Image: Mephistopheles and Faust play chess in this 19th-century painting. Wikimedia Commons]

How to Tie with AI

Artificially intelligent programs are playing board games better than humans ever could. Is it possible to design a board game that can stump even the smartest computer?

The first time I thought critically about how board games work, I was a 12-year-old standing on the archery range of a summer camp deep in the Green Mountains of Vermont. My bunkmate challenged me to a game of tic-tac-toe while we waited our turn with the bows. “That’s a stupid, easy game,” I said. “Yes,” he answered, “but I bet you can’t even beat me once.” Goaded, I accepted his challenge.

We didn’t have paper or a pencil, so my bunkmate picked up a stick and scratched the familiar hash sign in the dirt. Then he drew an X in the top right corner. I decided to threaten him with an O placed aggressively under his X. He responded with an X in the top left corner. I had to block him from getting three across the top row, so I put an O between his X’s. He grinned, then drew an X in the bottom left corner. The board looked like this:

 X | O | X
---+---+---
   |   | O
---+---+---
 X |   |

I was in an impossible position. If I placed an O in the center of the board, my bunkmate would win by drawing an X in the left column. If I blocked the left column, he would win with an X in the center. Convinced that I just had bad luck, I scuffed the board away with my shoe and drew a new one. “Let’s play again, but this time I’ll go first,” I said. I scratched a mark in the middle left space, but a few moves later he won again by setting up two winning threats that I couldn’t block in a single turn. We played a dozen more times, and he won or tied all of them. My chance to shoot arrows had come and gone, but I didn’t even notice. “How are you beating me so easily? Tell me!” I demanded.

My bunkmate explained that tic-tac-toe is a “solved” game; there are a finite number of combinations of moves that can be played (255,168, to be precise), and at some point someone actually sat down and figured them all out. Once you know how any one game might end, it’s easy to manipulate an inexperienced player into losing. For example, in our first game my bunkmate knew that if he started with a mark in any corner, he would win as long as I didn’t place an O in the center. The flipside is that if both players are equally knowledgeable, every game will end in a tie.
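To see what that figuring-out looks like in practice, here is a minimal Python sketch (my own illustration, not anyone’s historical program) that walks every legal continuation of a tic-tac-toe position and reports who wins with best play. Run from an empty board, it confirms that two perfect players can only tie.

# A minimal sketch of "solving" tic-tac-toe by exhaustive search.
from functools import lru_cache

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Value of the position with best play: +1 if X wins, -1 if O wins, 0 for a tie."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if " " not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    outcomes = [value(board[:i] + player + board[i + 1:], nxt)
                for i, cell in enumerate(board) if cell == " "]
    return max(outcomes) if player == "X" else min(outcomes)

if __name__ == "__main__":
    print(value(" " * 9, "X"))  # prints 0: perfect play from the empty board is a draw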

One of the world’s first computer games, designed in 1952, used the same annoying trick my bunkmate pulled on me that summer to win at tic-tac-toe. Challengers entered their moves through a rotary telephone, but it didn’t matter if they went first or second. The program could never lose. (You can try a modern version at half-real.net/tictactoe.) Using what we now call artificial narrow intelligence (ANI) to pick the best move out of all 255,168 possibilities, it frustrated any human player who didn’t know that the game was already solved.

It was depressing to find out that one of the classic games of my childhood wasn’t really a game at all. But at least I still had checkers, right? Well, not exactly.

While doing research for a magazine article on AI, I discovered that checkers was solved back in 2007 by Jonathan Schaeffer, a computer scientist and professor at the University of Alberta. Tic-tac-toe is a simple game, with a board that contains only nine spaces. In contrast, checkers has a board with 64 spaces and 24 pieces, leading to roughly 10²⁰ possible board positions. It’s no wonder it took Schaeffer nearly two decades and 200 computers working simultaneously to analyze every potential move through brute force. When his number crunching was finished, he dumped his calculations into a program called Chinook, which could play checkers like my camp bunkmate could play tic-tac-toe. It always wins or ties, no matter what. Although checkers is solved, people still play it competitively because even the best human players can’t memorize the quintillions of positions in Chinook’s database.

Despite chess’s longstanding popularity among computer programmers, the board game hasn’t yet been cracked. Although it is played on the same size board as checkers, chess has more pieces, and the number of possible positions is astronomically larger because each type of piece moves in its own way: knights jump over and around other pieces, while bishops slide across the board diagonally. Even with this added complexity, computer programmers have designed very powerful chess-playing machines imbued with the strategy of grandmasters. But the processing power to completely solve the game just doesn’t exist; whereas Schaeffer needed 200 computers to solve checkers, there aren’t enough computers in the world to solve chess.

From the late 1980s through the 1990s, chess computers improved at an exponential rate. In May 1997 an IBM computer named Deep Blue dealt the final blow to human dominance in chess by defeating the reigning world champion, Garry Kasparov. Unlike Chinook, Deep Blue was never guaranteed to win. Kasparov made a few very human errors that led to his loss. Still, many gamers interpreted the loss as the herald of a new era where computers would always be better than people, whether the game they were playing was solved or not.

In response to Kasparov’s defeat, the computer scientist Omar Syed set out to design a game that would be naturally difficult for a computer to play, but relatively easy for a human to learn. In 2002 he created Arimaa and issued a challenge to computer programmers everywhere: the first person to devise a program that could defeat the world’s best Arimaa players would win $10,000. The game itself is similar to chess; it is played on a 64-space board, but it has many more potential positions. At the beginning of the game, players set up their pieces in any one of 64,864,800 combinations, and then have roughly 17,000 possible moves to choose from each turn. The sheer number of possible positions, combined with the constantly changing strategy, makes Arimaa very hard for computers to play.
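That setup figure checks out with a quick back-of-the-envelope count (my own, not part of Syed’s challenge materials): each player arranges 16 pieces, namely 8 rabbits, 2 cats, 2 dogs, 2 horses, 1 camel, and 1 elephant, across the 16 squares of their two home rows, which is a standard multinomial calculation.

from math import factorial

# Each Arimaa player arranges 16 pieces on the 16 squares of their two home rows.
pieces = {"rabbit": 8, "cat": 2, "dog": 2, "horse": 2, "camel": 1, "elephant": 1}

# Multinomial coefficient: 16! divided by the factorial of each duplicate count.
setups = factorial(sum(pieces.values()))
for count in pieces.values():
    setups //= factorial(count)

print(setups)  # 64864800 distinct opening arrangements per player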

For the first decade after Syed introduced the Arimaa challenge, very little progress was made. Every year humans trounced computer programs named Clueless or Marwin. But things changed in 2015 when Harvard undergrad David Wu introduced a program named Sharp. That year Sharp defeated every human challenger it faced, sweeping up the $10,000 prize. How did Sharp so easily figure out a board game that was designed to confound it?

It turned out that there wasn’t any one breakthrough that led Sharp to victory. Nor did Wu solve the game, which would have been even more difficult than solving chess. He published a paper explaining how Sharp combined his own understanding of common Arimaa moves with an advanced search function that could identify the strongest play. Wu also admitted that he got a little lucky in the games his algorithm played, with his human opponents falling into the same kind of tic-tac-toe traps I encountered as a kid.

When I returned home from camp that summer, I challenged everyone I knew to a game of tic-tac-toe, praying they wouldn’t place their first mark in the center so I could blow them away with my newfound skill. I was the game master, in control of everything that happened in the tiny world of X’s and O’s. I think that’s why people play games in the first place and why programmers strive to design AIs that expose the mysteries of complicated, enclosed systems. It’s fun to be in charge.

Jacob Roberts is a freelance writer in Portland, Oregon, and a former staff writer for Distillations.