Solving Tic-Tac-Toe: Game Tree Basics

[Figure: the game tree for tic-tac-toe, assuming that X goes first.]

Moving on from simple zero-sum games, there are a bunch of directions in which
we can go. So far, the games we’ve looked at are very restrictive. Beyond the
zero-sum property, they’re built on a set of fundamental properties which ultimately
reduce to the idea that no player ever has an information advantage over any other
player: the complete payoff matrix is known by all players; no player gets
to see the other player’s strategy before selecting their own; and so on.

Moving on to more interesting games, we can relax those assumptions, and allow information to be hidden. Perhaps each player can see a different part of the
payoff matrix. Perhaps they take turns, so that one player gets to see the other’s
strategy before selecting his own. Perhaps the game isn’t zero-sum.

Non-zero-sum games turn out to be disappointing from a game theory point of
view. Given a suitable set of restrictions, you can convert a non-zero-sum game to a
zero-sum game with an additional player. In the cases where you can’t do that,
you’re pretty much stuck – the mathematical tools that work well for analyzing
zero-sum games often simply don’t work once you relax the zero-sum requirement.

The more interesting ways of exploring different realms of games come when you
allow things to get more complex. This comes about when you allow a player’s strategy
selection to alter the game. This generally takes place in a turn-taking
game, where each player’s strategy selection alters the game for the other player. A
simple example of this is the game of tic-tac-toe. The set of strategies available
to a given player at any point in time is the set of open squares on the board.
Each time a player makes a move, the game is altered for the other player.

This makes things much more interesting. The easiest way to think of it is
that now, instead of a simple matrix for the game, we end up with a tree. Each move
that a player can make creates a new game for the other player. By making each game position a tree node, and adding child nodes for each position that can follow it, you can build a tree describing the complete set of possible game positions, and thus the complete set of ways that the game could play out.
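
To make that concrete, here’s a minimal sketch of what such a node might look like in code. It’s written in Java, and the names and layout are purely illustrative; nothing about the tree itself depends on this particular representation.

    import java.util.ArrayList;
    import java.util.List;

    // A minimal game-tree node for tic-tac-toe: a board position, plus one
    // child node for every position reachable from it in a single move.
    class GameTreeNode {
        final char[] board;  // 9 squares, row-major: 'X', 'O', or ' ' for empty
        final char toMove;   // whose turn it is at this position
        final List<GameTreeNode> children = new ArrayList<GameTreeNode>();

        GameTreeNode(char[] board, char toMove) {
            this.board = board;
            this.toMove = toMove;
        }

        // Add one child per open square: the same board with this player's
        // mark placed there, and the turn handed to the other player.
        void expand() {
            for (int sq = 0; sq < 9; sq++) {
                if (board[sq] != ' ')
                    continue;
                char[] next = board.clone();
                next[sq] = toMove;
                children.add(new GameTreeNode(next, toMove == 'X' ? 'O' : 'X'));
            }
        }
    }

Starting from a node holding the empty board and expanding every node recursively gives you exactly the kind of tree shown in the figure.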

As usual, it’s easiest to see this with an example. At the top of this post is the game tree for tic-tac-toe, assuming that “X” goes first.

There are really three moves that X can make initially – she can play a corner,
a center-edge, or the center. Which corner or edge she chooses is irrelevant – on an empty tic-tac-toe game-board, due to symmetries, the corners/edges are equivalent. If X plays the center, then the other player can choose either an edge or a corner. If X plays an edge, the O player can play a corner adjacent to X’s move; a corner not adjacent; an adjacent edge; the opposite edge; or the center. And so on.

How can you solve a game like this?

Ultimately, it comes down to utility functions. You can build up a utility
function describing the values of each possible move. This requires some of what I
talked about in my last post about lotteries: most moves produce not a specific
result (win or lose), but a new game. So to compute the utility value of a
particular strategy, you need to combine the utility values of the potential
subsequent strategies resulting from that move.

Computing the utilities in tic-tac-toe is easy: the utility of any move is the
probability of your winning the game if you make that move. You determine that by
analyzing the possible moves available to you. First, you can prune the tree: any
strategy that doesn’t include a potential path for you to win, you can just
eliminate from consideration: it has a utility of 0. For other moves, you look at
the potential paths: if move A leads into a subtree where 72% of the paths lead to a
leaf in which you win, then the utility of A is 0.72. Each turn, the best
strategy is the one with the highest utility; moves with equal utility are
equivalent.
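
Here’s a rough sketch of that calculation, building on the node sketch above: to score a move, count all of the leaves in the subtree beneath it, and take the utility to be the fraction of those leaves in which you win. The winner and isFull helpers are just the obvious checks; the class name is arbitrary, and the whole thing is meant as an illustration of the idea rather than as the way you’d actually write a tic-tac-toe player.

    // A grab-bag class for the sketches in this post; the later code examples
    // can all be thought of as living here alongside these helpers.
    class TreeUtility {

        // Utility of a position for `player`, computed as described above:
        // the fraction of leaves under this node in which `player` wins.
        static double utility(GameTreeNode node, char player) {
            long[] winsAndTotal = countLeaves(node, player);
            return winsAndTotal[1] == 0 ? 0.0
                    : (double) winsAndTotal[0] / winsAndTotal[1];
        }

        // Returns { leaves won by player, total leaves } under this node.
        static long[] countLeaves(GameTreeNode node, char player) {
            char w = winner(node.board);
            if (w != ' ' || isFull(node.board))
                return new long[] { w == player ? 1 : 0, 1 }; // finished game: one leaf
            if (node.children.isEmpty())
                node.expand();
            long wins = 0, total = 0;
            for (GameTreeNode child : node.children) {
                long[] c = countLeaves(child, player);
                wins += c[0];
                total += c[1];
            }
            return new long[] { wins, total };
        }

        // 'X', 'O', or ' ' if nobody has three in a row yet.
        static char winner(char[] b) {
            int[][] lines = { { 0, 1, 2 }, { 3, 4, 5 }, { 6, 7, 8 },
                              { 0, 3, 6 }, { 1, 4, 7 }, { 2, 5, 8 },
                              { 0, 4, 8 }, { 2, 4, 6 } };
            for (int[] l : lines)
                if (b[l[0]] != ' ' && b[l[0]] == b[l[1]] && b[l[1]] == b[l[2]])
                    return b[l[0]];
            return ' ';
        }

        static boolean isFull(char[] b) {
            for (char c : b)
                if (c == ' ')
                    return false;
            return true;
        }
    }

Picking a move is then just a matter of expanding the current node and choosing the child with the highest utility.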

The problem with this approach is that game trees grow incredibly quickly. Even with all symmetries eliminated, the game tree for tic-tac-toe has approximately 25,000 nodes! That means that for a human, the complete game tree isn’t really understandable, even for a trivial game like tic-tac-toe. When
you start looking at more interesting games, you get some really astonishingly large game trees. Checkers has a game tree with around 5×10^20 nodes, after eliminating all symmetries. For Go, the exact size of the tree is unknown – but it’s greater than 10^100!

Obviously, we can’t compute the whole game tree for games like that. And without the full tree, we can’t calculate a precise utility for every possible move. So we have
to compute approximate utilities. We can compute approximate utility
values using a variety of techniques, including heuristics, prunings, board valuations, and partial trees.

Heuristics is the use of what you could call intelligent guesses. In most games, there
are moves or positions that are usually sensible. In a game of checkers, if you have an opportunity to either jump and capture one of your opponent’s
pieces, or double-jump and capture two, it’s generally smarter to capture two. Not
always – there are cases where the single jump (and subsequent sacrifice of one of your pieces) is the best move. But most of the time, the double-capture is
preferable. In the absence of any other information, you’d assign a higher
estimated utility to the double-jump. Heuristics also covers some things
based on your knowledge of the other player. People who’ve played chess with me know that I tend to over-use my queen and under-use my knights; if you use that
knowledge, you can make intelligent guesses that certain paths in the tree
are more likely than others, and use that to weight the likely paths higher
in your utility computation.
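
One way to fold that kind of knowledge into the numbers, sketched below using the pieces from the earlier examples: instead of treating every opponent reply as equally likely, weight each reply by how likely you think that particular opponent is to play it, and take an expected utility. The replyProbability model here is a deliberately trivial stand-in (it treats all replies as equally likely); in practice, it’s exactly the place where your knowledge of the other player would go. These methods would sit in the TreeUtility sketch above.

    // Expected utility of the position after my move, weighting each opponent
    // reply by how likely we believe that opponent is to actually play it.
    static double weightedUtility(GameTreeNode afterMyMove, char me) {
        if (afterMyMove.children.isEmpty())
            afterMyMove.expand();
        if (afterMyMove.children.isEmpty())
            return utility(afterMyMove, me); // no replies: the game is already over
        double expected = 0.0;
        for (GameTreeNode reply : afterMyMove.children)
            expected += replyProbability(afterMyMove, reply) * utility(reply, me);
        return expected;
    }

    // Stand-in opponent model: every reply equally likely. Replace this with
    // whatever you actually know about how the other player tends to play.
    static double replyProbability(GameTreeNode position, GameTreeNode reply) {
        return 1.0 / position.children.size();
    }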

Prunings can be thought of as a sort of heuristic. The idea is that
if you look at the game tree, there are some parts of the tree that lead
into undesirable situations – that is, situations where your probability of
winning is reduced. If it appears that a particular subtree of the game
isn’t likely to work out well for you, you just eliminate that entire
subtree. It’s possible that if you searched that subtree, you might find
a path that would let you win. But the odds of finding a great winning strategy
are small enough that you just cut the subtree away, and don’t search it.
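
Here’s a rough sketch of pruning, again reusing the earlier pieces: take a cheap estimate of each subtree before committing to a full search of it, and simply skip any subtree whose estimate falls below some cutoff. Both the cutoff and the estimator are things you’d have to choose for your game; the ones below are deliberately crude, just to show the shape of the idea.

    // Pick the best move, but skip (prune) any child whose cheap estimate falls
    // below the cutoff instead of searching its subtree in full.
    static GameTreeNode bestMoveWithPruning(GameTreeNode node, char me, double cutoff) {
        if (node.children.isEmpty())
            node.expand();
        GameTreeNode best = null;
        double bestUtility = -1.0;
        for (GameTreeNode child : node.children) {
            if (quickEstimate(child, me) < cutoff)
                continue;                  // looks hopeless: don't search it at all
            double u = utility(child, me); // full search of the survivors
            if (u > bestUtility) {
                bestUtility = u;
                best = child;
            }
        }
        return best; // null if everything was pruned away
    }

    // Crude one-ply estimator: if the opponent can win immediately from this
    // position, call it hopeless; otherwise call it neutral.
    static double quickEstimate(GameTreeNode child, char me) {
        if (child.children.isEmpty())
            child.expand();
        for (GameTreeNode reply : child.children) {
            char w = winner(reply.board);
            if (w != ' ' && w != me)
                return 0.0;
        }
        return 0.5;
    }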

Board valuations are ways of looking at a given game situation, and assigning
it a value. In chess, a board where your opponent has his king in an unguarded position, you’ve got your queen out on the board where it’s not vulnerable and it’s available to move, and your king is in a castled defensive position is better
than the same position where your queen was captured, or where your king is undefended. You don’t need to look further into the game tree to make a strong guess about which of those positions is preferable. This comes down to assigning a valuation
to a board position without looking further into the tree to compute that
valuation. It’s based solely on immediate, observable properties of the state
of the game when you make the valuation.
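
A chess valuation is far too involved to sketch here, but the same idea works for tic-tac-toe. Here’s one simple static valuation that looks only at the current position: count the lines still open to you, minus the lines still open to your opponent. This particular scoring rule is just a common illustration, not something the argument above depends on.

    // Static valuation of a position for `me`: lines I could still complete,
    // minus lines my opponent could still complete. No lookahead at all.
    static int valuation(char[] board, char me) {
        char opp = (me == 'X') ? 'O' : 'X';
        int[][] lines = { { 0, 1, 2 }, { 3, 4, 5 }, { 6, 7, 8 },
                          { 0, 3, 6 }, { 1, 4, 7 }, { 2, 5, 8 },
                          { 0, 4, 8 }, { 2, 4, 6 } };
        int score = 0;
        for (int[] l : lines) {
            boolean openForMe = true, openForOpp = true;
            for (int sq : l) {
                if (board[sq] == opp) openForMe = false;
                if (board[sq] == me)  openForOpp = false;
            }
            if (openForMe)  score++;
            if (openForOpp) score--;
        }
        return score;
    }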

Board valuations are particularly important when combined with partial trees. The idea of a partial tree is that the full game tree is impossibly large. So you only consider a small part of it. This is very common in chess-playing programs. The
program computes a partial game tree: for example, it may compute 9 levels of the game tree from the current position, which covers the next 9 moves. The values at the leaves of that partial tree are computed by board valuation; the values at internal nodes are computed by combining the values of their children.
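
Here’s a minimal sketch of how those two pieces fit together, using the node and valuation sketches above: search a fixed number of plies, evaluate the positions at the frontier with the static valuation, and at each internal node take the value that’s best for whoever is to move there. Taking the best-for-the-mover value is one common way of combining children’s values; it isn’t the only choice.

    // Depth-limited evaluation: search `depth` plies of the tree, then fall
    // back on the static valuation. Won and lost games are scored outside the
    // range valuation() can produce, so they always dominate.
    static int partialTreeValue(GameTreeNode node, char me, int depth) {
        char w = winner(node.board);
        if (w == me)
            return +100;
        if (w != ' ')
            return -100;
        if (depth == 0 || isFull(node.board))
            return valuation(node.board, me); // frontier: static valuation only
        if (node.children.isEmpty())
            node.expand();
        boolean myTurn = (node.toMove == me);
        int best = myTurn ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (GameTreeNode child : node.children) {
            int v = partialTreeValue(child, me, depth - 1);
            best = myTurn ? Math.max(best, v) : Math.min(best, v); // best for the mover
        }
        return best;
    }

With the depth set to the number of empty squares this degenerates into a full search of the tree; with a small depth, it’s the partial-tree approach that chess programs actually use.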

So where do we go from here in our exploration of game theory? There’s more
to be said about game trees. For example, for most two-player alternating-move games, there’s a very particular structure that can efficiently represent the game tree, and provide opportunities for analysis and pruning, called an and-or tree. There are also a set of optimizations based on the fact that in a typical game tree, there are multiple ways that a given board position can be reached. We can compact the game tree into a graph by taking advantage of that.

Thoughts on “Solving Tic-Tac-Toe: Game Tree Basics”

  1. Steve

    “Non-zero sum games turn out to be disappointing from a game theory point of view. Given a suitable set of restrictions, you can convert a non-zero-sum game to a zero-sum game with an additional player. In the cases where you can’t do that, you’re pretty much stuck – the mathematical tools that work well for analyzing zero-sum games often simply don’t work once you relax the zero-sum requirement.”
    Economics uses a lot of game theory, and most of it relates to non-zero sum games. You’re right that you must use different rules – like Nash Equilibrium. You also often end up with non-uniqueness. Plus, it’s less clear that the assumptions of Nash Equilibria are generically a good representation of real human behavior. But, a lot of productive work has been done using game theory on non-zero sum games, involving a fair amount of sophisticated mathematical techniques.

  2. Leahn Novash

    I’d really like to see an implementation of a tree for tic tac toe. I can’t really figure out how to code one.

  3. Markk

    “I’d really like to see an implementation of a tree for tic tac toe. I can’t really figure out how to code one.”
    You won’t see one coded for Tic Tac Toe because you don’t need a full tree to make an optimal player. But naively it’s just a tree. Each node and leaf has a game position associated with it. The leaves are those positions with a win for a player or a draw.
    So the root node has nine children, each of those has 8 children, each of those has 7, each of those has 6, each of those has 5, and then it gets messy because you start to get possible victories. I haven’t done it but less than 100K positions would seem about right.
    Actually implementing this for TTT you could just use any regular tree library thing you have got. In this case you can store the entire game position in one 32 bit integer (that is, a normal int on today’s PCs). For more complex games there are all sorts of options for compressing. Depending on how you are using the utility function you can just store deltas (the change from the parent position) instead of the full state, but then you have the problem of telling if two positions are the same. You may not care.
    This is an important area though. When checkers was “solved” recently the implementors seemed to be very proud of their compression method for trees.

  4. Star Trader

    A quick side track on board valuation:
    The problem of how to assign a value to a board position is one that is well suited to neural networks. The current best backgammon-playing program uses this technique with great success.
    One of my interests is in huge state deterministic zero-sum games. I find Go interesting, but truly enjoy games like SimCity (which has some non-determinism but not much) with a huge variation of possible states and emergent behavior.
    I’m very interested in continuing to read your game theory articles.
    StarTrader
    BTW: Do you also favor material over position in chess?

  5. GrayGaffer

    tic-tac-toe in particular has received a lot of attention.
    “I’d really like to see an implementation of a tree for tic tac toe. I can’t really figure out how to code one.”
    Not quite a tree, but:
    There is the famous (at least in the UK) ‘Matchbox’ game play system by Donald Michie in 1961 (described in Martin Gardner’s “The Colossal Book of Mathematics”), which is also described and implemented as a computer simulation at http://www.delphiforfun.org/Programs/tic_tac_toc_machine.htm
    I have also implemented a heuristically driven Smalltalk version of a 4x4x4 3D version of the game called ‘Cubits’, which I first got as a very nice table-top game in 1975 (source code available as a Digitalk Smalltalk V applet if anybody is interested). As one might expect, in standard play the winner is the first to get a line of 4 in any direction including diagonals. But my brother and I found a much more engrossing and longer-playing variation on the game: play until all empty positions have been filled, and the winner is the one with the most runs of 4.

  6. Paul Murray

    “if move A leads into a subtree where 72% of the paths lead to a leaf in which you win, then the utility of A is 0.72.”
    No … if any of the moves available to the next player are a win for him, then that move is a loss for you. If all of the moves available to the next player are a loss for that player, then it’s a win for you. TicTackToe is uninteresting because there’s no percentage utility about it.

  7. Paul Murray

    “I’d really like to see an implementation of a tree for tic tac toe. I can’t really figure out how to code one.”
    I did that in Microsoft Access, a few years back when I was an MS Access contractor.
    I also did a tree for that game that Edward DeBono invented. He claimed, of course, that his game was revolutionary and amazing, so I generated the tree and printed it out in such a way that you could follow the printout to get a guaranteed win. Bleh.

  8. Paul Murray

    With my compliments. Let’s see if this pastes in OK!!
    package tmp.ttt;

    import java.util.*;

    enum Result {
        WINN, LOSS, DRAW;
    }

    public class TTT {
        public static void main(String[] av) {
            new TTT().go();
        }

        // keep track of anything already calculated. Needless to say, we only
        // keep the canonical numbers.
        Map<Integer, Result> knownResults = new HashMap<Integer, Result>();
        Map<Integer, Set<Integer>> knownWin = new HashMap<Integer, Set<Integer>>();
        Map<Integer, Set<Integer>> knownLoss = new HashMap<Integer, Set<Integer>>();
        Map<Integer, Set<Integer>> knownDraw = new HashMap<Integer, Set<Integer>>();

        /**
         * I will model the board as a 32 bit integer. One 16-bit half holds "my" moves and the
         * other holds "its" moves; the halves swap at every ply, so the low half always belongs
         * to the player who has just moved.
         */
        void go() {
            calculate(0); // empty board.
            for (int board : new TreeSet<Integer>(knownResults.keySet())) {
                System.out.print(toString(board));
                System.out.print(' ');
                System.out.print(knownResults.get(board));
                System.out.print(".");
                if (knownWin.containsKey(board)) {
                    System.out.print(" WINNING MOVES:");
                    for (int move : knownWin.get(board)) {
                        System.out.print(' ');
                        System.out.print(toString(move));
                    }
                    System.out.print(".");
                }
                if (knownDraw.containsKey(board)) {
                    System.out.print(" DRAWING MOVES:");
                    for (int move : knownDraw.get(board)) {
                        System.out.print(' ');
                        System.out.print(toString(move));
                    }
                    System.out.print(".");
                }
                if (knownLoss.containsKey(board)) {
                    System.out.print(" LOSING MOVES:");
                    for (int move : knownLoss.get(board)) {
                        System.out.print(' ');
                        System.out.print(toString(move));
                    }
                    System.out.print(".");
                }
                System.out.println();
            }
        }
        Result calculate(final int board) {
            if (knownResults.containsKey(board))
                return knownResults.get(board);
            // ok! we don't know if this specified board is a win, loss, or a draw.
            // at this point, the other player has just moved; their pieces are the
            // low nine bits. If they have completed a line, this board is a loss.
            if (((board & 0007) == 0007) || ((board & 0070) == 0070) || ((board & 0700) == 0700)
                    || ((board & 0111) == 0111) || ((board & 0222) == 0222) || ((board & 0444) == 0444)
                    || ((board & 0124) == 0124) || ((board & 0421) == 0421)) {
                knownResults.put(board, Result.LOSS);
                return Result.LOSS;
            }
            boolean allLoss = true;
            boolean anyWin = false;
            boolean nomoves = true;
            int mine = (board >>> 16) & 0x1FF;  // pieces of the player to move
            int theirs = board & 0x1FF;         // pieces of the player who just moved
            for (int move = 0; move < 9; move++) {
                int bit = 1 << move;
                if (((mine | theirs) & bit) != 0)
                    continue; // square already taken
                nomoves = false;
                // place our mark, then swap the halves so that the child's low half
                // again holds the pieces of the player who has just moved.
                int child = toCanonical((mine | bit) | (theirs << 16));
                Result r = calculate(child);
                if (r == Result.LOSS) {
                    anyWin = true; // a loss for the opponent is a win for us
                    noteWinningMove(board, child);
                } else if (r == Result.WINN) {
                    noteLosingMove(board, child);
                } else {
                    noteDrawingMove(board, child);
                }
                if (r != Result.WINN)
                    allLoss = false;
            }
            Result result = nomoves ? Result.DRAW // full board, no line: a draw
                    : anyWin ? Result.WINN        // some move forces the opponent to lose
                    : allLoss ? Result.LOSS       // every move lets the opponent win
                    : Result.DRAW;
            knownResults.put(board, result);
            return result;
        }

        void noteLosingMove(int a, int b) {
            if (!knownLoss.containsKey(a))
                knownLoss.put(a, new HashSet<Integer>());
            knownLoss.get(a).add(toCanonical(b));
        }
        void noteDrawingMove(int a, int b) {
            if (!knownDraw.containsKey(a))
                knownDraw.put(a, new HashSet<Integer>());
            knownDraw.get(a).add(toCanonical(b));
        }

        void noteWinningMove(int a, int b) {
            if (!knownWin.containsKey(a))
                knownWin.put(a, new HashSet<Integer>());
            knownWin.get(a).add(toCanonical(b));
        }
        /**
         * each possible board has a "canonical" orientation. This zaps rotations and reflections
         */
        int toCanonical(int board) {
            int lowestform = board;
            int currentform = board;
            for (int flip = 0; flip < 2; flip++) {
                for (int rot = 0; rot < 4; rot++) {
                    currentform = rotate(currentform);
                    if (currentform < lowestform)
                        lowestform = currentform;
                }
                currentform = reflect(currentform);
            }
            return lowestform;
        }

        // rotate the board 90 degrees; the 0x00010001 mask moves one square of
        // both players' halves at the same time.
        int rotate(int board) {
            int[] dest = { 2, 5, 8, 1, 4, 7, 0, 3, 6 };
            int out = 0;
            for (int sq = 0; sq < 9; sq++)
                out |= ((board >>> sq) & 0x00010001) << dest[sq];
            return out;
        }

        // mirror the board left-to-right, using the same trick.
        int reflect(int board) {
            int[] dest = { 2, 1, 0, 5, 4, 3, 8, 7, 6 };
            int out = 0;
            for (int sq = 0; sq < 9; sq++)
                out |= ((board >>> sq) & 0x00010001) << dest[sq];
            return out;
        }

        // print the nine squares: 'X' for the player who has just moved (low half),
        // 'O' for the player to move (high half), '-' for an empty square.
        String toString(int board) {
            StringBuilder sb = new StringBuilder();
            for (int sq = 0; sq < 9; sq++) {
                if ((board & (1 << sq)) != 0)
                    sb.append('X');
                else if ((board & (1 << (sq + 16))) != 0)
                    sb.append('O');
                else
                    sb.append('-');
            }
            return sb.toString();
        }
    }

  9. Nick Johnson

    Even with all symmetries eliminated, the game tree for tic-tac-toe has approximately 25,000 nodes!

    Are you sure about that? We can get an upper bound for the number of boards by assuming any of the 9 squares can be either X, O, or empty, which gives us 3^9 = 19683. Obviously, many of those aren’t real boards, and it’s not accounting for symmetry, so the real number should be a lot less.

  10. MarkW

    Nick Johnson: I guess that any board that can be reached by multiple routes counts as more than one node. For example this board:
    XO.
    .X.
    ..O
    Can be reached in 4 different ways (admittedly, some of them not realistic in gameplay)

  11. MarkW

    Eek. That board didn’t come out too well. I hope you can see what I mean though. Like this:
    XO_
    _X_
    __O

  12. Eric Lund

    Nick: MarkW is correct that the order of play, not the position, is what matters. Thus the correct simplest upper bound for the number of trees would be 9!, or 362880. As has been pointed out above, that tally neglects symmetries as well as the fact that certain positions with 5, 6, 7, or 8 squares filled are a win for one player or the other. We can refine that number further based on obvious symmetries: since there are really only 12 distinct positions (2+5+5) after two moves rather than 72 (9*8), that reduces the upper bound to 60480. Again, this does not take into account symmetries of later moves (e.g., if one of the first two moves filled the center square there are only 4 distinct moves, not 7, at move 3) or the fact that some positions produce a win with fewer than 9 squares filled.

  13. Mark C. Chu-Carroll

    Nick (#10):
    As other folks have pointed out, if you’re doing a game tree, you’re counting paths, not positions. The best way of thinking of it is that while the nodes in the tree are drawn as board positions, each node really contains the sequence of moves that led to that board position. So, for example, a board position like:

    X – O
    O X –
    – – –

    That could be reached by the move sequence [x@(1,1), o@(1,3),
    x@(2,2), o@(2,1)], or [x@(1,1), o@(2,1), x@(2,2), o@(1,3)]. The tree nodes would actually be labelled by those move sequences. But for drawing the tree, since you can see the path leading to a node, it’s easier to draw the board positions on the node.
    If you were really building a representation of a game tree, you wouldn’t implement it that way – but no one would really implement tic-tac-toe as a game tree. If you study the TTT game tree and its utility function, you can collapse it down to virtually nothing. The full tree is interesting as an understandable abstraction, not as a practical implementation.

  14. Nick Johnson

    As a result of half an hour of wasted time, I can now tell you there are exactly 849 valid Tic Tac Toe boards (accounting for symmetry). I also have a GraphViz generated graph of the game tree – unfortunately, it’s nearly totally unreadable. 🙂

  15. Nick Johnson

    Mark: Why would you store the history of a position as part of the node in any game? In chess, previous moves can affect the game somewhat (pawn moves for en passant, castling rules, and repetition), but that’s more compactly represented by just storing the variables that affect it as part of the node, rather than the entire history.
    When writing AIs for games such as chess, though, you can see a huge improvement in performance by caching previously seen game states and their scores. Making the entire set of previous states part of that would eliminate the benefit – and I don’t see what you’d gain.

  16. Mark C. Chu-Carroll

    Nick:
    I’m not talking about practical implementations. I wouldn’t implement TTT as a game-tree at all. And if I were implementing it as anything like a game tree, I would do it very differently. This is an expository example of a complete theoretical description of a game. In a complete description, the sequence of moves that got you to a given position is relevant – for some games, it’s absolutely critical information.
    For a good model of the game – not a model for a game-player trying to win, but a model to understand the game – you want every bit of information.
    For example, suppose I want to look at a game and judge the quality of a player.

    X O X
    – X O
    O – –

    Here’s two different scenarios for how to get to this position:
    (1) [x@(2,2), y@(1,2), x@(1,3), y@(3,1), x@(1,1), y@(2,3)]
    (2) [x@(3,1), y@(2,3), x@(1,1), y@(1,2), x@(2,2), y@(3,1)]
    Y is playing better in scenario 2 than in scenario one, despite the fact that he’s winding up in a losing position in both cases – Y’s last move in scenario one is stupid. Y pretty much lost in scenario 2 by making a less-than-brilliant first move, but he never did anything as dumb as completely ignoring a winning threat from player 1.
    Again, I’m not talking about how to implement a game – but how to describe it. And a description that tells you how you got to a position is more informative than a description that doesn’t.

  17. Nick Johnson

    Ah. Fair enough, I see your point now.
    I’ve only approached this (games in general, not tic-tac-toe in particular) from the perspective of writing AIs or ‘solving’, rather than a more game-theoretic point of view.

  18. TimothyAWiseman

    Any recommendations for a more thorough introduction to game theory? My interest is piqued but I don’t really know the best place to go from here.

  19. Mark C. Chu-Carroll

    Tim (#20):
    The Compleat Strategist is nice. It’s a bit shallow at times, but it’s written in an engaging, easy to follow style. It’s a good introduction.
    For some more interesting stuff which is a bit more distant from pure game theory, but absolutely amazing, I strongly recommend
    On Numbers and Games by John Conway. Conway was devising a structure for doing game-theoretic analysis of the end-game in Go, and ended up creating the Surreal numbers, which are absolutely fascinating. In the book, he shows how to construct the surreals, takes a side-track into Nimbers (which are fundamental for the analysis of many kinds of games), and then gets into a full-blown description of his game concept. It’s one of my favorite math texts ever. It’s written in a very careful, thorough manner, but still manages to be engaging and amusing.

  20. Paul Murray

    My apologies for posting that code. I also am working on a graphviz tree, and it’s also unreadable at this point. When I get home, I intend to prune the tree to only those positions that you can get if neither side makes a losing move. With luck, it will be usable. This time – I’ll put it up on the web somewhere and leave a url.

