In game theory, perhaps the most important category of simple games is the zero-sum game. It’s also one of those mathematical terms that is widely abused by the clueless – you constantly hear references to “zero-sum games” in all sorts of contexts, and they’re almost always wrong.
A zero-sum game is a game in which the players are competing for a fixed set of resources. Because the pool of resources is fixed, any gain by one player is necessarily offset by a loss by another player. The reason this is called zero-sum is that you can take any result of the game and “add it up” – the losses will always equal the wins, and so the sum of the wins and losses in the result of the game will always be 0.
For example, if you play poker with a group of friends, you’re playing a
zero-sum game. In each hand, what you can win is the money in the pot – and that
money was put there by you and the other players. There’s nothing for you to win
that wasn’t put there by another player. If you win something, it’s because at least one of the other players lost something.
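The bookkeeping is easy to check directly. Here’s a tiny sketch of a single hand – the names and amounts are made up for illustration:

```python
# A toy poker hand: three players each ante 10 into the pot, and one
# player wins the whole pot. (Names and amounts are made up.)
antes = {"alice": 10, "bob": 10, "carol": 10}

# Everyone starts out down by their ante...
net = {player: -ante for player, ante in antes.items()}

# ...and the winner collects the pot, which is just the sum of the antes.
net["alice"] += sum(antes.values())

print(net)                 # {'alice': 20, 'bob': -10, 'carol': -10}
print(sum(net.values()))   # 0 -- the wins exactly cancel the losses
```

However the hand plays out, the winner’s gain is exactly the other players’ losses, so the net is always zero.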
The simplest kind of zero-sum game is one where you’ve got two non-communicating players, a fixed payoff matrix, and simultaneous moves.
A payoff matrix is a tool for describing a certain kind of simple game. The idea is that if you’ve got two players, and each player has a fixed set of possible moves, then you can describe the game by a matrix. One dimension of the
matrix is a list of moves for one player; the other is the set of moves for the other player. Each cell contains a payoff: cell (x, y) contains the result if player 1 makes move x, and player 2 makes move y. For clarity, we often label one dimension with numbers and the other with letters, so that player 1’s move is always a letter, and player 2’s move is always a number. For example, here’s a simple payoff matrix:
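One way to write such a matrix down is as a nested dictionary in Python. The specific numbers below are illustrative values I’ve chosen to be consistent with the analysis in the text, not necessarily the original ones:

```python
# A payoff matrix as a nested dictionary. Rows (letters) are player 1's
# moves; columns (numbers) are player 2's moves. Positive entries mean
# player 1 pays player 2; negative entries mean player 2 pays player 1.
# These particular values are illustrative, chosen to match the
# analysis in the text.
payoff = {
    "A": {1: 10, 2: 20, 3: -10},
    "B": {1: -1, 2: -20, 3: 10},
    "C": {1: -5, 2: -3, 3: -2},
}

# Player 1 picks "B", player 2 picks 3: player 1 pays 10 to player 2.
print(payoff["B"][3])  # 10
```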
In this matrix, positive cell values indicate that player 1 pays
player 2; negative cell values indicate that player 2 pays player 1. So, if
player 1 picks move “B”, and player 2 picks move 3, then player 1 pays 10 to player 2.
The strategies in this kind of game are pretty simple. Since
the players don’t get to communicate, there’s no way of doing any
kind of deal-making. So each player simply looks at the cost-benefit
tradeoffs. For player 1, if they make move “A”, they could win or lose 10,
or they could lose 20. That’s two bad outcomes, one good one – and the good
outcome is small. If they pick “B”, they’ve got two good outcomes – one small, one large, and one bad outcome. And if they pick “C”, all of the outcomes are
good, but small. So for player 1, C guarantees that they’ll come out
ahead. Other choices have potentially larger payoffs, but with larger risks.
The best outcome comes from finding a way to maximize the potential wins while minimizing the potential losses. In this case, the potential wins aren’t that much larger for moves other than “C”, so player 1 would probably be wisest to pick “C”.
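This kind of cost-benefit reasoning is just the maximin rule: for each of your moves, assume the opponent’s worst-case response, and pick the move whose worst case is least bad. Here’s that rule applied to an illustrative matrix consistent with the analysis above (the numbers are my assumptions, not necessarily the original’s):

```python
# Player 1's "safe" reasoning is the maximin rule: for each move, assume
# the worst-case response, then pick the move with the best worst case.
# Matrix values are illustrative (positive = player 1 pays player 2).
payoff = {
    "A": {1: 10, 2: 20, 3: -10},
    "B": {1: -1, 2: -20, 3: 10},
    "C": {1: -5, 2: -3, 3: -2},
}

# The worst case for player 1 in each row is the largest payment.
worst_case = {row: max(cols.values()) for row, cols in payoff.items()}
print(worst_case)   # {'A': 20, 'B': 10, 'C': -2}

# Only "C" has a negative worst case: it guarantees player 1 comes out ahead.
safe_move = min(worst_case, key=worst_case.get)
print(safe_move)    # C
```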
On the other hand, look at it from player 2’s perspective. They can see that player 1 is likely to pick “C”. Assuming that, their best choice would be “3”. But player 1 knows that player 2 will want to minimize their losses by picking “3” – so player 1 could pick “A”, taking a risk in the hopes of winning 10 instead of 2.
You can see that even for a simple game, this can get complicated. It gets even worse if you consider multiple rounds.
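You can watch that complication directly by chasing best responses: each player’s best reply to the other’s anticipated move never settles down, which is exactly why probabilistic strategies become necessary. Again using illustrative numbers consistent with the analysis above:

```python
# Chasing best responses never settles down: this game has no
# pure-strategy equilibrium, which is why mixed (probabilistic)
# strategies are needed. Matrix values are illustrative
# (positive = player 1 pays player 2).
payoff = {
    "A": {1: 10, 2: 20, 3: -10},
    "B": {1: -1, 2: -20, 3: 10},
    "C": {1: -5, 2: -3, 3: -2},
}
cols = [1, 2, 3]

def best_col(row):   # player 2 maximizes what they collect
    return max(cols, key=lambda c: payoff[row][c])

def best_row(col):   # player 1 minimizes what they pay
    return min(payoff, key=lambda r: payoff[r][col])

chain = []
row = "C"            # start from player 1's "safe" move
for _ in range(6):
    col = best_col(row)
    chain.append((row, col))
    row = best_row(col)

print(chain)  # [('C', 3), ('A', 2), ('B', 3), ('A', 2), ('B', 3), ('A', 2)]
```

The best-response chain falls into a cycle instead of reaching a fixed point, so no single move is stable for either player.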
It turns out that there is a relatively simple linear-programming based
trick that allows you to define a probabilistic optimal strategy for
a zero-sum game. But before we get to that, we’ll need to define what
we mean by a strategy in mathematical terms – and to get to that, we need to
define what it means to solve a game by finding its equilibria.
This kind of zero-sum game comes up in a lot of real world situations, many of
which aren’t what we intuitively think of as games. For example, an election can
be modelled as a zero-sum game. If you think of the set of people who will vote as
the resources, then each candidate can only gain votes by taking votes away from another candidate.
This is, of course, an oversimplification, and a typical example of how the
idea of zero-sum games is commonly misused. An election isn’t really a
zero-sum game, because the pool of people who vote isn’t fixed. The current
primary election cycle in the US is a great example of this: one of the
interesting things that’s been going on is that people who haven’t voted before
in primaries are registering and voting. Barack Obama has been the main beneficiary of this: by drawing in new voters, he has gained votes without taking any away from Hillary Clinton.
The key thing that defines a zero-sum game is the fixed pool of resources. If
there’s any possible input that increases the resource pool, then it’s not a
zero-sum game. Elections aren’t zero-sum, because the pool of voters isn’t a
specific fixed quantity: the number of people who vote can increase or decrease. A candidate can gain votes without taking any away from the voters supporting another candidate, and a candidate can lose votes without adding to the set of people voting for another candidate.
We also frequently hear about zero-sum games in the context of economics
and financial markets. There are many things that can be
represented as zero-sum games in economics. But very frequently, when you
hear someone talking about something as a zero-sum game, they’re wrong: the economy as a whole can shrink and grow. In a shrinking economy, wealth
can be lost by many people without being gained by anyone – it can, effectively,
disappear. In a growing economy, money can be made by many people without taking
it away from anyone – wealth can be created. It’s not zero-sum: the pool of
resources (wealth) is not constant.