the other only if the winner can also beat the loser in a majority vote with sincere voting, or there is a third alternative that can defeat the loser and itself be defeated by the winner in a majority vote with sincere voting (this result is due to Shepsle and Weingast). In our situation y can beat z on its own with sincere voting, and y can beat x through z because z defeats x in sincere voting and y in turn defeats z. Hence, there is an agenda that ensures y is reachable.11 Agenda-setting gives player 2 the ability to impose her most preferred outcome, and there is nothing (in this instance) that the others can do. For instance, players 1 and 3 cannot collude to defeat her obvious intent. To see this, suppose player 3 proposed a deal to player 1: if player 1 would vote sincerely for x in the first round, then player 3 would reward him by voting for x in the second round. Since x would then beat y in the first round, player 3's insincere vote in the second round would ensure that x defeats z as well. This would benefit both players: player 1 would get his most preferred outcome, and player 3 would avoid the worst outcome, y, and get his second-best. Unfortunately for player 3, he cannot make a credible promise to cast an insincere vote. If x defeats y in the first round, then player 3 can get his most preferred outcome by voting sincerely in (x, z) in the second round. Therefore, he would renege on his pledge, and player 1 has no incentive to believe him.
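To see the logic concretely, here is a small Python sketch of sophisticated voting over this agenda. The three preference orderings are assumptions chosen to match the text (player 1's favorite is x, player 2's is y, player 3 ranks y last), and they generate the majority cycle described above (x beats y, y beats z, z beats x).

```python
# Sophisticated voting over the two-round agenda "x vs y, winner vs z".
# The preference orders are an assumption consistent with the text's
# majority cycle; everything else follows by backward induction.
prefs = {
    1: ["x", "y", "z"],  # player 1's favorite is x
    2: ["y", "z", "x"],  # player 2's favorite is y
    3: ["z", "x", "y"],  # player 3 ranks y last, x second-best
}

def majority(a, b):
    """Winner of a sincere pairwise majority vote between a and b."""
    votes_a = sum(1 for p in prefs.values() if p.index(a) < p.index(b))
    return a if votes_a > len(prefs) / 2 else b

def sophisticated(a, b, c):
    """Outcome of the agenda 'a vs b, then winner vs c' with sophisticated
    voters: the last round is sincere, so the first-round vote is really a
    vote between the eventual outcomes majority(a, c) and majority(b, c)."""
    return majority(majority(a, c), majority(b, c))

print(majority(majority("x", "y"), "z"))  # sincere voting on this agenda: z
print(sophisticated("x", "y", "z"))       # sophisticated voting: y
```

With sincere voting the same agenda would elect z, which is exactly why player 1 abandons his sincere first-round vote for x.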
4.3.2 The Ultimatum Game
[Footnote 11: In our game, each of the three outcomes is possible with an appropriate agenda. Sophisticated voting does not reduce the chaos.]

In this game, player 1 has a continuum of actions available at the initial node, while player 2 has only two actions. (The continuum of actions, ranging from offering 0 to offering the entire pie, is represented by the dotted curve connecting the two extremes.) When player 1 makes some offer, player 2 can only accept or reject it. There is an infinite number of subgames following a history of length 1 (i.e., following a proposal by player 1). Each history is uniquely identified by the proposal, x. In all subgames with x < π, player 2's optimal action is to
accept, because doing so yields a strictly positive payoff, which is higher than the 0 she would get by rejecting. In the subgame following the history x = π, however, player 2 is indifferent between accepting and rejecting. So in a subgame perfect equilibrium, player 2's strategy either accepts all offers (including x = π) or accepts all offers x < π and rejects x = π.
Given these strategies, consider player 1's optimal strategy. We have to find player 1's optimal offer against every SPE strategy of player 2. If player 2 accepts all offers, then player 1's optimal offer is x = π because this yields the highest payoff. If player 2 rejects x = π but accepts all other offers, there is no optimal offer for player 1! To see this, suppose player 1 offered some x < π, which player 2 accepts. Because player 2 accepts all offers below π, player 1 can improve his payoff by offering some x′ with x < x′ < π, which player 2 will also accept but which yields player 1 a strictly better payoff.
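The nonexistence argument is easy to see numerically. A minimal sketch, normalizing the pie to π = 1 (an assumption for illustration only):

```python
# Why player 1 has no optimal offer when player 2 rejects x = pi: from any
# demand x < pi there is a strictly better one that is still accepted,
# e.g. the midpoint between x and pi. (pi = 1 is a normalization.)
pi = 1.0

def improve(x):
    """A strictly larger demand that is still below pi, hence still accepted."""
    return (x + pi) / 2

x = 0.9
for _ in range(5):
    x = improve(x)  # player 1's payoff rises every round...
# ...yet x never reaches pi, so no demand is optimal
```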
Therefore, the ultimatum game has a unique subgame perfect equilibrium, in which player 1 oﬀers x = π and player 2 accepts all oﬀers. The outcome is that player 1 gets to keep the entire pie, while player 2’s payoﬀ is zero.
This one-sided result arises for two reasons. First, player 2 is not allowed to make any counteroffers. If we relaxed this assumption, the SPE would be different. (In fact, in the next section we shall analyze a very general bargaining model.) Second, the reason player 1 does not have an optimal proposal when player 2 accepts all offers x < π but rejects x = π has to do with his being able always to do a little better by offering to keep slightly more. Because the pie is perfectly divisible, there is nothing to pin down the offers. However, making the pie discrete (e.g., by slicing it into n equal pieces and then bargaining over the number of pieces each player gets to keep) will change this as well. (You will do this in your homework.)
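As a preview of how discreteness restores existence, here is a hedged sketch (the function and setup are mine, not from the text, so treat it as an illustration rather than the homework solution): with the pie cut into n slices, each of player 2's two tie-breaking rules at an offer of zero slices now supports an SPE with a well-defined optimal proposal for player 1.

```python
# Discrete ultimatum game: the pie is n equal slices, player 1 offers
# player 2 a whole number of slices and keeps the rest. A sketch of the
# backward-induction logic only.

def spe(n, responder_accepts_zero):
    """Return (slices offered to player 2, player 1's payoff) in the SPE
    induced by player 2's tie-breaking rule at an offer of zero slices."""
    def accepts(offer):
        # Player 2 strictly prefers any positive offer to rejection (0);
        # at offer == 0 she is indifferent, so either rule is optimal.
        return offer > 0 or responder_accepts_zero
    best = max(range(n + 1), key=lambda offer: (n - offer) if accepts(offer) else 0)
    return best, n - best if accepts(best) else 0

print(spe(10, True))   # (0, 10): player 1 keeps everything
print(spe(10, False))  # (1, 9): player 1 gives up exactly one slice
```

Unlike the continuous case, both responder strategies now leave player 1 with an optimal proposal, so both profiles are subgame perfect.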
4.3.3 The Holdup Game
We now analyze a three-stage game. Before playing the Ultimatum Game from the previous section, player 2 can determine the size of the pie by exerting either a small effort, eS > 0, resulting in a small pie of size πS, or a large effort, eL > eS, resulting in a larger pie of size πL > πS.
Since player 2 hates exerting any effort, her payoff from obtaining a share of size x is x − e, where e is the amount of effort expended. The extensive form of this game is presented in Fig. 19 (p. 32).
We have already analyzed the Ultimatum Game, so we know that each subgame that follows player 2's effort choice has a unique SPE in which player 1 proposes x = π and player 2 accepts all offers. (Note the difference from the version above: here rejecting an offer yields player 2 a strictly negative payoff, −e, rather than 0.) So, in the subgame following eS, player 1 offers πS, and in the subgame following eL he offers πL. In both cases player 2 accepts these proposals, resulting in payoffs for her of −eS and −eL respectively. Given these SPE strategies, player 2's optimal action at the initial node is to expend little effort, eS, because −eS > −eL makes doing so strictly better.
We conclude that the SPE of the Holdup Game is as follows. Player 1’s strategy is (πS, πL ) and player 2’s strategy is (eS, Y, Y ), where Y means “accept all oﬀers.” The outcome of the game is that player 2 invests little eﬀort, eS, and player 1 obtains the entire small pie πS.
Note that this equilibrium does not depend on the values of eS, eL, πS, πL as long as eS < eL.
Even if πL is much larger than πS and eL is only slightly higher than eS, player 2 would still exert little effort in the SPE, although it would be better for both players if player 2 exerted eL (remember, only slightly larger than eS) and obtained a slice of the larger pie. The problem is that player 1 cannot credibly promise to give that slice to player 2. Once player 2 expends the effort, she can be "held up" for the entire pie by player 1.
This result persists even in similar games where the bargaining procedure yields a more equitable distribution. If player 2 must expend more effort to generate a larger pie, and if the procedure is such that some of this surplus pie goes to the other player, then for some values of player 2's cost of effort she will strictly prefer to exert little effort. Although there are many outcomes in which both players would be strictly better off if player 2 exerted more effort, these cannot be sustained in equilibrium because of player 1's incentives. In the example above, player 1 would have liked to be able to commit credibly to offering some of the extra pie to induce player 2 to exert the larger effort. Just like the problem with non-credible threats, the problem of non-credible promises means that this cannot happen in a subgame perfect equilibrium.
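A quick numeric check of the holdup logic (the parameter values below are illustrative assumptions; only eS < eL matters):

```python
# Backward induction in the Holdup Game: after either effort, the ultimatum
# subgame gives player 1 the whole pie, so player 2's continuation payoff
# from effort e is simply -e, and she compares -eS with -eL.

def holdup_spe(eS, eL, piS, piL):
    """Return (player 2's effort choice, player 1's payoff, player 2's payoff)."""
    u2 = {"small": -eS, "large": -eL}      # player 2 nets -effort either way
    choice = max(u2, key=u2.get)           # she picks the cheaper effort
    pie = piS if choice == "small" else piL
    return choice, pie, u2[choice]

# Even when the large pie dwarfs the small one, player 2 shirks:
print(holdup_spe(eS=0.1, eL=0.2, piS=1.0, piL=100.0))  # ('small', 1.0, -0.1)
```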
4.3.4 A Two-Stage Game with Several Static Equilibria
Consider the multi-stage game corresponding to two repetitions of the symmetric normal-form game depicted in Fig. 20 (p. 33). In the first stage of the game, the two players simultaneously choose among their actions and observe the outcome; in the second stage they play the static game again. The payoffs are simply the discounted sums of the stage payoffs. That is, let p_i^1 represent player i's payoff at stage 1 and p_i^2 his payoff at stage 2. Then player i's payoff from the multi-stage game is u_i = p_i^1 + δp_i^2, where δ ∈ (0, 1) is the discount factor.
If the game in Fig. 20 (p. 33) is played once, there are three Nash equilibria: two asymmetric ones in pure strategies, ⟨B, A⟩ and ⟨A, B⟩, and a symmetric one in mixed strategies with σ (A) = 3/7 and σ (B) = 4/7 for each player.
How do we find the MSNE? Suppose σ1 (C) > 0 in some MSNE. We have several cases to consider.
First, suppose that σ1 (B) > 0 as well; that is, player 1 puts positive weight on both B and C (and possibly A). Since he is willing to mix, it follows that 4σ2 (A) = 5σ2 (C) ⇒ σ2 (C) = (4/5)σ2 (A). There are two ways to satisfy this requirement. Suppose σ2 (C) = σ2 (A) = 0; but then player 2 plays B for sure and we obtain the pure-strategy equilibrium ⟨A, B⟩, contradicting σ1 (C) > 0. Suppose instead σ2 (C) > 0, which implies σ2 (A) > 0 too; that is, player 2 must be willing to mix between A and C (and possibly B). This now implies that 3σ1 (B) + 6σ1 (C) = 5σ1 (C) ⇒ 3σ1 (B) + σ1 (C) = 0, a contradiction because σ1 (C) > 0.
Therefore, if σ1 (C) > 0, then σ1 (B) = 0. Second, suppose that σ1 (A) > 0 as well; that is, player 1 puts positive weight on both A and C. Since he is willing to mix, it follows that 3σ2 (B) + 6σ2 (C) = 5σ2 (C) ⇒ 3σ2 (B) + σ2 (C) = 0, which implies σ2 (B) = σ2 (C) = 0, and hence σ2 (A) = 1; but then player 1 will not mix, and we have ⟨B, A⟩. The only remaining possibility is σ1 (C) = 1; but player 2's best response to C is A (she gets 6 rather than 5), and player 1's best response to A is B, not C, so this cannot be an equilibrium either. From all this, we conclude that σ1 (C) = 0 in any MSNE. By symmetry, σ2 (C) = 0 too. This reduces the game to the 2 × 2 variant in Fig. 21 (p. 34).
This is now very easy to deal with. Since player 1 is willing to mix, it follows that 3σ2 (B) = 4σ2 (A), and since σ2 (B) = 1 − σ2 (A), this gives us σ2 (A) = 3/7. Analogously, we obtain σ1 (A) = 3/7 and σ1 (B) = 4/7. The last thing we need to do is check that the players will not want to use C given these mixtures (we already know this from the argument above, but it does not hurt to recall the MSNE requirement). It suffices to check for player 1: if he plays C, his payoff will be 0 given player 2's strategy of playing only A and B with positive probability, which is strictly worse than the expected payoff of 12/7 from either A or B. Hence, we do have our MSNE. The payoffs in the three equilibria are (4, 3), (3, 4), and (12/7, 12/7) respectively.
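The three equilibria can also be verified mechanically. The payoff bimatrix below is a reconstruction from the indifference conditions used above (Fig. 20 itself is not reproduced here), so treat the numbers as an assumption:

```python
import itertools
from fractions import Fraction as F

# Payoffs reconstructed from the text's indifference conditions; rows are
# player 1's action, columns player 2's. The game is symmetric.
A = ["A", "B", "C"]
u1 = {("A", "A"): 0, ("A", "B"): 3, ("A", "C"): 6,
      ("B", "A"): 4, ("B", "B"): 0, ("B", "C"): 0,
      ("C", "A"): 0, ("C", "B"): 0, ("C", "C"): 5}
u2 = {(r, c): u1[(c, r)] for r, c in itertools.product(A, A)}

# Pure-strategy Nash equilibria: no profitable unilateral deviation.
pure_ne = [(r, c) for r, c in itertools.product(A, A)
           if u1[(r, c)] == max(u1[(d, c)] for d in A)
           and u2[(r, c)] == max(u2[(r, d)] for d in A)]
print(pure_ne)  # [('A', 'B'), ('B', 'A')]

# The symmetric mixed equilibrium: A with prob 3/7, B with prob 4/7.
sigma = {"A": F(3, 7), "B": F(4, 7), "C": F(0)}
payoff_to = {a: sum(sigma[b] * u1[(a, b)] for b in A) for a in A}
print(payoff_to)  # A and B both earn 12/7; C earns only 0
```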
The eﬃcient payoﬀ (5, 5) is not attainable in equilibrium if the game is played once.12
However, consider the following strategy proﬁle for the two-stage game:
• Player 1: play C at the ﬁrst stage. If the outcome is (C, C), play B at the second stage, otherwise play σ1 (A) = 3/7, σ1 (B) = 4/7 at the second stage;
• Player 2: play C at the first stage. If the outcome is (C, C), play A at the second stage, otherwise play σ2 (A) = 3/7, σ2 (B) = 4/7 at the second stage.
Is it subgame perfect? Since the strategies at the second stage specify playing Nash equilibrium profiles in all possible second-stage subgames, the strategies are optimal there. At the first stage, either player can deviate (by choosing A) and increase his first-stage payoff by 1, from 5 to 6. However, doing so results in playing the mixed-strategy Nash equilibrium at the second stage, which lowers the second-stage payoff to 12/7, from 4 for player 1 and from 3 for player 2.

[Footnote 12: An outcome is efficient if it is not possible to make some player better off without making the other one worse off. The outcomes with payoffs (0, 0) are all inefficient, as are the outcomes with payoffs (4, 3) and (3, 4). However, the outcomes (6, 0) and (0, 6) are also efficient.]

Thus,
player 1 will not deviate if 5 + 4δ ≥ 6 + (12/7)δ, that is, if δ ≥ 7/16, and player 2 will not deviate if 5 + 3δ ≥ 6 + (12/7)δ, that is, if δ ≥ 7/9.
We conclude that the strategy proﬁle speciﬁed above is a subgame perfect equilibrium if δ ≥ 7/9. In eﬀect, players can attain the non-Nash eﬃcient outcome at stage 1 by threatening to revert to the worst possible Nash equilibrium at stage 2. This technique will be very useful when analyzing inﬁnitely repeated games, where we shall see analogous results.
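The cutoff is easy to verify. Assuming, per the profile above, an on-path stage-1 payoff of 5, a best deviation payoff of 6, on-path stage-2 payoffs of 4 and 3, and a punishment payoff of 12/7:

```python
from fractions import Fraction as F

# No-deviation condition: 5 + delta * (on-path stage-2 payoff) must be at
# least 6 + delta * 12/7, the best deviation followed by the mixed NE.
def no_deviation(delta, stage2_onpath):
    return 5 + delta * stage2_onpath >= 6 + delta * F(12, 7)

# Rearranging gives delta >= 1 / (stage2_onpath - 12/7) for each player;
# the binding constraint is player 2's (stage-2 payoff 3, not 4).
threshold = max(F(1) / (4 - F(12, 7)), F(1) / (3 - F(12, 7)))
print(threshold)  # 7/9
```

Player 1's own constraint, δ ≥ 7/16, is slacker, so δ ≥ 7/9 makes the profile subgame perfect for both players.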
4.4 The Principle of Optimality
This is an important result that you will make heavy use of both for multi-stage games and for inﬁnitely repeated games that we shall look at next time. The principle states that to check whether a strategy proﬁle of a multi-stage game with observed actions is subgame perfect, it suﬃces to check whether there are any histories ht where some player i can proﬁt by deviating only from the actions prescribed by si (ht ) and conforming to si thereafter. In other words, for games with arbitrarily long (but ﬁnite) histories,13 it suﬃces to check if some player can proﬁt by deviating only at a single point in the game and then continuing to play his equilibrium strategy. That is, we do not have to check deviations that involve actions at several points in the game. You should be able to see how this simpliﬁes matters considerably.
The following theorem is variously called “The One-Shot (or One-Stage) Deviation Principle,” and is essentially the principle of optimality in dynamic programming. Since this is such a nice result and because it may not be obvious why it holds, we shall go through the proof.
The proof works as follows. Consider the last deviation in a sequence of multiple deviations: if it were profitable by itself, the one-shot deviation condition would be violated; if it is not profitable by itself, it can be dropped without loss, and the same argument applies to the now-shorter sequence of deviations.