Behavioral game theory

Behavioral game theory analyzes interactive strategic decisions and behavior using the methods of game theory,[1] experimental economics, and experimental psychology. Experiments include testing deviations from typical simplifications of economic theory such as the independence axiom[2] and neglect of altruism,[3] fairness,[4] and framing effects.[5] As a research program, the subject is a development of the last three decades.[6]

Traditional game theory focuses on the mathematical structure of equilibria, and tends to use basic rational choice involving utility maximization. In contrast, behavioral game theory focuses on how actual behavior tends to deviate from standard predictions: how can we explain and model those deviations, and how can we make better predictions using more accurate models?[7] Choices studied in behavioral game theory are not always rational and do not always represent the utility maximizing choice.[8]

Behavioral game theory uses laboratory and field experiments, as well as theoretical and computational modeling.[8] Recently, methods from machine learning have been applied in work at the intersection of economics, psychology, and computer science to improve both prediction and understanding of behavior in games.[9][10]

History

Behavioral game theory began with the work of Allais in 1953 and Ellsberg in 1961, who discovered the Allais paradox and the Ellsberg paradox, respectively.[7] Both paradoxes show that choices made by participants in a game do not reflect the benefit they expect to receive from making those choices. In the 1970s the work of Vernon Smith showed that economic markets could be examined experimentally rather than only theoretically.[7] At the same time, several economists conducted experiments that produced alternatives to traditional decision-making models, such as regret theory, prospect theory, and hyperbolic discounting.[7] These discoveries showed that actual decision makers weigh many factors when making choices. For example, a person may seek to minimize the regret they anticipate feeling after a decision and compare their options accordingly. Because factors such as regret had not previously been examined by traditional economic theory, these findings fueled further research.

Beginning in the 1980s, experimenters started examining the conditions that cause divergence from rational choice. Ultimatum and bargaining games examined the effect of emotions on predictions of opponent behavior. One of the best-known examples of an ultimatum game is the television show Deal or No Deal, in which participants must decide whether to sell or continue playing based on monetary ultimatums given to them by "the banker." These games also explored the effect of trust on decision-making outcomes and utility-maximizing behavior.[11] Common resource games were used to test experimentally how cooperation and social desirability affect subjects' choices. A real-life example of a common resource game might be a party guest's decision to take from a food platter. The guest's decision would be affected not only by how hungry they are, but also by how much of the shared resource (the food) is left and by whether they believe other guests would judge them for taking more. Experimenters during this period regarded behavior that did not maximize utility as the result of participants' flawed reasoning.[7] By the turn of the century, economists and psychologists had expanded this research. Models based on rational choice theory were adapted to reflect decision-maker preferences and to rationalize choices that did not maximize utility.[7]

Comparison to traditional game theory

Traditional game theory uses theoretical models to determine the most beneficial choice of all players in a game.[12] It combines rational choice theory with assumptions of common knowledge among players in order to predict utility-maximizing decisions,[12] and it allows players to predict their opponents' strategies.[13] Traditional game theory is a primarily normative theory: it seeks to pinpoint the decision rational players should make, but does not attempt to explain why that decision was made.[13] Because rationality is a primary assumption of the theory, it offers no explanation for different forms of rational decisions or for irrational decisions.[13]

Behavioral game theory is a primarily positive theory rather than a normative one.[13] A positive theory seeks to describe phenomena rather than prescribe a correct action; it must be testable and can be proven true or false. A normative theory is subjective and based on opinions, and for that reason cannot be proven true or false. Behavioral game theory attempts to explain decision making using experimental data.[13] The theory allows for both rational and irrational decisions, because both are examined using real-life experiments. Specifically, behavioral game theory attempts to explain the factors that influence real-world decisions.[13] These factors are not explored in traditional game theory, but can be postulated and observed using empirical data.[13] Findings from behavioral game theory tend to have higher external validity and can be better applied to real-world decision-making behavior.[13]


Factors that affect rationality in games

Beliefs

Beliefs about other people in a decision-making game are expected to influence one's ability to make rational choices. However, beliefs about others can also cause experimental results to deviate from equilibrium, utility-maximizing decisions. In an experiment by Costa-Gomes and Weizsäcker (2008), participants were asked about their first-order beliefs about their opponents' actions before completing a series of normal-form games with other participants.[17] Participants complied with Nash equilibrium only 35% of the time, and stated beliefs that their opponents would comply with traditional game-theoretic equilibrium only 15% of the time.[17] That is, participants believed their opponents would be less rational than they really were. The results of this study show that participants do not choose the utility-maximizing action, nor do they expect their opponents to do so.[17] Moreover, participants did not even choose the utility-maximizing action given their own stated beliefs about their opponents' actions:[17] while participants may have believed their opponent was more likely to make a certain decision, they still made decisions as if their opponent was choosing randomly.[17] Another study, which examined participants from the TV show Deal or No Deal, also found divergence from rational choice.[18] Participants were more likely to base their decisions on previous outcomes as the game progressed, and risk aversion decreased when participants' expectations were not met within the game.[18] For example, a subject who experienced a string of positive outcomes was less likely to accept the deal and end the game; the same was true of a subject who experienced primarily negative outcomes early in the game.[18]
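The gap documented by Costa-Gomes and Weizsäcker can be made concrete: given a player's stated belief about the opponent's play, the utility-maximizing action is simply the one with the highest expected payoff under that belief. A minimal sketch (the payoff matrix and belief vector below are illustrative, not taken from the study):

```python
# Best response to a stated belief in a small normal-form game.
# Rows are the player's own actions; columns are the opponent's actions.
# The payoffs and the belief vector are hypothetical examples.

payoffs = [
    [8, 2, 0],   # payoffs of action 0 against each opponent action
    [5, 5, 5],   # action 1 is a "safe" choice
]
belief = [0.6, 0.3, 0.1]  # stated probability of each opponent action

def expected_payoff(row, belief):
    return sum(p * b for p, b in zip(row, belief))

# Expected payoffs: action 0 -> 0.6*8 + 0.3*2 + 0.1*0 = 5.4; action 1 -> 5.0
values = [expected_payoff(row, belief) for row in payoffs]
best_response = max(range(len(payoffs)), key=values.__getitem__)
print(best_response)  # action 0 maximizes expected payoff under this belief
```

The study's finding was that subjects often failed to play a best response to their own stated beliefs, behaving instead as if the opponent randomized uniformly, under which the safe action 1 (worth 5.0 against roughly 3.33 for action 0) would be chosen here.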

Social cooperation

Social behavior and cooperation with other participants are two factors that are not modeled in traditional game theory, but are often seen in an experimental setting. The evolution of social norms has been neglected in decision-making models, but these norms influence the ways in which real people interact with one another and make choices.[11] One tendency is for a person to be a strong reciprocator.[11] This type of person enters a game with the predisposition to cooperate with other players. They will increase their cooperation levels in response to cooperation from other players and decrease their cooperation levels, even at their own expense, to punish players who do not cooperate.[11] This is not payoff-maximizing behavior, as a strong reciprocator is willing to reduce their payoff in order to encourage cooperation from others.
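The strong-reciprocator disposition can be sketched as a strategy rule in a repeated two-player contribution game. The payoff numbers and punishment cost below are hypothetical, chosen only to show that the reciprocator sacrifices its own payoff to punish defection:

```python
# Repeated two-player contribution game (hypothetical payoffs).
# Each round both players either cooperate ("C") or defect ("D");
# a strong reciprocator starts by cooperating, mirrors the partner's
# last move, and pays a cost to punish defection.

COOP_PAYOFF = 3      # each player's payoff when both cooperate
DEFECT_GAIN = 5      # defector's payoff against a cooperator
SUCKER = 0           # cooperator's payoff against a defector
PUNISH_COST = 1      # reciprocator pays this to fine the defector
PUNISH_FINE = 3      # amount deducted from the punished defector

def play_round(reciprocator_move, partner_move):
    """Return (reciprocator_payoff, partner_payoff) for one round."""
    if reciprocator_move == "C" and partner_move == "C":
        return COOP_PAYOFF, COOP_PAYOFF
    if reciprocator_move == "C" and partner_move == "D":
        # Reciprocator was exploited and punishes at a cost to itself.
        return SUCKER - PUNISH_COST, DEFECT_GAIN - PUNISH_FINE
    if reciprocator_move == "D" and partner_move == "C":
        return DEFECT_GAIN, SUCKER
    return 1, 1  # mutual defection

partner_moves = ["C", "C", "D", "D"]
reciprocator_move = "C"            # predisposed to cooperate
totals = [0, 0]
for partner_move in partner_moves:
    r, p = play_round(reciprocator_move, partner_move)
    totals[0] += r
    totals[1] += p
    reciprocator_move = partner_move  # mirror the partner's last move

print(totals)  # [6, 9]: punishment in round 3 lowered both payoffs
```

In round three the reciprocator earns -1 rather than 0, deliberately reducing its own payoff to cut the defector's gain from 5 to 2; this is the non-payoff-maximizing behavior described above.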

Dufwenberg and Kirchsteiger (2004) developed a model based on reciprocity called the sequential reciprocity equilibrium. This model adapts traditional game-theoretic logic to the idea that players reciprocate actions in order to cooperate.[19] The model has been used to predict more accurately the experimental outcomes of classic games such as the prisoner's dilemma and the centipede game. Rabin (1993) also created a fairness equilibrium that measures the effect of altruism on choices.[20] He found that when one player is altruistic toward another, the second player is more likely to reciprocate that altruism.[20] This is due to the idea of fairness.[20] Fairness equilibria take the form of mutual maximum, where both players choose the outcome that benefits both of them the most, or mutual minimum, where both players choose the outcome that hurts both of them the most.[20] These equilibria are also Nash equilibria, but they incorporate the willingness of participants to cooperate and play fair.
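In a symmetric game such as the prisoner's dilemma, the mutual-maximum and mutual-minimum outcomes can be read directly off the payoff matrix: they are the symmetric outcomes both players like most and least. A toy payoff matrix (hypothetical numbers) makes this concrete:

```python
# Symmetric 2x2 game (hypothetical prisoner's-dilemma payoffs).
# payoffs[(a, b)] = (player 1 payoff, player 2 payoff)
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# The two candidate fairness-equilibrium outcomes: the symmetric outcome
# both players like most ("mutual maximum") and least ("mutual minimum").
symmetric = {k: v for k, v in payoffs.items() if k[0] == k[1]}
mutual_max = max(symmetric, key=lambda k: symmetric[k][0])
mutual_min = min(symmetric, key=lambda k: symmetric[k][0])
print(mutual_max, mutual_min)  # ('C', 'C') ('D', 'D')
```

Under material payoffs alone, mutual cooperation here is not a Nash equilibrium; it becomes one only once Rabin-style fairness payoffs are added, which is the point of his model.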

Incentives, consequences, and deception

The role of incentives and consequences in decision-making interests behavioral game theorists because incentives affect rational behavior. Post et al. (2008) analyzed the behavior of Deal or No Deal contestants in order to reach conclusions about decision-making when stakes are high.[18] Studying the contestants' choices led to the conclusion that, in a sequential game with high stakes, decisions were based on previous outcomes rather than on rationality.[18] Players who faced a succession of good outcomes (eliminating the low-value cases from play) or a succession of poor outcomes became less risk averse.[18] This means that players experiencing exceptionally good or exceptionally bad outcomes are more likely to gamble and continue playing than average players. The lucky or unlucky players were willing to reject offers of over one hundred percent of the expected value of their case in order to continue playing,[18] a shift from risk-avoiding to risk-seeking behavior. This study highlights behavioral biases that are not accounted for by traditional game theory. Riskier behavior in unlucky contestants can be attributed to the break-even effect, which states that gamblers will continue to make risky decisions in order to win back money.[18] Riskier behavior in lucky contestants, on the other hand, can be explained by the house-money effect, which states that winning gamblers are more likely to make risky decisions because they perceive that they are not gambling with their own money.[18] This analysis shows that incentives influence rational choice, especially when players make a series of decisions.

Incentives and consequences also play a large role in deception in games. Gneezy (2005) studied deception using a cheap talk sender-receiver game.[21] In this type of game, player one receives information about the payouts of options A and B and gives player two a recommendation about which option to take. Player one can choose to deceive player two, and player two can choose to reject player one's advice. Gneezy found that participants were more sensitive to their own gain from lying than to their opponent's loss.[21] He also found that participants were not wholly selfish and cared about how much their opponents lost from their deception, but this effect diminished as their own payout increased.[21] These findings show that decision makers weigh both the incentives to lie and the consequences of lying when deciding whether or not to lie. In general people are averse to lying, but given the right incentives they tend to ignore consequences.[21] Wang et al. (2009) also used a cheap talk game to study deception in participants with an incentive to deceive.[22] Using eye tracking, they found that participants who received information about payoffs focused on their own payoff twice as often as on their opponent's, suggesting minimal strategic thinking.[22] Further, participants' pupils dilated when they sent a deceptive message, and dilated more when they told a bigger lie.[22] Through these physical cues Wang et al. concluded that deception is cognitively difficult.[22] These findings show that factors such as incentives, consequences, and deception can create irrational decisions and affect the way games unfold.
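The structure of such a sender-receiver game can be sketched as follows. The payoff pairs here are illustrative, not the experiment's actual parameters: the sender knows both options' payoffs, sends a message naming one option, and the receiver, who sees only the message, picks an option.

```python
# Cheap talk sender-receiver game (Gneezy-style structure; the
# payoffs are hypothetical). Each option pays (sender, receiver).

options = {
    "A": (5, 6),   # option A favors the receiver
    "B": (6, 5),   # option B favors the sender
}

def sender_message(options, honest):
    """An honest sender recommends the receiver-best option;
    a deceptive sender recommends the sender-best option."""
    key = 1 if honest else 0  # which payoff index to maximize
    return max(options, key=lambda o: options[o][key])

def receiver_choice(message, trusts):
    """A trusting receiver follows the recommendation."""
    return message if trusts else ("A" if message == "B" else "B")

message = sender_message(options, honest=False)  # deceptive: says "B"
choice = receiver_choice(message, trusts=True)
sender_payoff, receiver_payoff = options[choice]
print(message, choice, sender_payoff, receiver_payoff)  # B B 6 5
```

In Gneezy's design the interesting quantities are exactly the two payoff gaps this sketch exposes: the sender's gain from lying (6 versus 5 here) and the receiver's loss from being deceived.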

Group decisions

Behavioral game theory considers the effects of groups on rationality. In the real world many decisions are made by teams, yet traditional game theory uses an individual as a decision maker. This created a need to model group decision-making behavior. Bornstein and Yaniv (1998) examined the difference in rationality between groups and individuals in an ultimatum game.[23] In this game player one (or group one) decides what percentage of a payout to give to player two (or group two) and then player two decides whether to accept or reject this offer. Participants in the group condition were put in groups of three and allowed to deliberate on their decisions.[23] Perfect rationality in this game would be player one offering player two none of the payout, but that is almost never the case in observed offers. Bornstein and Yaniv found that groups were less generous, willing to give up a smaller portion of the payoff, in the player one condition and more accepting of low offers in the player two condition than individuals.[23] These results suggest that groups are more rational than individuals.[23]
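The ultimatum game itself can be sketched as a two-stage interaction. A minimal model (the pie size and the responder's rejection threshold below are hypothetical) contrasts the game-theoretic prediction with the behavior typically observed:

```python
# One-shot ultimatum game over a pie of 100 (hypothetical stakes).
# The proposer offers a share; the responder accepts or rejects.
# If the offer is rejected, both players earn nothing.

def play(offer, min_acceptable):
    """Return (proposer_payoff, responder_payoff)."""
    pie = 100
    if offer >= min_acceptable:
        return pie - offer, offer
    return 0, 0  # rejection destroys the whole pie

# Game-theoretic prediction: offer (almost) nothing, accept anything.
print(play(offer=0, min_acceptable=0))    # (100, 0)

# Typical experimental behavior: low offers get rejected, so
# proposers offer substantial shares.
print(play(offer=10, min_acceptable=30))  # (0, 0): low offer rejected
print(play(offer=40, min_acceptable=30))  # (60, 40)
```

In Bornstein and Yaniv's terms, groups in the proposer role set `offer` lower than individuals did, and groups in the responder role set a lower `min_acceptable`, both closer to the game-theoretic prediction.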

Kocher and Sutter (2005) used a beauty contest game to study and compare individual and group behavior.[24] In a beauty contest game, all participants choose a number between zero and one hundred, and the winner is the participant whose number is closest to two thirds of the average. If opponents are expected to choose at random, averaging fifty, the best first-round reply is thirty-three, two thirds of fifty; iterating this reasoning, game theory predicts that all participants should choose zero. Kocher and Sutter found that groups did not perform more rationally than individuals in the first round of the game.[24] However, groups performed more rationally than individuals in subsequent rounds.[24] This shows that groups are able to learn the game and adapt their strategy faster than individuals.
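The convergence to zero predicted by game theory can be seen by iterating best replies: if everyone expects an average of 50, the best reply is about 33; if everyone expects 33, the best reply is about 22, and so on. A short simulation of this reasoning chain (an idealized sketch, not a model of the experiment):

```python
# Iterated best replies in the two-thirds-of-average beauty contest.
# If all players expect the average to be `avg`, the best reply is
# (2/3) * avg; repeating this drives the choice toward zero.

def best_reply_chain(start_avg, rounds):
    guesses = []
    avg = start_avg
    for _ in range(rounds):
        avg = 2 / 3 * avg
        guesses.append(round(avg, 2))
    return guesses

chain = best_reply_chain(start_avg=50, rounds=10)
print(chain[0])   # 33.33, the best reply to a random (mean 50) population
print(chain[-1])  # below 1 after ten steps: approaching the equilibrium at 0
```

Each step of the chain corresponds to one additional level of reasoning about the other players; experimental subjects typically stop after one or two steps, which is why first-round guesses cluster well above zero.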

References

  1. Aumann, R. J. (2008). "Game theory". The New Palgrave Dictionary of Economics, 2nd ed.
  2. Camerer, Colin; Ho, Teck-Hua (1994). "Violations of the betweenness axiom and nonlinearity in probability". Journal of Risk and Uncertainty, 8(2), 167–196. doi:10.1007/bf01065371.
  3. Andreoni, James; et al. (2008). "Altruism in experiments". The New Palgrave Dictionary of Economics, 2nd ed.
  4. Young, H. Peyton (2008). "Social norms". The New Palgrave Dictionary of Economics, 2nd ed.
  5. Camerer, Colin (1997). "Progress in behavioral game theory". Journal of Economic Perspectives, 11(4), 172. doi:10.1257/jep.11.4.167.
  6. Camerer, Colin (2003). Behavioral Game Theory: Experiments in Strategic Interaction. Russell Sage Foundation / Princeton University Press. ISBN 9780691090399.
       * Camerer, Colin; Loewenstein, George; Rabin, Matthew, eds. (2003). Advances in Behavioral Economics. Princeton University Press. 1986–2003 papers.
       * Fudenberg, Drew (2006). "Advancing Beyond Advances in Behavioral Economics". Journal of Economic Literature, 44(3), 694–711.
       * Crawford, Vincent P. (1997). "Theory and Experiment in the Analysis of Strategic Interaction". In Advances in Economics and Econometrics: Theory and Applications, 206–242. Cambridge. Reprinted in Camerer et al. (2003), Advances in Behavioral Economics, ch. 12.
       * Shubik, Martin (2002). "Game Theory and Experimental Gaming". In R. Aumann and S. Hart, eds., Handbook of Game Theory with Economic Applications, v. 3, 2327–2351. Elsevier.
       * Plott, Charles R.; Smith, Vernon L., eds. (2008). Handbook of Experimental Economics Results, v. 1, Part 4: Games. Elsevier.
       * Games and Economic Behavior. Elsevier.
  7. Gintis, H. (2005). "Behavioral game theory and contemporary economic theory". Analyse & Kritik, 27(1), 48–72.
  8. Camerer, C. (2003). Behavioral Game Theory: Experiments in Strategic Interaction. Princeton University Press.
  9. Wright, James R.; Leyton-Brown, Kevin (2014). "Level-0 meta-models for predicting human behavior in games". Proceedings of the Fifteenth ACM Conference on Economics and Computation.
  10. Fudenberg, Drew; Liang, Annie (2019). "Predicting and Understanding Initial Play". American Economic Review, 109(12), 4112–4141. doi:10.1257/aer.20180654. ISSN 0002-8282.
  11. Gintis, H. (2009). The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences. Princeton University Press.
  12. Osborne, M. J.; Rubinstein, A. (1994). A Course in Game Theory. MIT Press.
  13. Colman, A. M. (2003). "Cooperation, psychological game theory, and limitations of rationality in social interaction". Behavioral and Brain Sciences, 26(2), 139–153.
  14. Gächter, Simon. "Behavioral Game Theory" (PDF). ict.usc.edu. Retrieved 18 December 2018.
  15. Camerer, Colin (1997). "Progress in Behavioral Game Theory". Journal of Economic Perspectives, 11(4), 167–188. doi:10.1257/jep.11.4.167.
  16. "(Behavioral) Game theory". behavioraleconomics.com. Retrieved 18 December 2018.
  17. Costa-Gomes, M. A.; Weizsäcker, G. (2008). "Stated beliefs and play in normal-form games". The Review of Economic Studies, 75(3), 729–762.
  18. Post, T.; Van den Assem, M. J.; Baltussen, G.; Thaler, R. H. (2008). "Deal or no deal? Decision making under risk in a large-payoff game show". The American Economic Review, 38–71.
  19. Dufwenberg, M.; Kirchsteiger, G. (2004). "A theory of sequential reciprocity". Games and Economic Behavior, 47(2), 268–298.
  20. Rabin, M. (1993). "Incorporating fairness into game theory and economics". The American Economic Review, 1281–1302.
  21. Gneezy, U. (2005). "Deception: The role of consequences". American Economic Review, 384–394.
  22. Wang, J. T. Y.; Spezio, M.; Camerer, C. (2009). "Pinocchio's pupil: Using eyetracking and pupil dilation to understand truth-telling and deception in sender-receiver games". American Economic Review, forthcoming.
  23. Bornstein, G.; Yaniv, I. (1998). "Individual and group behavior in the ultimatum game: Are groups more 'rational' players?". Experimental Economics, 1(1), 101–108.
  24. Kocher, M. G.; Sutter, M. (2005). "The Decision Maker Matters: Individual Versus Group Behaviour in Experimental Beauty-Contest Games". The Economic Journal, 115(500), 200–223.