Expected utility hypothesis

The expected utility hypothesis is a popular concept in economics, game theory and decision theory that serves as a reference guide for judging decisions involving uncertainty.[1] The theory recommends which option a rational individual should choose in a complex situation, based on his tolerance for risk and personal preferences.

The expected utility of an agent's risky decision is the mathematical expectation of his utility from different outcomes given their probabilities. If an agent derives 0 utils from 0 apples, 2 utils from one apple, and 3 utils from two apples, his expected utility for a 50-50 gamble between zero apples and two is .5u(0 apples) + .5u(2 apples) = .5(0 utils) + .5(3 utils) = 1.5 utils. Under the expected utility hypothesis, the consumer would prefer 1 apple with certainty (giving him 2 utils) to the gamble between zero and two.
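A minimal sketch of this arithmetic (using the hypothetical apple utilities from the example above):

```python
# Expected utility of a 50-50 gamble between zero apples and two apples,
# using the example's hypothetical utilities u(0) = 0, u(1) = 2, u(2) = 3.
probs = [0.5, 0.5]
utils = [0.0, 3.0]   # u(0 apples), u(2 apples)

expected_utility = sum(p * u for p, u in zip(probs, utils))
print(expected_utility)        # 1.5
print(expected_utility < 2.0)  # True: one sure apple (2 utils) is preferred to the gamble
```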

Standard utility functions represent ordinal preferences. The expected utility hypothesis imposes limitations on the utility function and makes utility cardinal (though still not comparable across individuals). In the example above, any function such that u(0) < u(1) < u(2) would represent the same ordinal preferences; we could specify u(0) = 0, u(1) = 2, and u(2) = 40, for example. Under the expected utility hypothesis, if the agent is indifferent between one apple with certainty and a gamble with a 1/3 probability of no apple and a 2/3 probability of two apples, then the utility of two apples must be set to u(2) = 3, because indifference requires (1/3)u(0) + (2/3)u(2) = u(1), and 2 = (1/3)(0) + (2/3)(3).
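A small sketch of this calibration (same hypothetical apple utilities and the indifference probabilities stated above): the indifference point pins down u(2) once u(0) and u(1) are fixed.

```python
# Calibrate a cardinal utility from an indifference point (standard-gamble style).
# Hypothetical inputs from the apple example: u(0) = 0, u(1) = 2, and indifference
# between one apple for sure and a 1/3 : 2/3 gamble over zero or two apples.
u0, u1 = 0.0, 2.0
p_zero, p_two = 1/3, 2/3

# Indifference requires p_zero*u(0) + p_two*u(2) = u(1); solve for u(2).
u2 = (u1 - p_zero * u0) / p_two
print(u2)  # 3.0
```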

The idea has antecedents in Daniel Bernoulli's 1738 treatment of the St. Petersburg paradox,[2] and was developed by Frank Ramsey and Leonard Jimmie Savage. The von Neumann–Morgenstern utility theorem provides necessary and sufficient conditions under which the expected utility hypothesis holds. From relatively early on, it was accepted that some of these conditions would be violated by real decision-makers in practice but that the conditions could be interpreted nonetheless as 'axioms' of rational choice. Until the mid-twentieth century, the standard term for the expected utility was the moral expectation, contrasted with "mathematical expectation" for the expected value.[3]

Although the expected utility hypothesis is standard in economic modelling, largely because of its simplicity and convenience, it has been found to be violated in psychology experiments. For many years, psychologists and economic theorists have been developing new theories to explain these deficiencies.[4] These include prospect theory, rank-dependent expected utility and cumulative prospect theory.

Antecedents

Limits of the Expected Value Theory

In the early days of the calculus of probability it was taken for granted that the value, and hence the "fair price", of a gamble was the mathematical expectation of the gain.[2] Classic utilitarians believed that the option with the greatest utility would produce the most pleasure or happiness for the agent and consequently must be chosen.[5] The main problem with expected value theory is that there may not be a unique correct way to quantify utility or to identify the best trade-offs. Besides monetary incentives, other desirable ends can also be included in utility, such as pleasure, knowledge and friendship. Originally, the total utility of the consumer was taken to be the sum of independent utilities of the goods. However, expected value theory was dropped because it was considered too static and deterministic.[6] The classic counterexample to expected value theory (in which everyone makes the same "correct" choice) is the St. Petersburg paradox. The paradox raised the question of whether marginal utilities should be ranked differently, since it showed that a "correct decision" for one person is not necessarily right for another.[6]

Risk aversion

Expected utility theory takes into account that individuals may be risk-averse, meaning that an individual would refuse a fair gamble (one with an expected value of zero). Risk aversion implies that the utility function is concave and shows diminishing marginal utility of wealth. Risk attitude is directly related to the curvature of the utility function: risk-neutral individuals have linear utility functions, risk-seeking individuals have convex utility functions, and risk-averse individuals have concave utility functions. The degree of risk aversion can be measured by the curvature of the utility function.

Since the risk attitudes are unchanged under affine transformations of u, the second derivative u'' is not an adequate measure of the risk aversion of a utility function. Instead, it needs to be normalized. This leads to the definition of the Arrow–Pratt[7][8] measure of absolute risk aversion:

ARA(w) = −u''(w) / u'(w),

where w is wealth.

The Arrow–Pratt measure of relative risk aversion is:

RRA(w) = −w·u''(w) / u'(w).

Special classes of utility functions are the CRRA (constant relative risk aversion) functions, where RRA(w) is constant, and the CARA (constant absolute risk aversion) functions, where ARA(w) is constant. They are often used in economics for simplification.
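A quick symbolic sketch of these measures (assuming, for illustration, the standard CARA form u(w) = −e^(−aw) and the standard CRRA form u(w) = w^(1−ρ)/(1−ρ)):

```python
import sympy as sp

w, a, rho = sp.symbols('w a rho', positive=True)

def ara(u):
    """Arrow-Pratt absolute risk aversion: -u''(w) / u'(w)."""
    return sp.simplify(-sp.diff(u, w, 2) / sp.diff(u, w))

def rra(u):
    """Arrow-Pratt relative risk aversion: -w * u''(w) / u'(w)."""
    return sp.simplify(w * ara(u))

u_cara = -sp.exp(-a * w)           # CARA candidate
u_crra = w**(1 - rho) / (1 - rho)  # CRRA candidate (rho != 1)

print(ara(u_cara))  # a   -> absolute risk aversion is constant in wealth
print(rra(u_crra))  # rho -> relative risk aversion is constant in wealth
```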

A decision that maximizes expected utility also maximizes the probability of the decision's consequences being preferable to some uncertain threshold (Castagnoli and LiCalzi, 1996; Bordley and LiCalzi, 2000; Bordley and Kirkwood). In the absence of uncertainty about the threshold, expected utility maximization simplifies to maximizing the probability of achieving some fixed target. If the uncertainty is uniformly distributed, then expected utility maximization becomes expected value maximization. Intermediate cases lead to increasing risk aversion above some fixed threshold and increasing risk seeking below a fixed threshold.

The St. Petersburg Paradox

The St. Petersburg paradox, posed by Nicolas Bernoulli and addressed by his cousin Daniel Bernoulli, shows that the decisions of reasonable individuals sometimes violate expected value maximization.[2] When a gamble has an infinite expected value, expected value theory predicts that a rational person should be willing to pay any finite amount to take it; there is no upper bound on the potential rewards from the very low probability events. In the game, a fair coin is flipped repeatedly until it comes up tails, and the prize is determined by the number of consecutive heads: the prize doubles with each head, so a run of n heads pays $2^n. The game ends when the coin lands tails. According to expected value reasoning, a player should be willing to pay a very high price to play, because the entry cost will always be less than the infinite expected value of the game. In reality, however, people do not do this: few participants are willing to pay more than about $25 to enter the game, because they are risk averse and unwilling to bet on a very small possibility at a very high price.[9]
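A small simulation sketch of the game as described above (assuming the payout rule that a run of n consecutive heads pays $2^n) illustrates why the expected value diverges while a logarithmic "moral expectation" remains modest:

```python
import numpy as np

rng = np.random.default_rng(0)

def play_once():
    """One St. Petersburg game: count consecutive heads before the first tail."""
    heads = 0
    while rng.random() < 0.5:  # heads with probability 1/2
        heads += 1
    return 2.0 ** heads        # payout doubles with each additional head

payouts = np.array([play_once() for _ in range(100_000)])

# The sample mean is dominated by rare, enormous payouts and is unstable across runs,
# reflecting the infinite expected value; the average log-payout (Bernoulli's
# "moral expectation") is small and stable.
print(payouts.mean())
print(np.log(payouts).mean())
```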

Bernoulli's Formulation

Nicolas Bernoulli described the St. Petersburg paradox (involving infinite expected values) in 1713, prompting two Swiss mathematicians, Gabriel Cramer and Daniel Bernoulli, to develop expected utility theory as a solution. Daniel Bernoulli's 1738 paper was the first formalization of marginal utility, which has broad application in economics beyond expected utility theory. He used the concept to formalize the idea that the same amount of additional money is less useful to an already-wealthy person than to a poor person. The theory also describes realistic scenarios (where expected values are finite) more accurately than expected value alone. Bernoulli proposed that a nonlinear utility function of an outcome be used instead of the expected value of an outcome, accounting for risk aversion, where the risk premium is higher for low-probability events than the difference between the payout level of a particular outcome and its expected value. He further proposed that the gambler's goal is not to maximize his expected gain but rather to maximize the expected logarithm of his gain.

Bernoulli drew attention to the psychological and behavioral factors behind an individual's decision-making process and argued that wealth has diminishing marginal utility. For example, as someone gets wealthier, an extra dollar or an additional good is perceived as less valuable. In other words, the desirability associated with a financial gain depends not only on the gain itself but also on the wealth of the person. He suggested that people maximize "moral expectation" rather than expected monetary value, drawing a clear distinction between expected value and expected utility: instead of weighting the monetary outcomes by their probabilities, he weighted the utilities of the outcomes by their probabilities. He showed that, with such a utility function, the expected utility of the gamble is finite even when its expected monetary value is infinite.[6]

Other analyses suggest that very low probability events are neglected once the finite resources of the participants are taken into account. For example, it makes sense for a rich person, but not for a poor one, to pay $10,000 for a lottery ticket that yields a 50% chance of winning a prize and a 50% chance of nothing. Even though both individuals face the same chance at each monetary prize, they will value it differently according to their wealth.
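A brief sketch of this wealth dependence, assuming (purely for illustration) logarithmic utility of total wealth and a hypothetical better-than-fair gamble (a $10,000 ticket paying $25,000 with probability 1/2):

```python
import math

def gain_from_buying(wealth, ticket_price=10_000, prize=25_000, p_win=0.5):
    """Expected log-utility of final wealth if the ticket is bought,
    minus the log-utility of keeping one's wealth (positive => buying is attractive)."""
    eu_buy = (p_win * math.log(wealth - ticket_price + prize)
              + (1 - p_win) * math.log(wealth - ticket_price))
    return eu_buy - math.log(wealth)

print(gain_from_buying(wealth=1_000_000))  # positive: a wealthy buyer accepts the favourable bet
print(gain_from_buying(wealth=15_000))     # negative: for a poor buyer the downside dominates
```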

Ramsey's Theoretic Approach to Subjective Probability

In 1926, Frank Ramsey introduced what is now known as Ramsey's representation theorem. This representation theorem for expected utility assumed that preferences are defined over a set of bets where each option has a different payoff. Ramsey held that we always choose the decision that yields the best expected outcome according to our personal preferences. This implies that if we can understand an individual's priorities and personal preferences, we can anticipate which choices they will make.[10] In this model he defined numerical utilities for each option to exploit the richness of the space of prizes. The outcomes of each option are mutually exclusive. For example, if you study, you cannot see your friends, but you will get a good grade in your course. In this scenario, analyzing a person's preferences and beliefs lets us predict which option they will choose (e.g. someone who prioritizes their social life over academic results will go out with their friends). Assuming that a person's decisions are rational, the theorem implies that we should be able to recover their beliefs and utilities just by observing the choices they make (an inference that has been disputed). Ramsey defines a proposition as "ethically neutral" when its two possible outcomes have equal value. In other words, if probability is defined in terms of preference, an ethically neutral proposition is assigned probability ½ exactly when the agent is indifferent between betting on it and against it.[11] Ramsey shows that:

P(E) = (U(m) − U(w)) / (U(b) − U(w)),[12]

where b is the better outcome, w the worse outcome, and m a sure outcome such that the agent is indifferent between receiving m for certain and a gamble yielding b if E obtains and w otherwise.
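A minimal sketch of this elicitation (the utility scale and the indifference point are hypothetical):

```python
def ramsey_probability(u_best, u_worst, u_sure):
    """Subjective probability of E inferred from the indifference
    'u_sure for certain' ~ 'best outcome if E, worst outcome otherwise'."""
    return (u_sure - u_worst) / (u_best - u_worst)

# Hypothetical elicited utilities: worst outcome 0, best outcome 1, and the agent
# is indifferent at a sure outcome worth 0.7 on that scale.
print(ramsey_probability(u_best=1.0, u_worst=0.0, u_sure=0.7))  # 0.7
```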

Savage's Subjective Expected Utility Representation

In the 1950s, Leonard Jimmie Savage, an American statistician, derived a framework for understanding expected utility, at that time considered the first and most thorough foundation for the concept. Savage's framework involved proving that expected utility could be used to make an optimal choice among several acts on the basis of seven axioms.[13] In his book The Foundations of Statistics, Savage integrated a normative account of decision making under risk (when probabilities are known) and under uncertainty (when probabilities are not objectively known). Savage concluded that people have neutral attitudes towards uncertainty and that observing choices is enough to infer the probabilities they assign to uncertain events.[14] A crucial methodological aspect of Savage's framework is its focus on observable choices: cognitive processes and other psychological aspects of decision making matter only to the extent that they have directly measurable implications for choice.

The theory of subjective expected utility combines two concepts: a personal utility function and a personal probability distribution (usually based on Bayesian probability theory). This theoretical model is known for its clear and elegant structure, and is considered by some researchers to be one of "the most brilliant axiomatic theories of utility ever developed".[15] Instead of assuming the probability of an event, Savage defines it in terms of preferences over acts. He used states of the world (which are outside the decision maker's control) to define the probability of an event, and utility and intrinsic preferences to evaluate the outcomes of the event. Savage assumed that each act together with a state is enough to uniquely determine an outcome. However, this assumption breaks down in cases where the individual does not have enough information about the event.

Additionally, he held that outcomes must have the same utility regardless of the state in which they occur. For that reason, it is essential to correctly identify which statements count as outcomes. For example, the statement "I got the job" is not an outcome, since its utility differs from person to person depending on intrinsic factors such as financial necessity or judgments about the company. No state can rule out the performance of any act; only when the state and the act are evaluated jointly can an outcome be determined with certainty.[16]

Savage's Representation Theorem

Savage's representation theorem (Savage, 1954): A preference relation ≽ satisfies axioms P1–P7 if and only if there is a finitely additive probability measure P and a function u : C → R such that for every pair of acts f and g:[16]

f ≽ g  ⟺  ∫Ω u(f(ω)) dP ≥ ∫Ω u(g(ω)) dP  [16]

If and only if all the axioms are satisfied can this information be used to reduce the uncertainty about events that are out of the decision maker's control. Additionally, the theorem ranks acts according to a utility function that reflects personal preferences.

Key Ingredients

The key ingredients in Savage's theory are the following (a small illustrative sketch appears after the list):

  • States: The specification of every aspect of the decision problem at hand or  “A description of the world leaving no relevant aspect undescribed.”[13]
  • Events: A set of states identified by someone
  • Consequences: A consequence is the description of all that is relevant to the decision maker's utility (e.g. monetary rewards, psychological factors, etc)
  • Acts: An act is a finite-valued function that maps states to consequences.
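A minimal sketch of these ingredients (the states, subjective probabilities, and utility numbers below are invented for illustration): acts map states to consequences, and each act is evaluated by its subjective expected utility.

```python
# Savage-style ingredients with invented numbers.
states = ["rain", "shine"]
subjective_prob = {"rain": 0.3, "shine": 0.7}  # personal probability over states

# Acts map states to consequences.
acts = {
    "take umbrella":  {"rain": "dry but encumbered", "shine": "encumbered"},
    "leave umbrella": {"rain": "soaked", "shine": "unencumbered"},
}

# Utility of consequences (state-independent, as Savage's framework requires).
utility = {"dry but encumbered": 0.8, "encumbered": 0.7, "soaked": 0.0, "unencumbered": 1.0}

def subjective_expected_utility(act):
    return sum(subjective_prob[s] * utility[act[s]] for s in states)

for name, act in acts.items():
    print(name, subjective_expected_utility(act))  # take umbrella: 0.73, leave umbrella: 0.70
print("chosen act:", max(acts, key=lambda n: subjective_expected_utility(acts[n])))
```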

Von Neumann–Morgenstern utility theorem

The von Neumann–Morgenstern axioms

There are four axioms of the expected utility theory that define a rational decision maker. They are completeness, transitivity, independence and continuity.[17]

Completeness assumes that an individual has well defined preferences and can always decide between any two alternatives.

  • Axiom (Completeness): For every A and B, either A ≽ B or B ≽ A (or both).

This means that the individual prefers A to B, B to A or is indifferent between A and B.

Transitivity assumes that, as an individual decides according to the completeness axiom, the individual also decides consistently.

  • Axiom (Transitivity): For every A, B and C with A ≽ B and B ≽ C, we must have A ≽ C.

Independence of irrelevant alternatives pertains to well-defined preferences as well. It assumes that two gambles mixed with an irrelevant third one will maintain the same order of preference as when the two are presented independently of the third one. The independence axiom is the most controversial axiom.

  • Axiom (Independence of irrelevant alternatives): Let A, B, and C be three lotteries with A ≽ B, and let t ∈ (0, 1] be the probability with which A or B is obtained rather than C: then tA + (1 − t)C ≽ tB + (1 − t)C;
    the third choice, C, is irrelevant, and the order of preference for A before B holds, independently of the presence of C.

Continuity assumes that when there are three lotteries (A, B and C) and the individual prefers A to B and B to C, then there should be a possible combination of A and C in which the individual is then indifferent between this mix and the lottery B.

  • Axiom (Continuity): Let A, B and C be lotteries with A ≽ B ≽ C; then there exists a probability p such that B is equally good as pA + (1 − p)C.

If all these axioms are satisfied, then the individual is said to be rational and the preferences can be represented by a utility function, i.e. one can assign numbers (utilities) to each outcome of the lottery such that choosing the best lottery according to the preference amounts to choosing the lottery with the highest expected utility. This result is called the von Neumann–Morgenstern utility representation theorem.

In other words, if an individual's behavior always satisfies the above axioms, then there is a utility function such that the individual will choose one gamble over another if and only if the expected utility of one exceeds that of the other. The expected utility of any gamble may be expressed as a linear combination of the utilities of the outcomes, with the weights being the respective probabilities. Utility functions are also normally continuous functions. Such utility functions are also referred to as von Neumann–Morgenstern (vNM) utility functions. This is a central theme of the expected utility hypothesis in which an individual chooses not the highest expected value, but rather the highest expected utility. The expected utility maximizing individual makes decisions rationally based on the axioms of the theory.

The von Neumann–Morgenstern formulation is important in the application of set theory to economics because it was developed shortly after the Hicks–Allen "ordinal revolution" of the 1930s, and it revived the idea of cardinal utility in economic theory. However, while in this context the utility function is cardinal, in that implied behavior would be altered by a non-linear monotonic transformation of utility, the expected utility function is ordinal because any monotonic increasing transformation of expected utility gives the same behavior.
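A short sketch of this cardinality claim (the lotteries and utility functions are made up for illustration): a positive affine transformation of a vNM utility leaves the expected-utility ranking unchanged, while a non-linear monotonic transformation can reverse it.

```python
import math

# Two hypothetical lotteries over money, as (probability, outcome) pairs.
lottery_A = [(0.5, 0.0), (0.5, 100.0)]
lottery_B = [(1.0, 45.0)]

def expected_utility(lottery, u):
    return sum(p * u(x) for p, x in lottery)

u = lambda x: math.sqrt(x)           # a vNM utility
v = lambda x: 3 * math.sqrt(x) + 7   # positive affine transformation of u
w = lambda x: math.sqrt(x) ** 3      # non-linear monotonic transformation of u

for util in (u, v, w):
    print(expected_utility(lottery_A, util) > expected_utility(lottery_B, util))
# u and v agree on the ranking (False, False); w reverses it (True), showing that
# vNM utility is cardinal: unique only up to positive affine transformations.
```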

Examples of von Neumann–Morgenstern utility functions

The utility function u(w) = log(w) was originally suggested by Bernoulli (see above). It has relative risk aversion constant and equal to one, and is still sometimes assumed in economic analyses. The utility function

u(w) = −e^(−aw)

exhibits constant absolute risk aversion, and for this reason is often avoided, although it has the advantage of offering substantial mathematical tractability when asset returns are normally distributed. Note that, as per the affine transformation property alluded to above, the utility function 1 − e^(−aw) gives exactly the same preference orderings as does −e^(−aw); thus it is irrelevant that the values of −e^(−aw) and its expected value are always negative: what matters for preference ordering is which of two gambles gives the higher expected utility, not the numerical values of those expected utilities.

The class of constant relative risk aversion utility functions contains three categories. Bernoulli's utility function

u(w) = log(w)

has relative risk aversion equal to 1. The functions

u(w) = w^α

for α ∈ (0, 1) have relative risk aversion equal to 1 − α. And the functions

u(w) = −w^(−α)

for α > 0 have relative risk aversion equal to 1 + α.

See also the discussion of utility functions having hyperbolic absolute risk aversion (HARA).

Formula for Expected Utility

When the entity x whose value affects a person's utility takes on one of a set of discrete values, the formula for expected utility, which is assumed to be maximized, is

E[u(x)] = Σ_i p_i u(x_i),

where the left side is the subjective valuation of the gamble as a whole, x_i is the ith possible outcome, u(x_i) is its valuation, and p_i is its probability. There could be either a finite set of possible values, in which case the right side of this equation has a finite number of terms, or there could be an infinite set of discrete values, in which case the right side has an infinite number of terms.

When x can take on any of a continuous range of values, the expected utility is given by

E[u(x)] = ∫ u(x) f(x) dx,

where f(x) is the probability density function of x.
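A brief numerical sketch of both formulas (the utility function, the discrete outcomes, and the normal distribution of x are illustrative assumptions):

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

u = np.log1p  # illustrative utility u(x) = log(1 + x)

# Discrete case: outcomes x_i with probabilities p_i.
x = np.array([0.0, 50.0, 200.0])
p = np.array([0.2, 0.5, 0.3])
eu_discrete = np.sum(p * u(x))

# Continuous case: x ~ Normal(100, 30); the tails outside [0, 400] carry negligible mass here.
f = norm(loc=100.0, scale=30.0).pdf
eu_continuous, _ = integrate.quad(lambda t: u(t) * f(t), 0.0, 400.0)

print(eu_discrete, eu_continuous)
```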

Measuring risk in the expected utility context

Often people refer to "risk" in the sense of a potentially quantifiable entity. In the context of mean-variance analysis, variance is used as a risk measure for portfolio return; however, this is only valid if returns are normally distributed or otherwise jointly elliptically distributed,[18][19][20] or in the unlikely case in which the utility function has a quadratic form. However, David E. Bell proposed a measure of risk which follows naturally from a certain class of von Neumann–Morgenstern utility functions.[21] Let utility of wealth be given by

u(w) = w − b·e^(−aw)

for individual-specific positive parameters a and b. Then expected utility is given by

E[u(w)] = E[w] − b·E[e^(−aw)].

Thus the risk measure is E[e^(−aw)], which differs between two individuals if they have different values of the parameter a, allowing different people to disagree about the degree of risk associated with any given portfolio. Individuals sharing a given risk measure (based on a given value of a) may choose different portfolios because they may have different values of b. See also Entropic risk measure.
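A small sketch of this decomposition on simulated returns (the parameters a and b and the wealth distribution are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
wealth = 100.0 + rng.normal(5.0, 20.0, size=100_000)  # hypothetical end-of-period wealth

def bell_expected_utility(w, a, b):
    """E[u(w)] for u(w) = w - b*exp(-a*w): expected wealth minus b times the risk measure."""
    risk_measure = np.mean(np.exp(-a * w))  # Bell's risk measure E[exp(-a w)]
    return np.mean(w) - b * risk_measure, risk_measure

eu, risk = bell_expected_utility(wealth, a=0.05, b=50.0)
print(eu, risk)
# Two individuals who share the same a agree on this risk number, but may still rank
# portfolios differently if their b (the weight placed on risk) differs.
```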

For general utility functions, however, expected utility analysis does not permit the expression of preferences to be separated into two parameters with one representing the expected value of the variable in question and the other representing its risk.

Criticism

Expected utility theory is a theory about how to make optimal decisions under risk. It has a normative interpretation, which economists once thought applied in all situations to rational agents but now tend to regard as a useful and insightful first-order approximation. In empirical applications, a number of violations have been shown to be systematic, and these falsifications have deepened understanding of how people actually decide. In 1979, Daniel Kahneman and Amos Tversky presented prospect theory, which showed empirically, among other things, how individuals' preferences over the same choices are inconsistent depending on how those choices are presented.[22] This is partly because people differ in their preferences and parameters, and partly because behavior may differ across individuals even when they face the same choice problem.

Like any mathematical model, expected utility theory is an abstraction and simplification of reality. The mathematical correctness of expected utility theory and the salience of its primitive concepts do not guarantee that expected utility theory is a reliable guide to human behavior or optimal practice. The mathematical clarity of expected utility theory has helped scientists design experiments to test its adequacy, and to distinguish systematic departures from its predictions. This has led to the field of behavioral finance, which has produced deviations from expected utility theory to account for the empirical facts.

Conservatism in updating beliefs

Psychologists have discovered systematic violations of probability calculations and behavior by humans. This has been evidenced with examples such as the Monty Hall problem, where it was demonstrated that people do not revise their degrees of belief in line with the relevant probabilities, and by arguments that probabilities cannot be applied to single cases. In contrast, the standard method for updating probability distributions in the light of evidence uses conditional probability, namely Bayes' rule. An experiment on belief revision has suggested that humans change their beliefs faster when conditioning on evidence with Bayes' theorem than when using informal judgment; that is, informal updating is conservative relative to the Bayesian benchmark.[23]
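A minimal sketch of that Bayesian benchmark in an Edwards-style task (two hypothetical urns with assumed chip proportions): after only a few draws, Bayes' rule already implies a far more extreme posterior than the cautious revisions subjects typically report.

```python
# Two hypothetical urns: urn A holds 70% red chips, urn B holds 30% red chips.
# One urn is chosen at random (prior 0.5 each) and chips are drawn with replacement.
def posterior_urn_a(draws, p_red_a=0.7, p_red_b=0.3, prior_a=0.5):
    like_a, like_b = prior_a, 1 - prior_a
    for chip in draws:
        like_a *= p_red_a if chip == "red" else 1 - p_red_a
        like_b *= p_red_b if chip == "red" else 1 - p_red_b
    return like_a / (like_a + like_b)

print(posterior_urn_a(["red"] * 4 + ["blue"]))  # ~0.93 already, after only five draws
```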

According to these empirical results, decision theory has given almost no recognition to the distinction between the problem of justifying its theoretical claims about the properties of rational belief and desire and the problem of describing how people actually form beliefs and make decisions. One of the main reasons is that people's basic tastes and preferences over losses cannot be represented by a single utility function, since they change under different scenarios.[24]

Irrational deviations

Behavioral finance has produced several generalized expected utility theories to account for instances where people's choices deviate from those predicted by expected utility theory. These deviations are described as "irrational" because they can depend on the way the problem is presented, not on the actual costs, rewards, or probabilities involved. Particular theories, including prospect theory, rank-dependent expected utility and cumulative prospect theory, are nevertheless considered insufficient to predict preferences and expected utility in full generality.[25] Experiments have also shown systematic violations and generalizations of the results of Savage and von Neumann–Morgenstern, because preferences and utility functions constructed under different contexts differ significantly. This is demonstrated by the contrast between individual preferences in insurance and lottery contexts, which shows the degree of indeterminacy of expected utility theory.

In practice there will be many situations where the probabilities are unknown, and one is operating under uncertainty. In economics, Knightian uncertainty or ambiguity may occur. Thus one must make assumptions about the probabilities, but then the expected values of various decisions can be very sensitive to the assumptions. This is particularly a problem when the expectation is dominated by rare extreme events, as in a long-tailed distribution. Alternative decision techniques are robust to uncertainty of probability of outcomes, either not depending on probabilities of outcomes and only requiring scenario analysis (as in minimax or minimax regret), or being less sensitive to assumptions.

Bayesian approaches to probability treat it as a degree of belief and thus they do not draw a distinction between risk and a wider concept of uncertainty: they deny the existence of Knightian uncertainty. They would model uncertain probabilities with hierarchical models, i.e. where the uncertain probabilities are modelled as distributions whose parameters are themselves drawn from a higher-level distribution (hyperpriors).

Preference reversals over uncertain outcomes

Starting with studies such as Lichtenstein & Slovic (1971), it was discovered that subjects sometimes exhibit signs of preference reversals with regard to their certainty equivalents of different lotteries. Specifically, when eliciting certainty equivalents, subjects tend to value "p bets" (lotteries with a high chance of winning a low prize) lower than "$ bets" (lotteries with a small chance of winning a large prize). When subjects are asked which lotteries they prefer in direct comparison, however, they frequently prefer the "p bets" over "$ bets".[26] Many studies have examined this "preference reversal", from both an experimental (e.g., Plott & Grether, 1979)[27] and theoretical (e.g., Holt, 1986)[28] standpoint, indicating that this behavior can be brought into accordance with neoclassical economic theory under specific assumptions.

The problem of interpersonal utility comparisons

Understanding utilities in terms of personal preferences is challenging because of what is known as the problem of interpersonal utility comparisons, or the social welfare function. It is frequently pointed out that ordinary people routinely make such comparisons; however, it is disputed whether they are empirically meaningful, because interpersonal comparisons do not reveal strength of desire, which is highly relevant for measuring the expected utility of a decision. In other words, even if we know that X and Y have similar or identical preferences (e.g. both love cars), we cannot determine which of them loves cars more or is willing to sacrifice more to get one.[29][30]

Recommendations

In conclusion, expected utility theories such as those of Savage and von Neumann–Morgenstern have to be improved or replaced by more general representation theorems.

There are three components in the psychology field that are seen as crucial to the development of a more accurate descriptive theory of decision under risk.[24][1] In addition, psychologists who study subjective Bayesian reasoning should formulate statements carefully and without ambiguity to avoid confusion.

1) A theory of the decision framing effect (psychology)

2) A better understanding of the psychologically relevant outcome space

3) A psychologically richer theory of the determinants

Mixture models of choice under risk

In this model, Conte (2011) found that there is heterogeneity in behaviour both between individuals and within individuals. Applying a mixture model fits the data significantly better than either of the two preference functionals individually.[31] It also helps to estimate preferences much more accurately than older economic models because it takes heterogeneity into account. In other words, the model assumes that different agents in the population have different preference functionals, and it estimates the proportion of each group in order to account for all forms of heterogeneity.
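A compressed sketch of the idea (not the authors' estimator: the lottery task, the two candidate preference functionals, and the logit choice noise are placeholder assumptions). The population likelihood is a weighted mixture of two preference models, and the mixing proportion is estimated by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# Placeholder task: 200 subjects each choose once between a safe and a risky lottery.
safe = [(1.0, 30.0)]
risky = [(0.5, 0.0), (0.5, 70.0)]

def eu(lottery, u):
    return sum(p * u(x) for p, x in lottery)

def p_choose_risky(u, noise=2.0):
    """Logit probability of choosing the risky lottery under utility u."""
    d = eu(risky, u) - eu(safe, u)
    return 1.0 / (1.0 + np.exp(-d / noise))

u_averse = np.sqrt       # type 1: risk-averse expected-utility agent
u_neutral = lambda x: x  # type 2: risk-neutral (expected-value) agent

# Simulate a population that is 60% type 1 and 40% type 2.
is_averse = rng.random(200) < 0.6
choices = np.where(is_averse,
                   rng.random(200) < p_choose_risky(u_averse),
                   rng.random(200) < p_choose_risky(u_neutral))

def neg_log_likelihood(mix):
    p = mix * p_choose_risky(u_averse) + (1 - mix) * p_choose_risky(u_neutral)
    return -np.sum(np.where(choices, np.log(p), np.log(1 - p)))

fit = minimize_scalar(neg_log_likelihood, bounds=(0.01, 0.99), method="bounded")
print(fit.x)  # estimated mixing proportion, roughly 0.6 up to sampling noise
```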

Psychological Expected Utility Model[32]

In this model, Caplin (2001) expanded the standard prize space to include anticipatory emotions, such as suspense and anxiety, and their influence on preferences and decisions. The authors replaced the standard prize space with a space of "psychological states", opening up a variety of psychologically interesting phenomena to rational analysis. The model explains how time inconsistency arises naturally in the presence of anticipation and how these anticipatory emotions may change the result of choices. For example, the model finds that anxiety is anticipatory and that the desire to reduce anxiety motivates many decisions. A better understanding of the psychologically relevant outcome space will help theorists to develop a richer theory of the determinants of choice.

See also

References

  1. Schoemaker PJ (1980). "Experiments on Decisions under Risk: The Expected Utility Hypothesis". doi:10.1007/978-94-017-5040-0.
  2. Aase KK (January 2001). "On the St. Petersburg Paradox". Scandinavian Actuarial Journal. 2001 (1): 69–78. doi:10.1080/034612301750077356. ISSN 0346-1238.
  3. "Moral expectation", under Jeff Miller, Earliest Known Uses of Some of the Words of Mathematics (M) Archived 2011-05-11 at the Wayback Machine, accessed 2011-03-24. The term "utility" was first introduced mathematically in this connection by Jevons in 1871; previously the term "moral value" was used.
  4. Conte A, Hey JD, Moffatt PG (May 2011). "Mixture models of choice under risk". Journal of Econometrics. 162 (1): 79–88. doi:10.1016/j.jeconom.2009.10.011.
  5. Oberhelman DD (June 2001). Zalta EN (ed.). "Stanford Encyclopedia of Philosophy". Reference Reviews. 15 (6): 9–9. doi:10.1108/rr.2001.15.6.9.311.
  6. Allais M, Hagen O, eds. (1979). Expected Utility Hypotheses and the Allais Paradox. Dordrecht: Springer Netherlands. doi:10.1007/978-94-015-7629-1. ISBN 978-90-481-8354-8.
  7. Arrow KJ (1965). "The theory of risk aversion". In Saatio YJ (ed.). Aspects of the Theory of Risk Bearing. Reprinted in Essays in the Theory of Risk Bearing. Chicago: Markham Publ. Co., 1971. pp. 90–109.
  8. Pratt JW (January–April 1964). "Risk aversion in the small and in the large". Econometrica. 32 (1/2): 122–136. doi:10.2307/1913738. JSTOR 1913738.
  9. "The St. Petersburg Paradox". Stanford Encyclopedia of Philosophy. 16 June 2008.
  10. Bradley R (2004). "Ramsey's Representation Theorem" (PDF). Dialectica. 58: 483–498.
  11. Elliott E. "Ramsey and the Ethically Neutral Proposition" (PDF). Australian National University.
  12. Briggs RA (2014-08-08). "Normative Theories of Rational Choice: Expected Utility".
  13. Savage LJ (March 1951). "The Theory of Statistical Decision". Journal of the American Statistical Association. 46 (253): 55–67. doi:10.1080/01621459.1951.10500768. ISSN 0162-1459.
  14. Lindley DV (September 1973). "The foundations of statistics (second edition), by Leonard J. Savage. Pp xv, 310. £1·75. 1972 (Dover/Constable)". The Mathematical Gazette. 57 (401): 220–221. doi:10.1017/s0025557200132589. ISSN 0025-5572.
  15. "1. Foundations of probability theory", Interpretations of Probability, Berlin, New York: Walter de Gruyter, 2009-01-21, doi:10.1515/9783110213195.1, ISBN 978-3-11-021319-5
  16. Li Z, Loomes G, Pogrebna G (2017-05-01). "Attitudes to Uncertainty in a Strategic Setting". The Economic Journal. 127 (601): 809–826. doi:10.1111/ecoj.12486. ISSN 0013-0133.
  17. von Neumann J, Morgenstern O (1953) [1944]. Theory of Games and Economic Behavior (Third ed.). Princeton, NJ: Princeton University Press.
  18. Borch K (January 1969). "A note on uncertainty and indifference curves". Review of Economic Studies. 36 (1): 1–4. doi:10.2307/2296336. JSTOR 2296336.
  19. Chamberlain G (1983). "A characterization of the distributions that imply mean-variance utility functions". Journal of Economic Theory. 29 (1): 185–201. doi:10.1016/0022-0531(83)90129-1.
  20. Owen J, Rabinovitch R (1983). "On the class of elliptical distributions and their applications to the theory of portfolio choice". Journal of Finance. 38 (3): 745–752. doi:10.2307/2328079. JSTOR 2328079.
  21. Bell DE (December 1988). "One-switch utility functions and a measure of risk". Management Science. 34 (12): 1416–24. doi:10.1287/mnsc.34.12.1416.
  22. Kahneman D, Tversky A. "Prospect Theory: An Analysis of Decision under Risk". Econometrica. 47 (2): 263–292.
  23. Subjects changed their beliefs faster by conditioning on evidence (Bayes's theorem) than by using informal reasoning, according to a classic study by the psychologist Ward Edwards:
    • Edwards W (1968). "Conservatism in Human Information Processing". In Kleinmuntz, B (ed.). Formal Representation of Human Judgment. Wiley.
    • Edwards W (1982). "Conservatism in Human Information Processing (excerpted)". In Daniel Kahneman, Paul Slovic and Amos Tversky (ed.). Judgment under uncertainty: Heuristics and biases. Cambridge University Press.
    • Phillips LD, Edwards W (October 2008). "Chapter 6: Conservatism in a simple probability inference task (Journal of Experimental Psychology (1966) 72: 346-354)". In Weiss JW, Weiss DJ (eds.). A Science of Decision Making:The Legacy of Ward Edwards. Oxford University Press. p. 536. ISBN 978-0-19-532298-9.
  24. Vind K (February 2000). "von Neumann Morgenstern preferences". Journal of Mathematical Economics. 33 (1): 109–122. doi:10.1016/s0304-4068(99)00004-x. ISSN 0304-4068.
  25. Baratgin J (2015-08-11). "Rationality, the Bayesian standpoint, and the Monty-Hall problem". Frontiers in Psychology. 6: 1168. doi:10.3389/fpsyg.2015.01168. PMC 4531217. PMID 26321986.
  26. Lichtenstein S, Slovic P (1971). "Reversals of preference between bids and choices in gambling decisions". Journal of Experimental Psychology. 89 (1): 46–55. doi:10.1037/h0031207. hdl:1794/22312.
  27. Grether DM, Plott CR (1979). "Economic Theory of Choice and the Preference Reversal Phenomenon". American Economic Review. 69 (4): 623–638. JSTOR 1808708.
  28. Holt C (1986). "Preference Reversals and the Independence Axiom". American Economic Review. 76 (3): 508–515. JSTOR 1813367.
  29. List C (2003). "List C. Are interpersonal comparisons of utility indeterminate?". Erkenntnis. 58 (2): 229–260. doi:10.1023/a:1022094826922. ISSN 0165-0106.
  30. Rossi M (April 2014). "Simulation theory and interpersonal utility comparisons reconsidered". Synthese. 191 (6): 1185–1210. doi:10.1007/s11229-013-0318-9. ISSN 0039-7857.
  31. Conte A, Hey JD, Moffatt PG (May 2011). "Mixture models of choice under risk". Journal of Econometrics. 162 (1): 79–88. doi:10.1016/j.jeconom.2009.10.011.
  32. Caplin A, Leahy J (2001-02-01). "Psychological Expected Utility Theory and Anticipatory Feelings". The Quarterly Journal of Economics. 116 (1): 55–79. doi:10.1162/003355301556347. ISSN 0033-5533.

Further reading

  • Anand P (1993). Foundations of Rational Choice Under Risk. Oxford: Oxford University Press. ISBN 978-0-19-823303-9.
  • Arrow KJ (1963). "Uncertainty and the Welfare Economics of Medical Care". American Economic Review. 53: 941–73.
  • de Finetti B (September 1989). "Probabilism: A Critical Essay on the Theory of Probability and on the Value of Science (translation of 1931 article)". Erkenntnis. 31.
  • de Finetti B (1937). "La Prévision: ses lois logiques, ses sources subjectives". Annales de l'Institut Henri Poincaré.
  • de Finetti B (1964). "Foresight: its Logical Laws, Its Subjective Sources (translation of the 1937 article in French)". In Kyburg HE, Smokler HE (eds.). Studies in Subjective Probability. New York: Wiley.
  • de Finetti B (1974). Theory of Probability. Translated by Smith AF. New York: Wiley.
  • Morgenstern O (1976). "Some Reflections on Utility". In Andrew Schotter (ed.). Selected Economic Writings of Oskar Morgenstern. New York University Press. pp. 65–70. ISBN 978-0-8147-7771-8.
  • Peirce CS, Jastrow J (1885). "On Small Differences in Sensation". Memoirs of the National Academy of Sciences. 3: 73–83.
  • Pfanzagl J (1967). "Subjective Probability Derived from the Morgenstern-von Neumann Utility Theory". In Martin Shubik (ed.). Essays in Mathematical Economics In Honor of Oskar Morgenstern. Princeton University Press. pp. 237–251.
  • Pfanzagl J, Baumann V, Huber H (1968). "Events, Utility and Subjective Probability". Theory of Measurement. Wiley. pp. 195–220.
  • Plous S (1993). "Chapter 7 (specifically) and 8, 9, 10, (to show paradoxes to the theory)". The psychology of judgment and decision making.
  • Ramsey FP (1931). "Chapter VII: Truth and Probability" (PDF). The Foundations of Mathematics and other Logical Essays.
  • Schoemaker PJ (1982). "The Expected Utility Model: Its Variants, Purposes, Evidence and Limitations". Journal of Economic Literature. 20: 529–563.
  • Davidson D, Suppes P, Siegel S (1957). Decision-Making: An Experimental Approach. Stanford University Press.
  • Aase KK (2001). "On the St. Petersburg Paradox". Scandinavian Actuarial Journal (1): 69–78.
  • Briggs RA (2019). "Normative Theories of Rational Choice: Expected Utility". In Zalta EN (ed.). The Stanford Encyclopedia of Philosophy.
  • Hacking I (1980). "Strange Expectations". Philosophy of Science. 47: 562–567.
  • Peters O (2011) [1956]. "The time resolution of the St Petersburg paradox". Philosophical Transactions of the Royal Society of London. Series A: Mathematical, Physical, and Engineering Sciences. 369: 4913–4931.
  • Schoemaker PJ (1980). "Experiments on Decisions under Risk: The Expected Utility Hypothesis.". Experiments on Decisions under Risk.