Pros and cons of utility theory

In EU theory, the shape of the utility function determines risk attitudes. For example, for x > 0, if u(x) = x^b, then the person should be risk-averse if b < 1 and risk-seeking if b > 1. However, many people are both risk-seeking when the probability p of a gain is small and risk-averse when p is moderate to large. Furthermore, many people show risk aversion for small probabilities of heavy losses (they buy insurance), yet they accept risks to avoid certain or highly probable losses.
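
As a quick illustration of how the exponent b drives risk attitude under EU, the sketch below compares the expected utility of a 50–50 gamble with the utility of its expected value for b < 1 and b > 1. The payoffs, probabilities, and function names are illustrative assumptions, not taken from the text.

```python
# Illustrative sketch: risk attitude implied by u(x) = x**b under expected utility.
# The payoffs and probabilities below are arbitrary choices for the example.

def power_utility(x, b):
    """Utility u(x) = x**b for x >= 0."""
    return x ** b

def expected_utility(outcomes, probs, b):
    return sum(p * power_utility(x, b) for x, p in zip(outcomes, probs))

outcomes, probs = [0.0, 100.0], [0.5, 0.5]   # a 50-50 gamble
expected_value = 50.0

for b in (0.5, 2.0):
    eu_gamble = expected_utility(outcomes, probs, b)
    u_sure = power_utility(expected_value, b)
    attitude = "risk-averse" if eu_gamble < u_sure else "risk-seeking"
    print(f"b = {b}: EU(gamble) = {eu_gamble:.2f}, u(EV) = {u_sure:.2f} -> {attitude}")
```

With b = 0.5 the sure amount is preferred to the gamble of equal expected value (risk aversion); with b = 2 the ordering reverses.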

Whereas Allais considered paradoxical choices ‘rational’ and the theory to be wrong, Savage considered paradoxical choices to be human ‘errors’ that should be corrected by theory. Many psychologists took the contradiction between theory and behavior to mean that descriptive theory need not be rational. In this purely descriptive approach, a choice paradox is merely a clear contradiction between theory and human behavior.

Paradoxical risk attitudes and the Allais paradoxes can be described by a theory in which decision weights are a function of probabilities (Edwards 1954, Kahneman and Tversky 1979). Prospect theory (Kahneman and Tversky 1979) described many of the empirical phenomena known by the 1970s. However, this theory was restricted to gambles with no more than two non-zero payoffs and it included a number of seemingly ad hoc editing rules to avoid implications that were considered both irrational and empirically wrong.

URL: https://www.sciencedirect.com/science/article/pii/B008043076700632X

Decision Theory: Bayesian

G. Parmigiani, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3 Axiomatic Foundations

Axiomatic expected utility theory has been concerned with identifying axioms, stated in terms of preferences among actions, that are satisfied if and only if one's behavior is consistent with expected utility, thus providing a foundation for the use of the Bayes action. The fundamental axiom system is that of Savage (1954). A similar and relatively simple formulation, which provides insights into the most important assumptions of expected utility for statistical purposes, was proposed by Anscombe and Aumann (1963). Critical discussions and analytical details are found in Kreps (1988), whose formulation is followed here, and Schervish (1995).

For simplicity, assume that Θ and Z are finite; n_Θ is the number of elements in Θ. A simple action is a function a from Θ to Z. Although interest is often in simple actions, Anscombe and Aumann build their theory in terms of more general objects that allow for a state of the world θ to determine the consequence of an action only up to a known probability distribution. In their theory an action is a function a such that a(θ, ·) is a known probability distribution on Z. The boldface type distinguishes these general actions from the simple action a(θ) = z, corresponding to the special case in which, for every θ, there is a z such that a(θ, z) = 1.

The theory requires a rich set of possible actions. Specifically, for any two actions a and a′ in A, the composite action αa + (1−α)a′, defined pointwise by the convex combination αa(θ, z) + (1−α)a′(θ, z) with α ∈ [0, 1], is also in A. Also, a state θ is said to be null if the DM is indifferent between any two actions that differ only in what happens if θ occurs.
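
A minimal sketch of this setup, assuming finite Θ and Z and representing a general action as an |Θ| × |Z| array of probabilities a(θ, z). The mixing weight and the two example actions below are made up for illustration.

```python
# Sketch: Anscombe-Aumann actions over finite Theta (rows) and Z (columns).
# Each row of an action is a probability distribution over consequences.

def composite(a, a_prime, alpha):
    """Pointwise convex combination alpha*a + (1 - alpha)*a_prime."""
    return [
        [alpha * p + (1 - alpha) * q for p, q in zip(row_a, row_ap)]
        for row_a, row_ap in zip(a, a_prime)
    ]

# Two states, three consequences (illustrative numbers).
a = [[1.0, 0.0, 0.0],      # a simple action: in state 1, consequence z1 for sure
     [0.0, 0.5, 0.5]]      # in state 2, a lottery over z2 and z3
a_prime = [[0.2, 0.8, 0.0],
           [0.0, 0.0, 1.0]]

mix = composite(a, a_prime, alpha=0.5)
assert all(abs(sum(row) - 1.0) < 1e-9 for row in mix)  # each row remains a distribution
print(mix)
```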

Choosing actions according to the principle of expected utility is equivalent to holding preferences among actions that satisfy the five axioms of Table 1.

Table 1. Axioms of the Anscombe–Aumann expected utility theory

1. Weak order: ≻ on A is a preference relation.
2. Independence: If a ≻ a′ and α ∈ (0, 1), then for every a″ ∈ A, αa + (1−α)a″ ≻ αa′ + (1−α)a″.
3. Archimedean: If a ≻ a′ ≻ a″, then there are α, β ∈ (0, 1) such that αa + (1−α)a″ ≻ a′ ≻ βa + (1−β)a″.
4. There exist a and a′ in A such that a′ ≻ a.
5. State Independence: For all a ∈ A and any two distributions p and q on Z: if (a(1,·), …, a(θ−1,·), p, a(θ+1,·), …, a(n_Θ,·)) ≻ (a(1,·), …, a(θ−1,·), q, a(θ+1,·), …, a(n_Θ,·)) for some state θ, then for all non-null θ′, (a(1,·), …, a(θ′−1,·), p, a(θ′+1,·), …, a(n_Θ,·)) ≻ (a(1,·), …, a(θ′−1,·), q, a(θ′+1,·), …, a(n_Θ,·)).

The weak-ordering axiom requires that any two actions can be compared, and compared in a transitive way. It effectively permits representation of the preferences as one-dimensional. The Archimedean axiom bars the DM from preferring one action to another so strongly that no composite action involving a third action could reverse the preference. This axiom carries most of the weight in representing the preference dimension as the real line. The Independence axiom requires that two composite lotteries be compared solely on the basis of the component in which they differ. It carries most of the weight in guaranteeing the ‘expected’ in the expected utility principle. Axiom 4 is a structural condition requiring that not all states be null.

The first three axioms guarantee that there is a linear function f: A → ℝ that represents the preference relation ≻. It also follows that the first three axioms are necessary and sufficient for the existence of real-valued functions u_1, …, u_{n_Θ} such that:

(6) a ≻ a′ ⇔ ∑_θ ∑_z u_θ(z) a(θ, z) > ∑_θ ∑_z u_θ(z) a′(θ, z).

In this representation, the utility of consequences is specific to the state of the world. Also, each utility is defined up to a state-specific change of scale. This representation is consistent with expected utility maximization, but it does not provide a unique decomposition of preferences into probability of states of the world and utility of consequences. Such a decomposition is achieved by the State Independence axiom, which asks the DM to consider two comparisons: in the first, the two actions are identical except that in state θ one has probability distribution p over consequences and the other has q. In the second, the two actions are again identical, except that now in state θ′ one has distribution p and the other q. If the DM prefers the action with distribution p in the first comparison, this preference will have to hold in the second comparison as well. So the preference for p over q is independent of the state. This axiom guarantees the separation of utility and probability. The main theorem about the representation of preferences via utilities and probabilities is then:

Axioms 1–5 are necessary and sufficient for the existence of a nonconstant function u: Z → ℝ and a probability distribution π on Θ such that

(7) a ≻ a′ ⇔ ∑_θ π(θ) ∑_z u(z) a(θ, z) > ∑_θ π(θ) ∑_z u(z) a′(θ, z).

Moreover, the probability distribution π is unique, and u is unique up to a positive linear transformation.

In particular, for simple actions the summation over z drops out, and the expectations above reduce to ∑_θ π(θ) u(a(θ)) and ∑_θ π(θ) u(a′(θ)).
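
The comparison in Eqn. (7) can be spelled out numerically as below. The probability distribution π, the utility u, and the example actions are hypothetical, chosen only to show the double summation and its simplification for simple actions.

```python
# Sketch of Eqn. (7): compare two actions by sum_theta pi(theta) sum_z u(z) a(theta, z).

def expected_utility(action, pi, u):
    return sum(
        pi[theta] * sum(u[z] * action[theta][z] for z in range(len(u)))
        for theta in range(len(pi))
    )

pi = [0.3, 0.7]               # probability over two states (hypothetical)
u = [0.0, 1.0, 4.0]           # utility of three consequences (hypothetical)

a = [[0.0, 1.0, 0.0],         # a simple action: consequence z2 in state 1 ...
     [0.0, 0.0, 1.0]]         # ... and z3 in state 2
a_prime = [[0.5, 0.5, 0.0],
           [0.2, 0.3, 0.5]]

ua, uap = expected_utility(a, pi, u), expected_utility(a_prime, pi, u)
print(ua, uap, "a preferred" if ua > uap else "a' preferred")

# For the simple action a, the inner sum collapses to u(a(theta)):
assert abs(ua - (pi[0] * u[1] + pi[1] * u[2])) < 1e-9
```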

Axiomatic systems such as Savage's or Anscombe and Aumann's provide the logical foundation for decision making using expected utility in both decision analysis and statistical decision theory. The appropriateness and generality of these axiom systems are at the center of a debate which is critical for statistical practice. Kadane et al. (1999) investigate the implications and limitations of various axiomatizations, and suggest possible generalizations.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767004034

Software Piracy

A. Graham Peace, in Encyclopedia of Information Systems, 2003

X.B. Deterrence Measures

The earlier EUT discussion of expected costs with respect to punishment levels and probability of punishment is closely linked to deterrence theory. The punishment probability factor and the punishment level factor described above are referred to in the deterrence theory literature as punishment certainty and punishment severity, respectively. As with EUT, deterrence theory proposes that, as these factors are increased, the level of illegal behavior should decrease: the unwanted behavior can be deterred through the threat of punishment. Many crimes against property are related to the expected gains of the crime versus the expected costs at the margin. The perceived low probability of being caught may be a major factor in the decision to pirate software. The legal system, in most countries, is founded on the concept described above.

Punishment, such as fines or prison sentences, is heralded as a deterrent to unwanted behavior. This is also true in the software piracy arena. As discussed earlier, many legal mechanisms now exist to punish software pirates. Clearly, the goal is to deter pirates through the threat of both financial and physical punishment. However, these mechanisms are only useful if actually enforced. While pirates are regularly prosecuted and punished in the United States, this is not the case globally. The failure to enforce copyright laws has been a source of conflict in trade negotiations between the U.S. and countries such as China, where piracy is rampant and enforcement of international antipiracy agreements is minimal. Because the vast majority of software is produced in the United States, there is little incentive in other countries for the governments to enforce laws that are aimed at protecting foreign interests while harming local businesses. As stated in Section V, there is evidence that the development and enforcement of software piracy legislation is directly related to the existence of a domestic software industry. Those countries that have a domestic software industry are more likely to develop and enforce legislation designed to protect intellectual property rights.

There are numerous examples of legal punishments being handed out in many countries. In the U.S., the first software pirate convicted under the 1997 NET Act (described in Section V) was sentenced in 1999 to 2 years of probation for using the Internet to pirate software. He could have been sentenced to up to 3 years in prison and a maximum fine of $250,000. Similarly, a Virginia man faces up to 1 year in prison after pleading guilty to setting up a web site that made illegally copied software available for easy downloading. The BSA has collected more than $47 million in damages in the United States in the past 7 years, including a judgment of $80,000 against the city of Issaquah, Washington, which admitted to using unlicensed software on its computers. In Europe, there is a direct correlation between the adoption of the EU Computer Programs Directive and the decreasing level of software piracy, indicating that deterrence measures have been an effective tool in combating piracy.

URL: https://www.sciencedirect.com/science/article/pii/B0122272404001623

Risk: Theories of Decision and Choice

M. Weber, in International Encyclopedia of the Social & Behavioral Sciences, 2001

1.4 Relative Risk Aversion

All the results in this section are based on expected utility theory. However, it should be pointed out that utility functions not only measure decision-makers' risk attitudes; they simultaneously measure the marginal value of the good to be evaluated. An example illustrates this point.

Suppose one has to evaluate a lottery which gives eight oranges if a coin flip comes up heads and zero oranges if it comes up tails. Assume that the certainty equivalent is three oranges, i.e., the risk premium is one orange (the expected value of four oranges minus the certainty equivalent), and thus the decision-maker is considered risk averse. Now, it could well be that this positive risk premium has nothing to do with any ‘intuitive’ notion of risk aversion. For instance, if the decision-maker likes three oranges half as much as eight oranges, then the average value of the lottery is equal to the value of three oranges, i.e., the risk premium can equally well be explained by strength-of-preference considerations. The notion of relative risk aversion, introduced by Dyer and Sarin (1982), helps to disentangle strength of preference from utility. The concept of relative risk aversion defines risk aversion relative to the strength of preference. The above example shows a decision-maker who is relatively risk neutral.
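
The orange example can be restated in a few lines. The numbers below simply encode the assumptions already made in the text, with the value of eight oranges normalized to 1; nothing else is added.

```python
# Worked restatement of the orange lottery (50% chance of 8 oranges, 50% chance of 0).

value = {0: 0.0, 3: 0.5, 8: 1.0}   # strength of preference: 3 oranges liked half as much as 8

expected_value_of_lottery = 0.5 * value[8] + 0.5 * value[0]   # = 0.5
print(expected_value_of_lottery == value[3])                  # True: certainty equivalent is 3 oranges

# The expected number of oranges is 4 and the certainty equivalent is 3, so the risk premium
# is 1 orange; yet measured against the value function the decision-maker is risk neutral.
```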

URL: https://www.sciencedirect.com/science/article/pii/B0080430767006355

Decision Making, Psychology of

J. van der Pligt, in International Encyclopedia of the Social & Behavioral Sciences, 2001

4.1 Prospect Theory

Kahneman and Tversky (1979) developed prospect theory to remedy the descriptive failures of SEU theories of decision making. Prospect theory attempts to describe and explain decisions under uncertainty. Like SEU theories, prospect theory assumes that the value of an option or alternative is calculated as the summed products over specified outcomes. Each product consists of a utility and a weight attached to the objective probability. Both the value function and the probability weighting function are nonlinear. The two functions are not given in closed mathematical form but have a number of important features. The most important feature of the probability weighting function is that small probabilities are overweighted and large probabilities are underweighted. The probability weighting function is generally not well behaved near the end-points. Extremely low probability outcomes can be exaggerated or ignored entirely. Similarly, small differences between high probability and certainty are sometimes neglected, sometimes accentuated. According to Kahneman and Tversky this is because people find it difficult to comprehend and evaluate extreme probabilities.

The value function is defined in terms of gains and losses relative to a psychologically neutral reference point. The value function is S-shaped; concave in the region of gains above the reference point, convex in the region of losses (see Fig. 1). Thus, each unit increase in gain (loss) has decreasing value as gain (loss) increases. In other words, the subjective difference between gaining nothing and gaining $100 is greater than the difference between gaining $100 and gaining $200. Finally, the value function is steeper for losses than for gains. This implies that losing $100 is more unpleasant than gaining $100 is pleasant.
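
Although the 1979 paper gives no closed form, a parametric form from later work (Tversky and Kahneman, 1992) reproduces the three features just listed. The functional form and the parameter values below are conventional estimates and are assumptions made here for illustration only.

```python
# Sketch of an S-shaped value function: concave for gains, convex for losses, steeper for losses.

ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25   # commonly cited estimates; assumed for illustration

def value(x):
    """Prospect-theory-style value of a gain/loss x relative to the reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** BETA)

# Diminishing sensitivity: the first $100 gained matters more than the second $100.
print(value(100) - value(0) > value(200) - value(100))   # True

# Loss aversion: losing $100 hurts more than gaining $100 pleases.
print(abs(value(-100)) > value(100))                     # True
```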

Figure 1. Prospect theory: hypothetical value function

Prospect theory can help to explain both the violation of the sure thing principle and some of the framing effects discussed above. Thus, for respondents who won the first gamble in the Tversky and Shafir (1992) study, the average of the values of $100 and $400 may well be greater than the value of the $200 they already won. After losing, the negative value of −$100 is less than the average of the values of −$200 and +$100. When people do not know whether they won or lost, they will compare the possible outcomes with the zero-point, and in that case the gamble is not very attractive. Prospect theory can also help to explain the framing effects discussed above. The probability weighting function and the value function together account for the higher attractiveness of the risky option (in the case of losses) and of the risk-avoiding option (in the case of gains).

Most other descriptive approaches assume that people rely on a variety of strategies or heuristics for solving decision problems. Experience will affect the availability of these strategies, and strategy choice will also be affected by the expected advantages (benefits) and disadvantages (costs) of the chosen strategy. For many decisions an exhaustive analysis such as prescribed by SEU-theory simply is not worth the trouble. Thus, for many problems people aim for an acceptable solution and not necessarily the optimal solution due to the costs (time, effort) of ‘calculating’ the best possible option.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767017502

Utility and Subjective Probability: Empirical Studies

B.A. Mellers, in International Encyclopedia of the Social & Behavioral Sciences, 2001

1 Prospect Theory

In a now classic paper on risky choice, Kahneman and Tversky (1979) proposed an alternative to expected utility theory called prospect theory. Rational rules were replaced with psychological principles. The value function in prospect theory applied to changes in wealth, not total wealth. Furthermore, changes had diminishing marginal value, such that a change from $0 to $10 had greater impact than an identical change from $1000 to $1010. Finally, negative changes had greater impact than positive changes of equal magnitude, an asymmetry known as loss aversion.

Prospect theory also made psychological assumptions about decision weights. Decision weights were nonlinearly related to objective probabilities; weights for small probabilities were larger than objective probabilities, and weights for large probabilities were smaller. In addition, weights at the endpoints were discontinuous. An event with no chance of occurring was psychologically different from an event with a one percent chance of occurrence (e.g., contracting a horrible disease), and a sure thing was psychologically different from an event with a 99 percent chance of occurring (e.g., winning a million dollars in the lottery).

Prospect theory accounted for a variety of empirical phenomena. One example was the reflection effect, which suggested that risk attitudes varied around the status quo. Consider a choice between a gamble with an 80 percent chance to win $4,000 and a sure win of $3,000. Most people prefer the sure thing. Now consider a choice between a gamble with an 80 percent chance to lose $4,000 and a sure loss of $3,000. Most people prefer the gamble. Although expected utility theory assumed that risk attitudes were constant across all levels of wealth, prospect theory asserted that the shape of the value function differed in the gain and loss domains. Preferences were risk averse in the gain domain and risk seeking in the loss domain. Despite its success, prospect theory had a major drawback: it predicted that people would violate stochastic dominance. Stochastic dominance implies that a decision maker will never select an alternative when another available alternative is at least as good in every respect and better in some. Prospect theory predicted that, in some cases, people would choose the inferior option. Despite evidence that people do violate stochastic dominance, there was a strong desire to find a general representation that served both normative and descriptive needs. Interest turned to cumulative rank-dependent theories.
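
The reflection effect in this example can be reproduced with the parametric forms later used in cumulative prospect theory (Tversky and Kahneman, 1992). The functional forms and parameter values below are assumptions made for illustration and are not part of the 1979 formulation; the same weighting function is applied to gains and losses for simplicity.

```python
# Sketch: reflection effect for (80% chance of $4,000) vs. a sure $3,000, in gains and losses.

ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61   # commonly cited parameter estimates (assumed here)

def v(x):                                  # S-shaped value function with loss aversion
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

def w(p):                                  # inverse-S probability weighting
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

gain_gamble, sure_gain = w(0.8) * v(4000), v(3000)
loss_gamble, sure_loss = w(0.8) * v(-4000), v(-3000)

print("gains:  prefer", "gamble" if gain_gamble > sure_gain else "sure thing")   # sure thing
print("losses: prefer", "gamble" if loss_gamble > sure_loss else "sure thing")   # gamble
```

With these assumed parameters the model prefers the sure gain but the risky loss, matching the modal choices reported above.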

URL: https://www.sciencedirect.com/science/article/pii/B0080430767006392

Rational Choice and Some Difficulties for Consequentialism*

Paul Anand, in Philosophy of Economics, 2012

1 Introduction

Rational choice constitutes a field in which philosophers share interests particularly with economists, but also with political theorists, sociologists, and psychologists. Theory, over the past century, has been dominated by the axiomatic approach, which offers a distinctive contribution to our understanding of deliberation and choice, and one that contains some heavyweight theories that have been highly influential. Within both the programme of research and its intellectual diaspora, two theories stand out as demanding our attention: expected utility theory, previously the main workhorse for the analysis of decision-making, and Arrow's impossibility theorem, a theorem that suggests reasonable social choice mechanisms are destined to be undemocratic. Both theories are axiomatic and share the fact that they were taken to identify reasonable behaviour in individual or group settings.

However, our ideas about both theories have changed since they first emerged, and they continue to evolve in ways that are quite dramatic. So far as utility theory is concerned, there is widespread acceptance that subjective expected utility theory is false in significant respects, growing recognition that it is technically possible to construct more general theories, and acceptance that rationality does not require transitivity (a view I have argued for and which Rabinowicz [2000] calls ‘the modern view’1). The issues surrounding Arrow's Theorem are slightly different, but it can similarly be seen as a theory that heralded a body of research that is coming to take radically different perspectives on concepts like democracy, social choice and the nature of human welfare. Of course there are differences between the contributions made by expected utility to decision theory, and by Arrow's impossibility theorem to social choice, but there are some common issues that arise from the fact that both are axiomatic approaches to decision science, and it is some of these themes that I wish to focus on in this chapter.

Specifically, I want to argue that whilst axiomatic arguments concerning the nature of rational and social choice are important, intellectually impressive and even aesthetically pleasing, they are also prone to certain weaknesses. These weaknesses are often logical or methodological but they are also intimately related to the doctrine of consequentialism which I view as being poorly designed for picking out some key structures and intuitions to do with reasonable choice in individual or social settings (even if it is well suited to do this for some issues). This chapter is therefore in some ways a potted summary of a theme which I have been exploring for some time and which I hope will help readers come to grips with some of the transformations that have gone on in the field over the past 25 years.

To this end, I shall review the interpretation and justification of three assumptions (transitivity, independence and non-dictatorship) that sit at the heart of the theories mentioned above. In each case, I try to offer a critical assessment of the assumption on its own terms, and show that neither transitivity nor independence should be taken as canons of rational choice and that Arrow's characterisation of dictatorship is questionable in a number of respects. I shall not be arguing against the use of the axiomatic method, nor shall I be arguing that the two theories under consideration are not the intellectual giants, worthy of our attention, that they are widely taken to be. Rather my conclusion will be that whilst we need and benefit from such theories, their ‘take-home message’ is rarely as decisive as the formal representation theorems might lead one to suppose. The rest of the chapter is structured as follows. Sections 2 and 3 provide an overview of arguments why rational agents might wish to violate the transitivity and independence axioms respectively, whilst section 4 focuses on some difficulties with Arrow's characterisation of dictatorship. Section 5 discusses the identification problem in the context of the transitivity assumption (common to both theories, though arguably in a way that is most relevant to its application in decision theory), whilst section 6 provides a short summary that offers some thoughts about the consequences for future research. I focus on transitivity not simply because its simplicity belies the existence of some unsuspected difficulties, but also because it is a shared cornerstone of decision theory and social choice under the assumption of consequentialism. A theme that I hope will begin to emerge is that the reasons for rejecting particular axioms, in philosophical terms at least, are often related to a common concern about consequentialism's lack of comprehensiveness.

URL: https://www.sciencedirect.com/science/article/pii/B9780444516763500178

Optimal solutions for optimization in practice

Daryl Roxburgh, ... Tim Matthews, in Optimizing Optimization, 2010

3.6.2 Discussion

Von Neumann and Morgenstern (1944, hereafter vNM) define utility as a function U(W) over an investor's wealth W. To make use of this decision-theoretic foundation in portfolio construction, Markowitz approximated vNM expected utility by his well-known MV utility function; Levy and Markowitz (1979) show that this approximation is justified for normal return distributions or quadratic utility functions. However, as demonstrated by Mandelbrot (1963), the assumption of normal return distributions does not hold for many assets. Also, many investors would not describe their perception of risk through variance. They relate risk to “bad outcomes” rather than to a symmetrical dispersion around a mean. Sharpe (1964) notes that even Markowitz suggested a preference for semivariance over variance, settling on variance only because of the computational constraints of the time.

One attempt to describe risk more suitably was the use of VaR (Jorion, 2006). While it focuses on the downside of a return distribution and accommodates nonnormal distributions, it comes with other shortcomings. VaR pays no attention to the magnitude of losses beyond its value, implying that an investor is indifferent between losing the VaR and suffering a possibly considerably bigger loss, and VaR is complicated to optimize. Using VaR for nonnormal distributions can cause the combination of two assets to have a higher VaR than the sum of both assets' VaRs, i.e., VaR lacks subadditivity. Subadditivity is one of four properties for measures of risk that Artzner, Delbaen, Eber, and Heath (1999) classify as desirable. Risk measures satisfying all four are then called coherent.
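
A standard textbook illustration of this lack of subadditivity, with numbers that are hypothetical and not from the chapter: two independent loans, each with a 4% default probability, have zero 95% VaR individually but a positive 95% VaR when combined.

```python
# Sketch: 95% VaR is not subadditive for two independent loans (hypothetical numbers).
# Each loan loses 100 with probability 0.04, otherwise 0.

from itertools import product

def var(loss_dist, level=0.95):
    """Smallest loss threshold whose cumulative probability reaches `level`."""
    cum = 0.0
    for loss, p in sorted(loss_dist.items()):
        cum += p
        if cum >= level:
            return loss

single = {0: 0.96, 100: 0.04}
combined = {}
for (l1, p1), (l2, p2) in product(single.items(), repeat=2):
    combined[l1 + l2] = combined.get(l1 + l2, 0.0) + p1 * p2

print(var(single))     # 0   -> each loan's 95% VaR is zero
print(var(combined))   # 100 -> portfolio VaR exceeds 0 + 0, so subadditivity fails
```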

Contrary to VaR, lower partial moments are coherent. Fishburn (1977) introduced these and used them as the risk component in his utility function. Here, an investor would formulate his or her utility relative to target wealth, calling for sign dependence. Final wealth above the target wealth has a linearly increasing impact on utility, while outcomes below the target wealth decrease utility exponentially.
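A minimal sketch of a lower partial moment around a target, in the spirit of Fishburn's (1977) risk measure. The target, the moment order, and the sample returns are assumptions chosen for illustration, not values from the chapter.

```python
# Sketch: lower partial moment of order n around a target return (illustrative data).

def lower_partial_moment(returns, target, n=2):
    """Average of the n-th power of shortfalls below the target; outcomes above it are ignored."""
    shortfalls = [max(target - r, 0.0) ** n for r in returns]
    return sum(shortfalls) / len(returns)

returns = [0.08, -0.03, 0.12, -0.10, 0.02]   # hypothetical scenario returns
target = 0.00

print(lower_partial_moment(returns, target, n=1))   # average shortfall below the target
print(lower_partial_moment(returns, target, n=2))   # penalizes larger shortfalls more heavily
```
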

Closely related is the utility function that Kahneman and Tversky (1979) proposed as part of another descriptive model of choice under uncertainty, offered as an alternative to vNM. They provide strong experimental evidence for the phenomenon that an investor's utility is driven more by the impact of expected gains and losses than by expected return and variance. Considering expected gains and losses (i.e., expected returns relative to a target) is part of their prospect theory framework. The theory also holds that an investor is actually concerned about changes in wealth (and not about the absolute level of wealth) and that he or she experiences a higher sensitivity to losses than to gains. The latter is expressed by a loss-aversion factor (>1) within the prospect utility function.

Hence, it appears that GLO is more amenable to modeling individual utility and is valid for arbitrary distributions; however, it is not set up to use the great deal of valuable information that active managers have accumulated over years of running successful funds. This information is usually in the form of a history of stock or asset alphas and a history of risk model information. Whilst it is possible to use this to improve GLO, it is obvious that gain and loss is a different reward and risk paradigm from the more traditional mean and variance.

URL: https://www.sciencedirect.com/science/article/pii/B9780123749529000038

Decision Making: Nonrational Theories

G. Gigerenzer, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2 Optimizing vs. Nonoptimizing Theories

Rational theories rest on the ideal of optimization; nonrational theories do not. Optimization means the calculation of the maximum (or minimum) of some variable across a number of alternatives or values. For instance, according to a rational theory known as subjective expected utility (SEU) theory, an agent should choose between alternatives (e.g., houses, spouses) by determining all possible consequences of selecting each alternative, estimating the subjective probability and the utility of each consequence, multiplying the probability by the utility, and summing the resulting terms to obtain that alternative's subjective expected utility. Once this computation has been performed for each alternative, the agent chooses the alternative with the highest expected utility. This ‘subjective’ interpretation of SEU has been used to instruct people in making rational choices, but was criticized by decision theorists who argue that preferences are not derived from utilities, but utilities from preferences. In this ‘behavioristic’ interpretation, no claims are made about the existence of utilities in human minds; SEU is only an attempt to describe the choice. People choose as if they are maximizing SEU (see Sect. 3).
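
The ‘as if’ calculation described here can be written in a few lines. The alternatives, probabilities, and utilities below are invented for illustration.

```python
# Sketch: choosing the alternative with the highest subjective expected utility (SEU).

def seu(consequences):
    """Sum of probability * utility over an alternative's possible consequences."""
    return sum(p * u for p, u in consequences)

# Each alternative lists (subjective probability, utility) pairs -- hypothetical numbers.
alternatives = {
    "house A": [(0.7, 10.0), (0.3, 2.0)],
    "house B": [(0.5, 12.0), (0.5, 3.0)],
}

best = max(alternatives, key=lambda name: seu(alternatives[name]))
print(best, {name: seu(c) for name, c in alternatives.items()})
```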

Nonrational theories dispense with the ideal of optimization. For instance, Simon (e.g., 1956, 1982) proposed a nonrational theory known as satisficing, in which an agent is characterized by an aspiration level and chooses the first alternative that meets or exceeds this aspiration level. The aspiration level (e.g., characterization of what would constitute a ‘good-enough’ house) allows the agent to make a decision without evaluating all the alternatives.
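
A sketch of satisficing with an aspiration level, assuming alternatives are inspected in the order they are encountered; the alternatives and scores are made up.

```python
# Sketch: satisficing -- take the first alternative that meets the aspiration level.

def satisfice(alternatives, score, aspiration):
    for alt in alternatives:                 # alternatives examined in encounter order
        if score(alt) >= aspiration:
            return alt                       # stop searching; no further evaluation
    return None                              # nothing good enough found

houses = [{"name": "house 1", "score": 5},
          {"name": "house 2", "score": 8},
          {"name": "house 3", "score": 9}]

choice = satisfice(houses, score=lambda h: h["score"], aspiration=7)
print(choice["name"])   # "house 2": good enough, even though house 3 scores higher
```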

There are several motives for abandoning the ideal of optimization. First, in many real-world situations, no optimizing strategy is known. Even in a game such as chess, which has only a few stable, well-defined rules, no optimal strategy exists that can be computed by a human or a machine. Second, even when an optimizing strategy exists, it may demand unrealistic amounts of knowledge about alternatives and consequences, particularly when the problem is novel and time is scarce. Acquiring the requisite knowledge can conflict with goals such as making a decision quickly; in situations of immediate danger, attempting to optimize can even be deadly. In social and political situations, making a decision can be more important than searching for the best option. Third, strategies that do not involve optimization can sometimes outperform strategies that attempt to optimize. In other words, the concept of an optimizing strategy needs to be distinguished from the concept of an optimal outcome. In the real world, there is no guarantee that optimization will result in the optimal outcome. One reason is that optimization models are built on simplifying assumptions that may or may not actually hold. An example of a nonoptimizing strategy that performs well in the repeated prisoner's dilemma is ‘tit for tat,’ a simple heuristic that cooperates on the first move and thereafter relies on imitation, cooperating if the partner cooperated and defecting if the partner defected on the previous move. In the finitely repeated prisoner's dilemma, two tit-for-tat players can make more money than two rational players (who reason by ‘backward induction’ and therefore always defect), although tit for tat only requires remembering the partner's last move and does not involve optimization.
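
The tit-for-tat heuristic mentioned above is simple enough to state directly. The payoff values and the number of rounds below are standard illustrative choices, not taken from the text.

```python
# Sketch: tit for tat in a repeated prisoner's dilemma (cooperate first, then imitate).

def tit_for_tat(own_history, partner_history):
    return "C" if not partner_history else partner_history[-1]

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
        score_a += pay_a
        score_b += pay_b
    return score_a, score_b

always_defect = lambda own, partner: "D"   # the backward-induction ('rational') strategy
print(play(tit_for_tat, tit_for_tat))      # (30, 30): mutual cooperation
print(play(always_defect, always_defect))  # (10, 10): mutual defection earns less
```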

URL: https://www.sciencedirect.com/science/article/pii/B0080430767016120

Rationality in Design

Peter Kroes, ... Louis Bucciarelli, in Philosophy of Technology and Engineering Sciences, 2009

5.1 Rational-choice theory

It was already mentioned that engineering-design problems are overwhelmingly end-focused, since the basic goal is to arrive at the blueprint or prototype of an artifact that meets certain requirements. This matches the instrumental conception of rationality. The articulation of theories of instrumental rationality has taken at least two forms. One is the theory of means-ends reasoning, the other, perhaps the better known one, is the theory of rational choice, which will be discussed first. A discussion of means-ends reasoning and a comparison of the two approaches will follow this discussion.

The theory of rational choice has mainly been developed as a core theory for the science of economics, but it also finds wide application in areas like operations research, risk analysis, organizational theory, and so forth. Apart from rational-choice theory, the name expected-utility theory is also much in use. This theory models the problem of deciding upon a course of action as a problem of choosing the best among a set of given possible courses of action, or options for choice. Where these options come from is not something that the theory of rational choice has anything to say about. To be applicable, the theory simply requires a set of options to be defined. In order to be able to choose the best option, the consequences or outcomes of each option have to be listed and valued. The preference order or preference measure on the set of possible outcomes exhausts the articulation of the decision maker's ends or desires; more preferred outcomes can be taken to be ‘closer’ to his or her ends.

Once a list of all possible outcomes is available, what determines what is the best option to choose, and therefore the rational choice, is how these possible outcomes are evaluated relative to one another. What is minimally required is a preference order of all possible outcomes. In order for a preference order of outcomes to exist, the binary relation of comparison between two outcomes — if I had to choose between two particular outcomes, which one would I prefer — must be complete and transitive. Transitivity is the property that when outcome a is judged to be superior to outcome b and outcome b superior to outcome c, then in a direct comparison of outcomes a and c, outcome a is considered better than or preferable to outcome c. Completeness means that for any two outcomes it must be clear which of the two is better or whether they are perhaps equally good. The latter possibility is called indifference, which has to be sharply distinguished from the case where two outcomes are declared incomparable. With indifference the outcomes are considered equally good or bad and accordingly it is considered irrelevant which of the two options is chosen, and how this choice is made, as long as some option is chosen. In the case of incomparability, a person feels it does matter which of the two options is chosen but is, perhaps only for the time being, unable to fix his or her preference.
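
For a finite set of outcomes, completeness and transitivity can be checked mechanically. The sketch below uses a hypothetical weak-preference relation encoded as a set of ordered pairs; the outcomes and the relation are made up for illustration.

```python
# Sketch: checking completeness and transitivity of a weak-preference relation R,
# where "(a, b) in R" means outcome a is at least as good as outcome b. Hypothetical data.

from itertools import product

outcomes = ["a", "b", "c"]
R = {("a", "b"), ("b", "c"), ("a", "c"),           # a preferred to b, b to c, a to c
     ("a", "a"), ("b", "b"), ("c", "c")}           # reflexive pairs

complete = all((x, y) in R or (y, x) in R for x, y in product(outcomes, repeat=2))
transitive = all(
    (x, z) in R
    for x, y, z in product(outcomes, repeat=3)
    if (x, y) in R and (y, z) in R
)
print(complete, transitive)   # True True -> R qualifies as a preference order on these outcomes
```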

A preference order of outcomes suffices to solve a problem of decision making under certainty, i.e., a problem where a choice for an option leads with certainty to one particular outcome, the situation where the option has been realized. Since options relate one-to-one to outcomes, the rational choice is to choose the option that results in the best or most preferred outcome. This will not do for decision making under uncertainty or risk, however. In general, decision making under uncertainty or risk is a situation where choosing an option can lead to several mutually exclusive outcomes and the decision maker cannot know beforehand which of these possible outcomes will in fact be the result of his or her choice. The distinction between the two forms is that in a case of decision making under risk the probabilities of the realization of the various possible outcomes can be assumed to be known or given, whereas with decision making under uncertainty this is not so. The approach to decision making under risk is considered to be the major contribution of expected-utility theory.

Prima facie, engineering design seems not to be the place for its application. The design options that engineers must choose between do not, when in fact chosen, lead to a particular design only in a percentage of all cases. However, a particular design, once manufactured, may well perform as intended only in a percentage of all cases. This is all the more likely while still in the early phases of a design task, when only prototypes are available. Thus, there will be opportunity for the application of models for decision making under risk in the early phases of design, in order to estimate the relative worth of further developing suggestions for possible design solutions, given that it is uncertain whether they will indeed lead to feasible solutions within a given time and with limited resources to spend. Additionally, there will be opportunity for its application in assessing various design solutions with respect to possible future failures, which can never be ruled out completely.

What are the main criticisms of utility theory?

From its earliest days, expected utility theory met several criticisms. Some were based on a priori arguments that its underlying assumptions were unreasonable, some were based on experimental or empirical evidence that behavior did not conform to its predictions, and some combined the two lines of criticism.

What are the advantages of utility in economics?

Economists use utility functions to better understand consumer behavior, as well as to determine how well goods and services provide satisfaction to consumers. Utility functions can also help analysts determine how to distribute goods and services to consumers in a way that maximizes total utility.

What is the purpose of utility theory?

In economics, utility theory tries to explain the behavior of individual consumers in an economy. Utility theory argues that each person, given a list of options, can rank those options in a precise order of preference. Each person's preferences differ from others', but they are taken as set, not changing over time.

What are the weaknesses of cardinal utility theory?

Utility is not measurable in real terms, as it is difficult to assign a numerical value to the level of satisfaction one gets. Marginal utility is not additive. The theory makes unrealistic assumptions which do not usually apply in reality.