A dilemma (Greek δί-λημμα, "double proposition") is a problem offering two solutions or possibilities, neither of which is acceptable. The two options are often described as the horns of the dilemma, neither of which is comfortable. Among the best-known dilemmas are the Euthyphro dilemma from Plato and the prisoner's dilemma. A problem offering three such solutions or possibilities is called a trilemma.
The dilemma is sometimes used as a rhetorical device, in the form "you must accept either A, or B;" here A and B would be propositions, each leading to some further conclusion. Applied in this way, it may be a fallacy or a false dichotomy.
In formal logic, the definition of a dilemma differs markedly from everyday usage. Two options are still present, but choosing between them is immaterial because they both imply the same conclusion. Symbolically expressed thus:
<math>A \vee B, A \Rightarrow C, B \Rightarrow C \vdash C</math>
This can be translated informally as "one (or both) of A or B is known to be true, but they both imply C, so regardless of the truth values of A and B we can conclude C."
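This inference pattern (sometimes called the simple constructive dilemma) can be verified mechanically by enumerating all truth values; a minimal sketch in Python:

```python
from itertools import product

def implies(p, q):
    # Material implication: p => q is false only when p is true and q is false.
    return (not p) or q

# Check that whenever the premises (A or B), (A => C), (B => C) all hold,
# the conclusion C also holds, for every assignment of truth values.
valid = all(
    c
    for a, b, c in product([True, False], repeat=3)
    if (a or b) and implies(a, c) and implies(b, c)
)
print(valid)  # True: the dilemma is a valid inference
```

Every assignment satisfying the three premises forces C to be true, so the inference holds regardless of which of A or B obtains.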
Horned dilemmas can present more than two choices, and the number of horns can figure in their alternative names: a two-pronged (two-horned) dilemma is the dilemma proper, a three-pronged (three-horned) one is a trilemma, and so on.
The Euthyphro dilemma is found in Plato's dialogue Euthyphro, in which Socrates asks Euthyphro: "Is the pious (τὸ ὅσιον) loved by the gods because it is pious, or is it pious because it is loved by the gods" (10a).
In monotheistic terms, this is usually transformed into: "Is what is moral commanded by God because it is moral, or is it moral because it is commanded by God?" The dilemma has continued to present a problem for theists since Plato presented it, and is still the object of theological and philosophical debate.
In game theory, the prisoner's dilemma (sometimes abbreviated PD) is a type of non-zero-sum game in which two players may each "cooperate" with or "defect" (that is, betray) the other player. In this game, as in all game theory, the only concern of each individual player ("prisoner") is maximizing his/her own payoff, without any concern for the other player's payoff. The unique equilibrium for this game is a Pareto-suboptimal solution—that is, rational choice leads the two players to both play defect even though each player's individual reward would be greater if they both played cooperate. In equilibrium, each prisoner chooses to defect even though both would be better off by cooperating, hence the dilemma.
In the classic form of this game, cooperating is strictly dominated by defecting, so that the only possible equilibrium for the game is for all players to defect. In simpler terms, no matter what the other player does, one player will always gain a greater payoff by playing defect. Since in any situation, playing defect is more beneficial than cooperating, all rational players will play defect, all things being equal.
In the iterated prisoner's dilemma, the game is played repeatedly. Thus, each player has an opportunity to "punish" the other player for previous non-cooperative play. Cooperation may then arise as an equilibrium outcome. The incentive to defect is overcome by the threat of punishment, leading to the possibility of a cooperative outcome. So, if the game is infinitely repeated, cooperation may be a subgame perfect Nash equilibrium, although both players defecting always remains an equilibrium and there are many other equilibrium outcomes.
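How the threat of punishment can sustain cooperation is easy to illustrate with a small simulation. The payoffs below are the standard years-in-jail numbers (lower is better), and the strategies are the usual tit-for-tat and always-defect from the game-theory literature, not anything specific to this article:

```python
# Payoff: years in jail for (my_move, their_move); lower is better.
# Moves: 'C' = stay silent (cooperate), 'D' = betray (defect).
JAIL = {('C', 'C'): 0.5, ('C', 'D'): 10, ('D', 'C'): 0, ('D', 'D'): 5}

def play(strategy_a, strategy_b, rounds):
    """Play the iterated game, returning total jail years for each player."""
    history_a, history_b = [], []
    total_a = total_b = 0.0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the opponent's past moves
        move_b = strategy_b(history_a)
        total_a += JAIL[(move_a, move_b)]
        total_b += JAIL[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return total_a, total_b

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move
    # (i.e., "punish" a defection with one defection).
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

print(play(tit_for_tat, tit_for_tat, 10))      # (5.0, 5.0): sustained cooperation
print(play(always_defect, always_defect, 10))  # (50.0, 50.0): mutual defection
```

Two tit-for-tat players cooperate throughout and serve far less total jail time than two unconditional defectors, which is the mechanism by which repetition can make cooperation an equilibrium.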
The Prisoner's Dilemma was originally framed by Merrill Flood and Melvin Dresher working at RAND in 1950. Albert W. Tucker formalized the game with prison sentence payoffs and gave it the "Prisoner's Dilemma" name (Poundstone, 1992).
The classical prisoner's dilemma (PD) is as follows: Two suspects are arrested by the police, who have insufficient evidence for a conviction. Having separated the prisoners, the police offer each the same deal. If one testifies against (betrays) the other and the other stays silent, the betrayer goes free and the silent accomplice receives the full ten-year sentence. If both stay silent, each is sentenced to only six months in jail on a minor charge. If each betrays the other, each receives a five-year sentence.
The dilemma can be summarized thus:
| | Prisoner B stays silent | Prisoner B betrays |
|---|---|---|
| **Prisoner A stays silent** | Each serves six months | Prisoner A serves ten years; Prisoner B goes free |
| **Prisoner A betrays** | Prisoner A goes free; Prisoner B serves ten years | Each serves five years |
The dilemma arises when one assumes that both prisoners only care about minimizing their own jail terms. Each prisoner has two and only two options: Either to cooperate with his accomplice and stay quiet, or to defect from their implied pact and betray his accomplice in return for a lighter sentence. The outcome of each choice depends on the choice of the accomplice, but each prisoner must choose without knowing what his accomplice has chosen.
In deciding what to do in strategic situations, it is normally important to predict what others will do. This is not the case here. If one prisoner knows the other prisoner would stay silent, the first's best move is to betray, as he then walks free instead of receiving the minor sentence. If one knew the other prisoner would betray, the best move is still to betray, as one would receive a lesser sentence than by silence. Betraying is a dominant strategy. The other prisoner reasons similarly, and therefore also chooses to betray. Yet, by both defecting they get a lower payoff than they would get by staying silent. So rational, self-interested play results in each prisoner being worse off than if they had stayed silent. In more technical language, this demonstrates very elegantly that in a non-zero sum game a Nash Equilibrium need not be a Pareto optimum.
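The dominance argument can be checked directly against the sentence lengths in the table (a sketch; sentences are in years, with six months written as 0.5):

```python
# Years served by prisoner A for each (A's move, B's move); lower is better.
YEARS_A = {
    ('silent', 'silent'): 0.5,  # each serves six months
    ('silent', 'betray'): 10,   # A serves ten years, B goes free
    ('betray', 'silent'): 0,    # A goes free
    ('betray', 'betray'): 5,    # each serves five years
}

# Whatever B does, betraying gives A a strictly shorter sentence,
# so "betray" strictly dominates "stay silent".
for b_move in ('silent', 'betray'):
    assert YEARS_A[('betray', b_move)] < YEARS_A[('silent', b_move)]

# Yet mutual betrayal (5 years each) is worse for both
# than mutual silence (6 months each): the Nash equilibrium
# is not Pareto optimal.
assert YEARS_A[('betray', 'betray')] > YEARS_A[('silent', 'silent')]
print("betray dominates, but mutual betrayal is Pareto-inferior")
```

The two assertions are exactly the two halves of the dilemma: defection is individually optimal against either opponent move, yet mutual defection is collectively worse than mutual cooperation.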
Note that the paradox of the situation lies in the fact that the prisoners are not defecting in the hope that the other will not. Even when both know the other to be rational and selfish, both will play defect. Defect is what they will play no matter what, even though they know full well that the other player is playing defect as well and that both would be better off with a different result.
The "Stay Silent" and "Betray" strategies are also known as "don't confess" and "confess," or the more standard "cooperate" and "defect."
One experiment based on the simple dilemma found that approximately 40 percent of participants cooperated (that is, stayed silent).
The phrase hedgehog's dilemma refers to the notion that the closer two beings come to each other, the more likely they are to hurt one another; however if they remain apart, they will each feel the pain of loneliness. This comes from the idea that hedgehogs, with sharp spines on their backs, will hurt each other if they get too close. This is analogous to a relationship between two human beings. If two people come to care about and trust each other, something bad that happens to one of them will hurt the other as well, and dishonesty between the two could cause even greater problems.
The concept originates from Arthur Schopenhauer's Parerga und Paralipomena, Volume II, Chapter XXXI, Section 396. In his English translation, E.F.J. Payne translates the German "Stachelschweine" as "porcupines." Schopenhauer's parable describes a number of hedgehogs who need to huddle together for warmth and who struggle to find the distance where they are warm without hurting one another. The hedgehogs have to sacrifice warmth for comfort. The conclusion that Schopenhauer draws is that if someone has enough internal warmth, he or she can avoid society and the giving and receiving of irritation that results from social interaction.
It is also important to note that hedgehogs do not actually hurt each other when they get close; human beings tend to keep themselves more "on guard" in relationships and are more likely to sting one another in the way that a relaxed hedgehog would if spooked. When living in groups, hedgehogs often sleep close to each other.
In the platonia dilemma introduced in Douglas Hofstadter's book Metamagical Themas, an eccentric trillionaire gathers 20 people together, and tells them that if one and only one of them sends him a telegram (reverse charges) by noon the next day, that person will receive a billion dollars. If he receives more than one telegram, or none at all, no one will get any money, and cooperation between players is forbidden. In this situation, the superrational thing to do is to send a telegram with probability 1/20.
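Under that mixed strategy, the chance that exactly one of the 20 players sends a telegram is 20 · (1/20) · (19/20)^19 ≈ 0.377. A quick check of the exact binomial value alongside a Monte Carlo estimate (the player count and probability are simply the numbers from Hofstadter's setup):

```python
import random

N = 20        # players
P = 1.0 / N   # each sends a telegram with probability 1/20

# Exact probability that exactly one player sends a telegram
# (binomial: N choose 1, one success, N-1 failures).
exact = N * P * (1 - P) ** (N - 1)

# Monte Carlo estimate of the same quantity.
random.seed(0)
trials = 100_000
wins = sum(
    1
    for _ in range(trials)
    if sum(random.random() < P for _ in range(N)) == 1
)
print(f"exact: {exact:.3f}, simulated: {wins / trials:.3f}")
```

So even if all 20 players behave superrationally, the prize is awarded only about 38 percent of the time; the strategy maximizes each player's expected payoff under the symmetry assumption, not the chance of success.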
A similar game, referred to as a "Luring Lottery," was actually played by the editors of Scientific American in the 1980s. To enter the contest once, readers had to send in a postcard with the number "1" written on it. They were also explicitly permitted to submit as many entries as they wished by sending in a single postcard bearing the number of entries they wished to submit. The prize was one million dollars divided by the total number of entries received, to be awarded to the submitter of a randomly chosen entry. Thus, a reader who submitted a large number of entries increased his or her chances of winning but reduced the maximum possible value of the prize.
According to the magazine, the rational thing was for each contestant to roll a simulated die with the number of sides equal to the number of expected responders (about 5 percent of the readership), and then send "1" if the player rolls "1." If all contestants had followed this strategy, it is likely that the magazine would have received a single postcard, with a "1," and would have had to pay a million dollars to the sender of that postcard. Reputedly the publisher and owners were very concerned about betting the company on a game.
Although the magazine had previously discussed the concept of superrationality from which the above-mentioned algorithm can be deduced, many of the contestants submitted entries consisting of an astronomically large number (including several who entered a googolplex). Some took this game further by filling their postcards with mathematical expressions designed to evaluate to the largest possible number in the limited space allowed. The magazine was unable to tell who won, and the monetary value of the prize would have been a minuscule fraction of a cent.
In international relations, the security dilemma refers to a situation in which two or more states are drawn into conflict, possibly even war, over security concerns, even though none of the states actually desires conflict. The dilemma is that measures a state takes to increase its own security, such as building up arms, tend to decrease the security of other states, provoking countermeasures that can leave every state, including the first, less secure.
A frequently cited example of the security dilemma is the beginning of World War I. Supporters of this viewpoint argue that the major European powers felt forced to go to war by feelings of insecurity over the alliances of their neighbors, despite not actually desiring the war. Furthermore, the time necessary to mobilize large amounts of troops for defense led some Great Powers (such as Russia) to adopt a particularly accelerated mobilization timetable, which in turn put pressure on other states to mobilize early as well. However, other scholars dispute this interpretation of the origins of the war, contending that some of the states involved really did want the conflict.
The security dilemma is a popular concept among cognitive theorists of international relations, who regard war as essentially arising from failures of communication. Functionalist theorists hold that the key to avoiding war is the avoidance of miscommunication through proper signaling.
The notion of the security dilemma is attributed to John H. Herz, who introduced it in the second volume of the journal World Politics. The notion is often used in realist theories of international relations, which suggest that war is a regular and often inherent condition of international life.
Stagflation, a portmanteau of the words stagnation and inflation, is a term in general use within modern macroeconomics describing a period of out-of-control price inflation combined with slow-to-no output growth, rising unemployment, and eventually recession. The term is generally attributed to United Kingdom Chancellor of the Exchequer Iain Macleod, in a speech to Parliament in 1965. "Stag" is drawn from the first syllable of "stagnation," a reference to a sluggish economy, while "flation" is drawn from the second and third syllables of "inflation," a reference to an upward spiral in consumer prices. Economists associate the two phenomena because, as output stagnates, fixed costs are spread over a smaller output, driving unit costs and thus prices upward.
Stagflation is a problem because the two principal tools for directing the economy, fiscal policy and monetary policy, offer only trade-offs between growth and inflation. A central bank can either slow growth to reduce inflationary pressures, or it can allow general increases in price to occur in order to stimulate growth. Stagflation creates a dilemma in that efforts to correct stagnation only worsen inflation, and vice versa. The dilemma in monetary policy is instructive. The central bank can make one of two choices, each with negative outcomes. First, the bank can choose to stimulate the economy and create jobs by increasing the money supply (by purchasing government debt), but this risks boosting the pace of inflation. The other choice is to pursue a tight monetary policy (reducing government debt purchases in order to raise interest rates) to reduce inflation, at the risk of higher unemployment and slower output growth.
The problem for fiscal policy is far less clear. Both revenues and expenditures tend to rise with inflation, all else equal, while they fall as growth slows. Unless there is a differential impact on either revenues or spending due to stagflation, the impact of stagflation on the budget balance is not altogether clear. As a policy matter, there is one school of thought that the best policy mix is one in which government stimulates growth through increased spending or reduced taxes while the central bank fights inflation through higher interest rates. In reality, coordinating fiscal and monetary policy is not an easy task.
In Zen and the Art of Motorcycle Maintenance, Robert Pirsig outlines possible responses to a dilemma. The classical responses are to either choose one of the two horns and refute the other or alternatively to refute both horns by showing that there are additional choices. Pirsig then mentions three illogical or rhetorical responses. One can "throw sand in the bull's eyes" by, for example, questioning the competence of the questioner. One can "sing the bull to sleep" by, for example, stating that the answer to the question is beyond one's own humble powers and asking the questioner for help. Finally one can "refuse to enter the arena" by, for example, stating that the question is unanswerable.
A trilemma is a difficult choice from three alternatives, each of which is (or appears) unacceptable or unfavorable.
There are two logically equivalent ways in which to express a trilemma: It can be expressed as a choice among three unfavorable options, one of which must be chosen, or as a choice among three favorable options, only two of which are possible at the same time.
The term derives from the much older term dilemma, a choice between two difficult or unfavorable options.
A classic example is the trilemma concerning the problem of evil, traditionally attributed to Epicurus:
1. If God is willing but unable to prevent evil, he is not omnipotent.
2. If God is able but not willing to prevent evil, he is not good.
3. If God is willing and able to prevent evil, then why is there evil?
One of the best known trilemmas is one popularized by C. S. Lewis. It proceeds from the assumption that Jesus claimed, either implicitly or explicitly, to be God. Therefore one of the following must be true:
1. Lunatic: Jesus was not God, but he mistakenly believed that he was.
2. Liar: Jesus was not God, and he knew it, but he claimed to be anyway.
3. Lord: Jesus is God.
In economics, the trilemma (or "impossible trinity") is a term used in discussing the problems associated with creating a stable international financial system. It refers to the trade-offs among the following three goals: A fixed exchange rate, national independence in monetary policy, and capital mobility. According to the Mundell-Fleming model, a small, open economy cannot achieve all three of these policy goals at the same time: in pursuing any two of these goals, a nation must forgo the third.
Steven Pinker noted another social trilemma in his book The Blank Slate: A society cannot be simultaneously fair, free, and equal. If it is fair, individuals who work harder will accumulate more wealth; if it is free, parents will leave the bulk of their inheritance to their children; but then it will not be equal, as people will begin life with different fortunes.
Arthur C. Clarke cited a management trilemma among a product being done quickly, cheaply, and of high quality. In the software industry, this means that one can pick any two of: Fastest time to market, highest software quality (fewest defects), and lowest cost (headcount). This is the basis of the popular project-management aphorism, "Quick, Cheap, Good: Pick two."
In the theory of knowledge, the Münchhausen trilemma is a philosophical term coined to stress the impossibility of proving any certain truth, even in the fields of logic and mathematics. Its name goes back to a logical proof by the German philosopher Hans Albert. The proof runs as follows: All of the only three possible attempts to obtain a certain justification must fail:
1. An infinite regress, in which each justification requires a further justification, and which can never be completed;
2. A circular argument, in which a claim is ultimately justified by itself;
3. A dogmatic termination, in which the chain of justification is simply broken off at some point declared self-evident, abandoning the demand for justification.
The “Trilemma of the Earth” (or “3E Trilemma”) is a term used by scientists working on energy and environment protection. 3E Trilemma stands for Economy-Energy-Environment interaction.
To stimulate economic development (Economy), energy expenditure must increase (Energy); however, this raises the environmental problem (Environment) of greater emissions of pollutant gases.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license, which can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.
Note: Some restrictions may apply to use of individual images which are separately licensed.