Induction (philosophy) - New World Encyclopedia

From New World Encyclopedia
{{dablink|Inductive reasoning is the complement of [[deductive]] reasoning. For other article subjects named induction, see [[Induction]].}}
'''Induction''' is a specific form of reasoning in which the premises of an argument support a conclusion, but do not ensure it. The topic of induction is important in [[analytic philosophy]] for several reasons and is discussed in several philosophical sub-fields, including [[logic]], [[epistemology]], and [[philosophy of science]]. However, the most important philosophical interest in induction lies in the problem of whether induction can be "justified."  This problem is often called "the problem of induction" and was discovered by the Scottish philosopher [[David Hume]] (1711-1776). 
{{toc}}
Therefore, it would be worthwhile to define what philosophers mean by "induction" and to distinguish it from other forms of reasoning. It would also be helpful to present Hume’s problem of induction, Nelson Goodman’s (1906-1998) new riddle of induction, and statistical as well as probabilistic inference as potential solutions to these problems.
  
==Enumerative induction==
The sort of induction that philosophers are interested in is known as '''enumerative induction.'''  Enumerative induction (or simply ''induction'') comes in two types, "strong" induction and "weak" induction.
  
===Strong induction===
Strong induction has the following form:
A<sub>1</sub> is a B<sub>1</sub>.<br/>
A<sub>2</sub> is a B<sub>2</sub>.<br/>
…<br/>
A<sub>n</sub> is a B<sub>n</sub>.<br/>
Therefore, all As are Bs.
An example of strong induction is that ''all'' ravens are black because each raven that has ever been observed has been black.
  
 
===Weak induction===
 
  
Notice, however, that one need not make so strong an inference, because induction comes in a second type: weak induction. Weak induction has the following form:
A<sub>1</sub> is a B<sub>1</sub>.<br/>
A<sub>2</sub> is a B<sub>2</sub>.<br/>
…<br/>
A<sub>n</sub> is a B<sub>n</sub>.<br/>
Therefore, the next A will be a B.
  
An example of weak induction is that because every raven that has ever been observed has been black, the ''next'' observed raven will be black.
===Mathematical induction===
  
Enumerative induction should not be confused with '''mathematical induction.'''  While enumerative induction concerns matters of empirical fact, mathematical induction concerns matters of mathematical fact.  Specifically, mathematical induction is what [[mathematician]]s use to make claims about an infinite set of mathematical objects.  Mathematical induction is different from enumerative induction because mathematical induction guarantees the truth of its conclusions since it rests on what is called an “inductive definition” (sometimes called a “recursive definition”).
  
Inductive definitions define sets (usually infinite sets) of mathematical objects.  They consist of a ''base clause'' specifying the basic elements of the set, one or more ''inductive clauses'' specifying how additional elements are generated from existing elements, and a ''final clause'' stipulating that all of the elements in the set are either basic or in the set because of one or more applications of the inductive clause or clauses (Barwise and Etchemendy 2000, 567). For example, the set of natural numbers (N) can be inductively defined as follows:
  
1. 0 is an element in N.<br/>
2. For any element x, if x is an element in N, then (x + 1) is an element in N.<br/>
3. Nothing else is an element in N unless it satisfies clause (1) or (2).
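The three clauses can be read as a recipe for generating N. A minimal sketch in Python (the function name and the cutoff are ours, added only so the example terminates; N itself is infinite):

```python
def natural_numbers(limit):
    """Generate elements of N from the inductive definition:
    clause (1) supplies 0; clause (2) supplies x + 1 for any x already in N."""
    x = 0              # base clause: 0 is an element in N
    while x < limit:   # cutoff so the generator stops; N itself has no last element
        yield x
        x = x + 1      # inductive clause: if x is in N, then x + 1 is in N

print(list(natural_numbers(5)))  # → [0, 1, 2, 3, 4]
```

The final clause corresponds to the fact that the generator yields nothing beyond what the two clauses produce.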
  
Thus, in this example, (1) is the base clause, (2) is the inductive clause, and (3) is the final clause.  Inductive definitions are important here because, as mentioned before, mathematical inductions are infallible precisely because they rest on inductive definitions.  Consider the following mathematical induction, which proves that the sum of the numbers between 0 and a natural number n (S<sub>n</sub>) is such that S<sub>n</sub> = ½n(n + 1), a result first proven by the mathematician [[Carl Friedrich Gauss]] (1777-1855):
<blockquote>
First, we know that S<sub>0</sub> = 0 = ½(0)(0 + 1).  Now assume that S<sub>m</sub> = ½m(m + 1) for some natural number m. Since S<sub>m+1</sub> = S<sub>m</sub> + (m + 1), it follows that S<sub>m+1</sub> = ½m(m + 1) + (m + 1). Factoring out (m + 1) gives S<sub>m+1</sub> = (m + 1)(½m + 1) = ½(m + 1)(m + 2) = ½(m + 1)((m + 1) + 1). The first subproof shows that 0 satisfies S<sub>n</sub> = ½n(n + 1), and the second subproof shows that whenever a natural number satisfies S<sub>n</sub> = ½n(n + 1), the natural number consecutive to it does as well; so by the inductive definition of N, every element of N satisfies S<sub>n</sub> = ½n(n + 1). Thus, S<sub>n</sub> = ½n(n + 1) holds for all natural numbers.
</blockquote>
Notice that the above mathematical induction is infallible because it rests on the inductive definition of N. However, unlike mathematical inductions, enumerative inductions are not infallible because they do not rest on inductive definitions.
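The contrast can be made concrete: a program can check Gauss's formula for any finite run of cases, but such a check is itself only enumerative; only the inductive proof covers all of N at once. A short Python sketch:

```python
def gauss_sum(n):
    """Closed form proved by mathematical induction: S_n = n(n + 1)/2."""
    return n * (n + 1) // 2

# Checking finitely many instances is an enumerative check, not a proof for all of N.
for n in range(1000):
    assert sum(range(n + 1)) == gauss_sum(n)
print("formula holds for n = 0, ..., 999")
```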
  
==Non-inductive reasoning==
Induction contrasts with two other important forms of reasoning: '''[[Deduction]]''' and '''abduction'''.  
===Deduction===
  
''Deduction'' is a form of reasoning whereby the premises of the argument guarantee the conclusion.  Or, more precisely, in a deductive argument, if the premises are true, then the conclusion must be true.  There are several forms of deduction, but the most basic one is ''modus ponens,'' which has the following form:
If A, then B<br/>
A<br/>
Therefore, B<br/>
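The validity of modus ponens can be checked mechanically by exhausting truth assignments: no assignment makes both premises true and the conclusion false. A sketch in Python (the variable names are ours):

```python
from itertools import product

# Premises: "If A, then B" and "A"; conclusion: "B".
# An argument form is deductively valid when no truth assignment makes
# every premise true while the conclusion is false.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if ((not a) or b) and a   # both premises true
    and not b                 # conclusion false
]
print(counterexamples)  # → []: modus ponens has no counterexample
```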
Deductions are unique because they guarantee the truth of their conclusions if the premises are true. Consider the following example of a deductive argument:  
  
Either Tim runs track or he plays tennis.<br/> 
Tim does not play tennis.<br/> 
Therefore, Tim runs track.<br/> 
  
There is no way that the conclusion of this argument can be false if its premises are true. Now consider the following inductive argument:  
Every raven that has ever been observed has been black.<br/>
Therefore, all ravens are black.<br/>
  
This argument is deductively invalid because its premises can be true while its conclusion is false; for instance, some ravens could be brown even though no one has observed them yet. Thus a defining feature of inductive arguments is that they are deductively invalid.
===Abduction===
  
Abduction is a form of reasoning whereby an antecedent is inferred from its consequent. The form of abduction is below:
''If A, then B''<br/>
''B''<br/>
''Therefore, A''<br/>
Notice that abduction is deductively invalid as well, because the truth of the premises in an abductive argument does not guarantee the truth of its conclusion. For example, even if all dogs have legs, seeing legs does not imply that they belong to a dog.
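Abduction's deductive invalidity can be confirmed by exhausting truth assignments: one assignment makes both premises true while the conclusion is false. A sketch in Python (the variable names are ours):

```python
from itertools import product

# Premises: "If A, then B" and "B"; conclusion: "A" (affirming the consequent).
# A single truth assignment with true premises and a false conclusion
# suffices to show deductive invalidity.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if ((not a) or b) and b   # both premises true
    and not a                 # conclusion false
]
print(counterexamples)  # → [(False, True)]: legs without a dog, for instance
```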
Abduction is also distinct from induction, although both forms of reasoning are used amply in everyday as well as scientific reasoning.  While both forms of reasoning do not guarantee the truth of their conclusions, scientists since [[Isaac Newton]] (1643-1727) have believed that induction is a stronger form of reasoning than abduction.  
==The problem of induction==
[[David Hume]] questioned whether induction is a strong form of reasoning in his classic text, ''A Treatise of Human Nature''. There, Hume argues that induction is an unjustified form of reasoning, for the following reason.  One believes inductions are good because nature is uniform in some deep respect. For instance, one induces that all ravens are black from a small sample of black ravens because one believes that there is a regularity of blackness among ravens, which is a particular uniformity in nature.  However, why suppose there is a regularity of blackness among ravens?  What justifies this assumption?  Hume claims that one could know that nature is uniform either deductively or inductively.  However, one admittedly cannot deduce this assumption, and an attempt to induce the assumption only makes the justification of induction circular. Thus, induction is an unjustifiable form of reasoning.  This is Hume's problem of induction.
Instead of becoming a skeptic about induction, Hume sought to explain how people actually make inductions, and considered this explanation as good a justification of induction as could be made. Hume claimed that one makes inductions because of ''habit'': habit explains why one induces that all ravens are black after seeing nothing but black ravens beforehand.
  
==The new riddle of induction==
  
Nelson Goodman (1955) questioned Hume’s solution to the problem of induction in his classic text ''Fact, Fiction, and Forecast''.  Although Goodman thought Hume was an extraordinary philosopher, he believed that Hume made one crucial mistake in identifying habit as what explains induction.  The mistake is that people readily develop habits to make some inductions but not others, even though they have been exposed to observations that would support both.  Goodman develops the following ''grue'' example to demonstrate his point:
<blockquote>
Suppose that all observed emeralds have been green.  Then we would readily induce that the next observed emerald would be green.  But why green?  Suppose "grue" is a term that applies to all observed green things or unobserved blue things.  Then all observed emeralds have been grue as well.  Yet none of us would induce that the next observed emerald would be blue even though there would be equivalent evidence for this induction.
</blockquote>
Goodman anticipates the objection that since "grue" is defined in terms of green and blue, green and blue are ''prior'' and more ''fundamental'' categories than grue.  However, Goodman responds that this priority is an illusion, because green and blue can likewise be defined in terms of grue and another term, "bleen," where something is bleen just in case it is observed and blue or unobserved and green.  Then "green" can be defined as something observed and grue or unobserved and bleen, while "blue" can be defined as something observed and bleen or unobserved and grue.  Thus the new riddle of induction is not about what justifies induction, but rather about why people make the inductions they do, given that they have ''equal'' evidence for several incompatible inductions.
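Goodman's interdefinability point can be checked mechanically. A Python sketch (the predicate names are ours), simplifying so that every object is either green or blue:

```python
# Model each object by whether it is observed and whether it is green
# (simplifying so that "not green" means blue).
def grue(observed, green):
    # grue: observed and green, or unobserved and blue
    return (observed and green) or (not observed and not green)

def bleen(observed, green):
    # bleen: observed and blue, or unobserved and green
    return (observed and not green) or (not observed and green)

def green_from_grue_bleen(observed, green):
    # Goodman's reversal: green = observed and grue, or unobserved and bleen
    return (observed and grue(observed, green)) or (not observed and bleen(observed, green))

# Over every case the reconstructed "green" agrees with the original,
# so neither pair of predicates is more basic than the other.
for observed in (True, False):
    for green in (True, False):
        assert green_from_grue_bleen(observed, green) == green
print("green is definable from grue and bleen")
```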
  
Goodman’s solution to the new riddle of induction is that people make inductions that involve familiar terms like "green," instead of ones that involve unfamiliar terms like "grue," because familiar terms are more ''entrenched'' than unfamiliar terms, which just means that familiar terms have been ''used'' in more inductions in the past.  Thus statements that incorporate entrenched terms are “projectible” and appropriate for use in inductive arguments.
Notice that Goodman’s solution is somewhat unsatisfying. While he is correct that some terms are more entrenched than others, he provides no explanation for why unbalanced entrenchment exists. In order to finish Goodman’s project, the philosopher [[Willard Van Orman Quine]] (1908-2000) theorizes that entrenched terms correspond to ''natural kinds''.
Quine (1969) demonstrates his point with the help of a familiar puzzle from the philosopher [[Carl Hempel]] (1905-1997), known as "the ravens paradox:" 
<blockquote>
Suppose that observing several black ravens is evidence for the induction that all ravens are black.  Then, since the contrapositive of "All ravens are black" is "All non-black things are non-ravens," observing non-black things such as green leaves, brown basketballs, and white baseballs is also evidence for the induction that all ravens are black. But how can this be?
</blockquote>
Quine (1969) argues that observing non-black things is not evidence for the induction that all ravens are black because non-black things do not form a natural kind and projectible terms only refer to natural kinds (e.g. "ravens" refers to ravens).  Thus terms are projectible (and become entrenched) because they refer to natural kinds.
Even though this extended solution to the new riddle of induction sounds plausible, several of the terms that we use in natural language do not correspond to natural kinds, yet we still use them in inductions. A typical example from the philosophy of language is the term "game," first used by [[Ludwig Wittgenstein]] (1889-1951) to demonstrate what he called “family resemblances.”
Look at how competent English speakers use the term "game."  Examples of games are [[Monopoly]], card games, the Olympic games, war games, tic-tac-toe, and so forth. Now, what do all of these games have in common?  Wittgenstein would say “nothing,” or, if there is something they all have in common, that feature is not what ''makes'' them games.  So games resemble each other although they do not form a kind.  Of course, even though games are not natural kinds, people make inductions with the term "game."  For example, since most Olympic games have been held in industrialized cities in the recent past, most Olympic games in the near future should occur in industrialized cities.
Given the difficulty of solving the new riddle of induction, many philosophers have teamed up with mathematicians to investigate mathematical methods for handling induction.  A prime method for handling induction mathematically is statistical inference, which is based on probabilistic reasoning.
==Statistical inference==
Instead of asking whether all ravens are black because all observed ravens have been black, statisticians ask how probable it is that all ravens are black, given that an appropriate sample of ravens has been black.  Here is an example of statistical reasoning:
<blockquote>
Suppose that the average stem length out of a sample of 13 soybean plants is 21.3 cm with a standard deviation of 1.22 cm.  Then the probability that the interval (20.6, 22.1) contains the average stem length for all soybean plants is .95 according to Student’s t distribution (Samuels and Witmer 2003, 189).
</blockquote>
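The quoted interval can be recomputed from the summary statistics alone. In the sketch below, the critical value t ≈ 2.179 for 12 degrees of freedom is taken from a standard t table; the computed endpoints essentially reproduce the interval above, with the small difference in the upper endpoint due to rounding in the source.

```python
import math

n, mean, sd = 13, 21.3, 1.22  # sample size, sample mean (cm), sample SD (cm)
t_crit = 2.179                # two-sided 95% critical value, t distribution, 12 df
se = sd / math.sqrt(n)        # standard error of the mean
margin = t_crit * se
interval = (mean - margin, mean + margin)
print(tuple(round(x, 2) for x in interval))  # → (20.56, 22.04)
```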
Despite the appeal of statistical inference, since it rests on probabilistic reasoning, it is only as good a guide to induction as probability theory is at handling inductive reasoning.
==Probabilistic inference==
  
Bayesianism is the most influential interpretation of [[probability]] theory and is an equally influential framework for handling induction.  Given new evidence, "Bayes' theorem" is used to evaluate how much the strength of a belief in a hypothesis should change.
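A single application of Bayes' theorem can be sketched as follows; the numbers are illustrative, not from the source. Take H to be "all ravens are black" and E the observation that one more sampled raven is black:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), with illustrative numbers.
prior = 0.5              # initial degree of belief in H: "all ravens are black"
p_e_given_h = 1.0        # if H is true, a sampled raven is certainly black
p_e_given_not_h = 0.9    # if H is false, a sampled raven may still likely be black
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)  # total probability of E
posterior = p_e_given_h * prior / p_e
print(round(posterior, 3))  # → 0.526: belief in H strengthens slightly
```

Repeating the update on further observations of black ravens pushes the posterior toward 1, which is the Bayesian rendering of enumerative induction.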
There is debate around what informs the original degree of belief. Objective Bayesians seek an objective value for the degree of probability of a hypothesis being correct and so do not avoid the philosophical criticisms of [[Objectivism|objectivism]]. Subjective Bayesians hold that prior probabilities represent subjective degrees of belief, but that the repeated application of Bayes' theorem leads to a high degree of agreement on the posterior probability. They therefore fail to provide an objective standard for choosing between conflicting hypotheses. The theorem can be used to produce a rational justification for a belief in some hypothesis, but at the expense of rejecting objectivism. Such a scheme cannot be used, for instance, to decide objectively between conflicting scientific paradigms.
  
Edwin Jaynes, an outspoken physicist and Bayesian, argued that "subjective" elements are present in all inference, for instance in choosing axioms for deductive inference; in choosing initial degrees of belief or "prior probabilities"; or in choosing likelihoods. He thus sought principles for assigning probabilities from qualitative knowledge. [[Maximum entropy]] &ndash; a generalization of the [[principle of indifference]] &ndash; and "transformation groups" are the two tools he produced. Both attempt to alleviate the subjectivity of probability assignment in specific situations by converting knowledge of features such as a situation's symmetry into unambiguous choices for probability distributions.
  
"Cox's theorem," which derives probability from a set of logical constraints on a system of inductive reasoning, prompts Bayesians to call their system an ''inductive logic''.  Nevertheless, how well probabilistic inference handles Hume’s original problem of induction as well as Goodman’s new riddle of induction is still a matter debated in contemporary philosophy and presumably will be for years to come.
==References==
*Barwise, Jon and John Etchemendy. 2000. ''Language, Proof and Logic''. Stanford: CSLI Publications.
*Goodman, Nelson. 1955. ''Fact, Fiction, and Forecast''. Cambridge: Harvard University Press.
*Hume, David. 2002. ''A Treatise of Human Nature'' (David F. Norton and Mary J. Norton, eds.). Oxford: Oxford University Press.
*Quine, W.V.O. 1969. ''Ontological Relativity and Other Essays''. New York: Columbia University Press.
*Samuels, Myra and Jeffrey A. Witmer. 2003. ''Statistics for the Life Sciences''. Upper Saddle River: Pearson Education.
*Wittgenstein, Ludwig. 2001. ''Philosophical Investigations'' (G.E.M. Anscombe, trans.). Oxford: Blackwell.
  
== External links ==
All links retrieved March 2, 2018.
*[http://plato.stanford.edu/entries/logic-inductive/ Inductive Logic], Stanford Encyclopedia of Philosophy.
*[http://www.iep.utm.edu/d/ded-ind.htm Deductive and Inductive Arguments], The Internet Encyclopedia of Philosophy.
===General philosophy sources===
*[http://plato.stanford.edu/ Stanford Encyclopedia of Philosophy].
*[http://www.iep.utm.edu/ The Internet Encyclopedia of Philosophy].
*[http://www.bu.edu/wcp/PaidArch.html Paideia Project Online].
*[http://www.gutenberg.org/ Project Gutenberg].
[[Category:Logic]]
[[Category:Epistemology]]
[[Category:Philosophy and religion]]
  
 
{{Credit|52729112}}
 

Latest revision as of 21:06, 2 March 2018

Induction is a specific form of reasoning in which the premises of an argument support a conclusion, but do not ensure it. The topic of induction is important in analytic philosophy for several reasons and is discussed in several philosophical sub-fields, including logic, epistemology, and philosophy of science. However, the most important philosophical interest in induction lies in the problem of whether induction can be "justified." This problem is often called "the problem of induction" and was discovered by the Scottish philosopher David Hume (1711-1776).

Therefore, it would be worthwhile to define what philosophers mean by "induction" and to distinguish it from other forms of reasoning. It would also be helpful to present Hume’s problem of induction, Nelson Goodman’s (1906-1998) new riddle of induction, and statistical as well as probabilistic inference as potential solutions to these problems.

Enumerative induction

The sort of induction that philosophers are interested in is known as enumerative induction. Enumerative induction (or simply induction) comes in two types, "strong" induction and "weak" induction.

Strong induction

Strong induction has the following form:

A1 is a B1.
A2 is a B2.


An is a Bn.
Therefore, all As are Bs.

An example of strong induction is that all ravens are black because each raven that has ever been observed has been black.

Weak induction

But notice that one need not make such a strong inference with induction because there are two types, the other being weak induction. Weak induction has the following form:

A1 is a B1.
A2 is a B2.


An is a Bn.
Therefore, the next A will be a B.

An example of weak induction is that because every raven that has ever been observed has been black, the next observed raven will be black.

Mathematical induction

Enumerative induction should not be confused with mathematical induction. While enumerative induction concerns matters of empirical fact, mathematical induction concerns matters of mathematical fact. Specifically, mathematical induction is what mathematicians use to make claims about an infinite set of mathematical objects. Mathematical induction is different from enumerative induction because mathematical induction guarantees the truth of its conclusions since it rests on what is called an “inductive definition” (sometimes called a “recursive definition”).

Inductive definitions define sets (usually infinite sets) of mathematical objects. They consist of a base clause specifying the basic elements of the set, one or more inductive clauses specifying how additional elements are generated from existing elements, and a final clause stipulating that all of the elements in the set are either basic or in the set because of one or more applications of the inductive clause or clauses (Barwise and Etchemendy 2000, 567). For example, the set of natural numbers (N) can be inductively defined as follows:

1. 0 is an element in N.
2. For any element x, if x is an element in N, then (x + 1) is an element in N.
3. Nothing else is an element in N unless it satisfies condition (1) or (2).

Thus, in this example, (1) is the base clause, (2) is the inductive clause, and (3) is the final clause. Inductive definitions are helpful because, as mentioned before, mathematical inductions are infallible precisely because they rest on inductive definitions. Consider the following mathematical induction, which proves that the sum Sn of the numbers between 0 and a natural number n satisfies Sn = ½n(n + 1), a result first proven by the mathematician Carl Friedrich Gauss (1777-1855):

First, we know that S0 = ½(0)(0 + 1) = 0. Now assume Sm = ½m(m + 1) for some natural number m. Then, since Sm + 1 = Sm + (m + 1), it follows that Sm + 1 = ½m(m + 1) + (m + 1). Furthermore, ½m(m + 1) + (m + 1) = (m + 1)(½m + 1) = ½(m + 1)(m + 2) = ½(m + 1)((m + 1) + 1). Since the first subproof shows that 0 satisfies Sn = ½n(n + 1), and the second subproof shows that whenever a natural number satisfies Sn = ½n(n + 1), the natural number consecutive to it satisfies it as well, then by the inductive definition of N, Sn = ½n(n + 1) holds for all natural numbers.

Notice that the above mathematical induction is infallible because it rests on the inductive definition of N. However, unlike mathematical inductions, enumerative inductions are not infallible because they do not rest on inductive definitions.
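Although a finite check is no substitute for the inductive proof above, Gauss's closed form can be compared against direct summation; a minimal Python sketch (the function name is illustrative):

```python
def gauss_sum(n):
    """Closed form S_n = n(n + 1)/2, proved above by mathematical induction."""
    return n * (n + 1) // 2

# Compare against direct summation for the first hundred natural numbers.
assert all(gauss_sum(n) == sum(range(n + 1)) for n in range(100))
print("closed form matches direct summation for n = 0..99")
```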

Non-inductive reasoning

Induction contrasts with two other important forms of reasoning: Deduction and abduction.

Deduction

Deduction is a form of reasoning whereby the premises of the argument guarantee the conclusion. More precisely, in a valid deductive argument, if the premises are true, then the conclusion must be true. There are several forms of deduction, but the most basic one is modus ponens, which has the following form:

If A, then B
A
Therefore, B

Deductions are unique because they guarantee the truth of their conclusions if the premises are true. Consider the following example of a deductive argument:

Either Tim runs track or he plays tennis.
Tim does not play tennis.
Therefore, Tim runs track.

There is no way that the conclusion of this argument can be false if its premises are true. Now consider the following inductive argument:

Every raven that has ever been observed has been black.
Therefore, all ravens are black.

This argument is deductively invalid because its premises can be true while its conclusion is false. For instance, some ravens could be brown although no one has observed them yet. Thus a defining feature of inductive arguments is that they are deductively invalid.
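Deductive validity for propositional forms can be checked mechanically: an argument is valid just in case no assignment of truth values makes all the premises true and the conclusion false. A small Python sketch, checking modus ponens and the disjunctive syllogism used above (the helper name `valid` is illustrative):

```python
from itertools import product

def valid(premises, conclusion):
    """True iff no truth-value assignment over (a, b) makes all
    premises true while the conclusion is false."""
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

# Modus ponens: if A then B; A; therefore B.
print(valid([lambda a, b: (not a) or b, lambda a, b: a], lambda a, b: b))  # True

# Disjunctive syllogism: A or B; not B; therefore A.
print(valid([lambda a, b: a or b, lambda a, b: not b], lambda a, b: a))  # True
```

By contrast, the inductive raven argument cannot be validated this way, since its conclusion outruns any finite set of premises.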

Abduction

Abduction is a form of reasoning whereby an antecedent is inferred from its consequent. The form of abduction is below:

If A, then B
B
Therefore, A

Notice that abduction is deductively invalid as well because the truth of the premises in an abductive argument does not guarantee the truth of their conclusions. For example, even if all dogs have legs, seeing legs does not imply that they belong to a dog.

Abduction is also distinct from induction, although both forms of reasoning are used widely in everyday as well as scientific reasoning. Although neither form guarantees the truth of its conclusions, scientists since Isaac Newton (1643-1727) have believed that induction is a stronger form of reasoning than abduction.

The problem of induction

David Hume questioned whether induction was a strong form of reasoning in his classic text, A Treatise of Human Nature. In this text, Hume argues that induction is an unjustified form of reasoning for the following reason. One believes inductions are good because nature is uniform in some deep respect. For instance, one induces that all ravens are black from a small sample of black ravens because he believes that there is a regularity of blackness among ravens, which is a particular uniformity in nature. However, why suppose there is a regularity of blackness among ravens? What justifies this assumption? Hume claims that one could know that nature is uniform either deductively or inductively. However, one admittedly cannot deduce this assumption, and any attempt to induce it would make the justification of induction circular. Thus, induction is an unjustifiable form of reasoning. This is Hume's problem of induction.

Instead of becoming a skeptic about induction, Hume sought to explain how people make inductions, and considered this explanation to be as good a justification of induction as could be given. Hume claimed that one makes inductions because of habit. In other words, habit explains why one induces that all ravens are black from having seen nothing but black ravens beforehand.

The new riddle of induction

Nelson Goodman (1955) questioned Hume’s solution to the problem of induction in his classic text Fact, Fiction, and Forecast. Although Goodman thought Hume was an extraordinary philosopher, he believed that Hume made one crucial mistake in identifying habit as what explains induction. The mistake is that people readily develop habits to make some inductions but not others, even though they have been exposed to observations that would support both. Goodman develops the following "grue" example to demonstrate his point:

Suppose that all observed emeralds have been green. Then we would readily induce that the next observed emerald would be green. But why green? Suppose "grue" is a term that applies to all observed green things or unobserved blue things. Then all observed emeralds have been grue as well. Yet none of us would induce that the next observed emerald would be blue even though there would be equivalent evidence for this induction.

Goodman anticipates the objection that since "grue" is defined in terms of green and blue, green and blue are prior and more fundamental categories than grue. However, Goodman responds that this priority is an illusion, because green and blue can likewise be defined in terms of grue and another term, "bleen," where something is bleen just in case it is observed and blue or unobserved and green. Then "green" can be defined as something observed and grue or unobserved and bleen, while "blue" can be defined as something observed and bleen or unobserved and grue. Thus the new riddle of induction is not about what justifies induction but about why people make the inductions they do, given that they have equal evidence for several incompatible inductions.
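Goodman's symmetry claim can be made concrete by treating "observed" as a flag and checking every case exhaustively. A minimal Python sketch, assuming a simplified two-color world in which "blue" is just "not green" (the predicate names are illustrative):

```python
from itertools import product

def grue(observed, green):
    # Grue: observed and green, or unobserved and blue (here, blue = not green).
    return green if observed else not green

def bleen(observed, green):
    # Bleen: observed and blue, or unobserved and green.
    return (not green) if observed else green

def green_from(observed, grue_val, bleen_val):
    # "Green" redefined Goodman-style: observed and grue, or unobserved and bleen.
    return grue_val if observed else bleen_val

# The redefinition recovers the original predicate in every case,
# so neither vocabulary is formally prior to the other.
for observed, green in product([True, False], repeat=2):
    assert green_from(observed, grue(observed, green), bleen(observed, green)) == green
print("green is exactly recoverable from grue and bleen")
```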

Goodman’s solution to the new riddle of induction is that people make inductions that involve familiar terms like "green," instead of ones that involve unfamiliar terms like "grue," because familiar terms are more entrenched than unfamiliar terms, which just means that familiar terms have been used in more inductions in the past. Thus statements that incorporate entrenched terms are “projectible” and appropriate for use in inductive arguments.

Notice that Goodman’s solution is somewhat unsatisfying. While he is correct that some terms are more entrenched than others, he provides no explanation for why unbalanced entrenchment exists. In order to finish Goodman’s project, the philosopher Willard Van Orman Quine (1908-2000) theorizes that entrenched terms correspond to natural kinds.

Quine (1969) demonstrates his point with the help of a familiar puzzle from the philosopher Carl Hempel (1905-1997), known as "the ravens paradox:"

Suppose that observing several black ravens is evidence for the induction that all ravens are black. Then since the contrapositive of "All ravens are black" is "All non-black things are non-ravens," observing non-black things such as green leaves, brown basketballs, and white baseballs is also evidence for the induction that all ravens are black. But how can this be?

Quine (1969) argues that observing non-black things is not evidence for the induction that all ravens are black because non-black things do not form a natural kind and projectible terms only refer to natural kinds (e.g. "ravens" refers to ravens). Thus terms are projectible (and become entrenched) because they refer to natural kinds.

Even though this extended solution to the new riddle of induction sounds plausible, several of the terms that we use in natural language do not correspond to natural kinds, yet we still use them in inductions. A typical example from the philosophy of language is the term "game," first used by Ludwig Wittgenstein (1889-1951) to demonstrate what he called “family resemblances.”

Look at how competent English speakers use the term "game." Examples of games are Monopoly, card games, the Olympic games, war games, tic-tac-toe, and so forth. Now, what do all of these games have in common? Wittgenstein would say, “nothing,” or if there is something they all have in common, that feature is not what makes them games. So games resemble each other although they do not form a kind. Of course, even though games are not natural kinds, people make inductions with the term, "game." For example, since most Olympic games have been in industrialized cities in the recent past, most Olympic games in the near future should occur in industrialized cities.

Given the difficulty of solving the new riddle of induction, many philosophers have teamed up with mathematicians to investigate mathematical methods for handling induction. A prime method for handling induction mathematically is statistical inference, which is based on probabilistic reasoning.

Statistical inference

Instead of asking whether all ravens are black because all observed ravens have been black, statisticians ask what the probability is that ravens are black, given that an appropriate sample of ravens has been black. Here is an example of statistical reasoning:

Suppose that the average stem length out of a sample of 13 soybean plants is 21.3 cm with a standard deviation of 1.22 cm. Then the probability that the interval (20.6, 22.1) contains the average stem length for all soybean plants is .95 according to Student’s t distribution (Samuels and Witmer 2003, 189).
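The soybean interval above can be reproduced, approximately, from the summary statistics alone; a Python sketch using a tabulated critical value of Student's t distribution (2.179 for 12 degrees of freedom at the 95 percent level; small differences from the quoted interval come from rounding):

```python
import math

mean, sd, n = 21.3, 1.22, 13        # sample statistics from the soybean example
t_crit = 2.179                      # Student's t, df = n - 1 = 12, two-sided 95%

se = sd / math.sqrt(n)              # standard error of the sample mean
margin = t_crit * se
lower, upper = mean - margin, mean + margin
print(f"95% CI for mean stem length: ({lower:.1f}, {upper:.1f}) cm")
```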

Despite the appeal of statistical inference, since it rests on probabilistic reasoning, it is only as sound as probability theory is at handling inductive reasoning.

Probabilistic inference

Bayesianism is the most influential interpretation of probability theory and is an equally influential framework for handling induction. Given new evidence, "Bayes' theorem" is used to evaluate how much the strength of a belief in a hypothesis should change.
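A single Bayesian update can be sketched directly from Bayes' theorem, P(H|E) = P(E|H)P(H) / P(E), with P(E) expanded over the hypothesis and its negation. The numbers below are invented purely for illustration:

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) by Bayes' theorem, where
    P(E) = P(E|H)P(H) + P(E|not H)P(not H)."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# Illustrative numbers: modest prior, evidence far likelier under H than not-H,
# so the strength of belief in H should rise.
posterior = bayes_update(prior=0.3, likelihood_h=0.9, likelihood_not_h=0.2)
print(round(posterior, 3))  # 0.659
```

Repeated application of the same update on successive observations is what subjective Bayesians appeal to when they argue that differing priors converge toward agreement.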

There is debate around what informs the original degree of belief. Objective Bayesians seek an objective value for the degree of probability of a hypothesis being correct and so do not avoid the philosophical criticisms of objectivism. Subjective Bayesians hold that prior probabilities represent subjective degrees of belief, but that the repeated application of Bayes' theorem leads to a high degree of agreement on the posterior probability. They therefore fail to provide an objective standard for choosing between conflicting hypotheses. The theorem can be used to produce a rational justification for a belief in some hypothesis, but at the expense of rejecting objectivism. Such a scheme cannot be used, for instance, to decide objectively between conflicting scientific paradigms.

Edwin Jaynes, an outspoken physicist and Bayesian, argued that "subjective" elements are present in all inference, for instance in choosing axioms for deductive inference; in choosing initial degrees of belief or "prior probabilities"; or in choosing likelihoods. He thus sought principles for assigning probabilities from qualitative knowledge. Maximum entropy – a generalization of the principle of indifference – and "transformation groups" are the two tools he produced. Both attempt to alleviate the subjectivity of probability assignment in specific situations by converting knowledge of features such as a situation's symmetry into unambiguous choices for probability distributions.

"Cox's theorem," which derives probability from a set of logical constraints on a system of inductive reasoning, prompts Bayesians to call their system an inductive logic. Nevertheless, how well probabilistic inference handles Hume’s original problem of induction as well as Goodman’s new riddle of induction is still a matter debated in contemporary philosophy and presumably will be for years to come.

References

  • Barwise, Jon and John Etchemendy. 2000. Language, Proof and Logic. Stanford: CSLI Publications.
  • Goodman, Nelson. 1955. Fact, Fiction, and Forecast. Cambridge: Harvard University Press.
  • Hume, David. 2002. A Treatise of Human Nature (David F. and Mary J. Norton, eds.). Oxford: Oxford University Press.
  • Quine, W.V.O. 1969. Ontological Relativity and Other Essays. New York: Columbia University Press.
  • Samuels, Myra and Jeffrey A. Witmer. 2003. Statistics for the Life Sciences. Upper Saddle River: Pearson Education.
  • Wittgenstein, Ludwig. 2001. Philosophical Investigations (G.E.M. Anscombe, trans.). Oxford: Blackwell.

External links

All links retrieved March 2, 2018.

General philosophy sources

Credits

New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats. The history of earlier contributions by wikipedians is accessible to researchers here:

The history of this article since it was imported to New World Encyclopedia:

Note: Some restrictions may apply to use of individual images which are separately licensed.