Inference

From New World Encyclopedia

Inference is the act or process of deriving a conclusion based on what one already knows or on what one assumes. The statements given as evidence for, or that supposedly lead to, the conclusion are known as premises.

Inference is studied within several different fields.

  • Human inference (i.e., how humans draw conclusions) is traditionally studied within the field of cognitive psychology.
  • Logic studies the laws of valid inference.
  • Statisticians have developed formal rules for inference from quantitative data.
  • Artificial intelligence researchers develop automated inference systems.

Three types of logical inference

Since the time of the American philosopher Charles Sanders Peirce (1839-1914)—who invented the term "abduction" and discussed what he called "abductive inference"—three types of inference have usually been acknowledged and discussed:

  • Deduction, a form of inference in which, if the premises are true, the conclusion must be true. (It is sometimes called reasoning from the rule to the individual instance, but that is not strictly speaking correct.) Only deductive inferences can be valid, as will be shown below.
  • Induction, an inference that leads to a rule or principle or general conclusion, based on observation of a sample or on observation of a case or instance. For example, "The sample of marbles we drew from the jar had 40% black ones and 60% red ones, thus we conclude that the entire population of marbles in that jar is 40% black and 60% red." Another example, "Every time we have put chemical X into acid, the mixture has turned red. Thus we conclude that chemical X turns acids red."
  • Abduction, an inference of the form, "Such and such phenomena (or conclusion) are observed. If X (an explanation or rule) were true and applied to this case, it would explain the phenomena (or conclusion). Thus X is likely the case (or is probably the correct explanation of what happened)." E.g., "My car won't start; the starter motor just makes a groaning noise and doesn't turn over quickly when I turn the key to the start position. If my battery were dead, this would explain the problem. Thus I conclude that my battery is dead."
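The marble example above can be read as enumerative induction: project a sample proportion onto the whole population, fallibly. A trivial sketch, with invented data matching the article's figures:

```python
# Enumerative induction: infer the population proportion from a sample.
# The sample below is invented to match the 40%/60% marble example.
sample = ["black"] * 4 + ["red"] * 6
est_black = sample.count("black") / len(sample)
print(est_black)   # 0.4, projected (fallibly) to the whole jar
```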

Valid inferences

Deductive inferences are either valid or invalid, but not both. Philosophical logic has attempted to define the rules of proper inference, i.e., the formal rules that, when correctly applied to true premises, lead to true conclusions. Aristotle has given one of the most famous statements of those rules in his Organon. Modern mathematical logic, beginning in the nineteenth century, has built numerous formal systems that embody Aristotelian logic (or variants thereof).

A valid argument form is defined as one that guarantees that if the premises are true, then the conclusion must be true; another way of saying this is that a valid argument form is truth-preserving or truth-transferring.

Strictly, we must speak and think of valid argument forms, since every valid argument is an instance of a valid argument form. It is the form of the argument that makes it valid (or invalid).

Notice that an argument can be valid—i.e., it can have a valid argument form—even if one or more of its premises are false because what is guaranteed by a valid argument form is that if the premise(s) are true, then the conclusion must be true. That an argument has a valid argument form does not, however, guarantee the truth of any of the premises, or the truth of the conclusion if it has at least one false premise.
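For propositional argument forms, this guarantee can be checked mechanically: a form is valid exactly when no assignment of truth values makes every premise true while the conclusion is false. A minimal sketch (the function names are this sketch's own):

```python
from itertools import product

def is_valid(premises, conclusion, n_vars):
    """Valid iff no truth assignment makes every premise true
    while the conclusion is false."""
    for values in product([True, False], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False          # counterexample: truth not preserved
    return True

implies = lambda a, b: (not a) or b

# Modus ponens: "If A then B; A; therefore B" is truth-preserving.
print(is_valid([lambda a, b: implies(a, b), lambda a, b: a],
               lambda a, b: b,
               2))                # True
```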

Validity and Soundness in Deductive Inferences

Logicians distinguish between valid and sound deductive inferences. A valid deductive inference (or argument) is one that fits or exhibits a valid argument form. A sound argument is one that satisfies two conditions: (1) it must be valid (i.e., have a valid argument form), and (2) all of its premises must actually be true. Sound arguments will necessarily have true conclusions, but a valid argument may have a false conclusion if at least one of the premises is false.

For example:

All cars are Toyotas.
This is a car. (True premise; the car is in fact a Ford.)
---------------------------------------------------------
Therefore this is a Toyota. (False conclusion)


The argument form of that argument is valid, but the argument is unsound because it has at least one false premise. The conclusion may be false because the car was a Ford (or other make of car). The problem is that the first premise of this argument—"All cars are Toyotas."—is false.

Strictly speaking, validity and soundness are properties of deductive inferences only, because deductive inferences are the only kind that can guarantee that if the premises are true, then the conclusion must be true.

In all other, nondeductive forms of inference—induction, abduction, or whatever other kinds there may be—it is always possible, even in the best or strongest of such inferences, for all the premises to be true and the conclusion nevertheless to be false. So, strictly speaking, all nondeductive inferences are invalid.

Since all forms of argument other than deduction are, strictly speaking, invalid, the terms "valid" and "invalid" should be reserved for discussion of deductive inferences. For inductive and abductive inferences, the terms "strong" and "weak" should be used instead. Admittedly, some people use the term "valid" to mean "true," and speak of "valid inductive arguments" when what they mean is "strong inductive arguments." To avoid confusion, it is better to reserve "valid" for deductive arguments that have valid argument forms, to use "true" and "false" for statements, and to use "strong" and "weak" for inductive and abductive arguments.

An example: the classic syllogism

Greek philosophers defined a number of syllogisms, correct three-part inferences, that can be used as building blocks for more complex reasoning. We'll begin with the most famous of them all:

All men are mortal
Socrates is a man
------------------
Therefore Socrates is mortal.

The reader can check that the premises and conclusion are true. The validity of the inference, however, is a separate question: it depends not on the truth of the premises and conclusion but on the form of the inference, that is, on the formal rules of inference being used. In traditional logic, the form of the syllogism is:

All A is B
All C is A
----------
All C is B

Since the syllogism fits this form, the inference is valid. And if the premises are true, then the conclusion is necessarily true.

In predicate logic (a simple but useful formalization of Aristotelian logic), this syllogism can be stated as follows:

∀ X, man(X) → mortal(X)
man(Socrates)
-------------------------------
∴mortal(Socrates)

Or in its general form:

∀ X, A(X) → B(X)
A(x)
------------------------
∴B(x)

∀, the universal quantifier, is pronounced "for all." It allows us to state a general property. Here it is used to say that "if any X is a man, X is also mortal." Socrates is a man, and the conclusion follows.
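Read extensionally, "All A is B" says that the set of As is a subset of the set of Bs, so the syllogism can be mirrored with ordinary sets. A small illustrative sketch (the sets and names are invented for the example):

```python
# "All A is B" read extensionally: the set of As is a subset of the Bs.
# The sets and names below are invented for this illustration.
men = {"socrates", "plato"}
mortals = men | {"fido"}        # all men are mortal, plus other mortals

def all_a_is_b(a, b):
    return a <= b               # subset test

# Premises: All men are mortal; Socrates is a man.
# Conclusion: Socrates is mortal.
if all_a_is_b(men, mortals) and "socrates" in men:
    print("socrates" in mortals)   # True: the conclusion cannot fail
```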


Consider the following:

All fat people are musicians
John Lennon was fat
-------------------
Therefore John Lennon was a musician

In this case we have two false premises that imply a true conclusion. The inference is valid because it follows the form of a correct or valid inference, but the inference is unsound—even though the conclusion is true—because at least one of the premises is false.

Accuracy of Inductive and Deductive Inferences

A conclusion inferred from multiple observations is reached by the process of inductive reasoning. Such a conclusion may be true or false, and may be tested by additional observations. In contrast, the conclusion of a valid deductive inference is necessarily true if the premises are true; it is reached by the process of deductive reasoning.

By contrast with induction and abduction, a valid deductive inference cannot lead to a false conclusion if the premises are true. This is because the validity of a deductive inference is formal. The inferred conclusion of a valid deductive inference is necessarily true if the premises it is based on are true. In every other form of inference (i.e., every nondeductive inference) it is entirely possible for the conclusion to be false even though all the premises are true.

Nondeductive inferences, especially induction

The problem of the invalidity of all inductive inferences—often known as the "problem of induction," and also as "Hume's problem"—was first presented in detail by philosopher David Hume (1711-1776). Since then an enormous amount of thought, discussion, and ink has been devoted to this problem. Some have gone so far as to declare that since induction is the method of science, it ipso facto has to be good. Others have tried to adopt some additional premise—such as assuming that the future will be like the past—to paper over the problem. Karl Popper thought he had solved the problem through his method of falsification, a method that relies on the valid deductive inference-form of modus tollens.

Since inductive inferences are, by definition, invalid, some other means of assessing them is needed, assuming that they are going to be countenanced at all. John Stuart Mill produced a set of criteria known as "Mill's Methods" for distinguishing between strong and weak inductive inferences. Other criteria have been introduced and championed by other philosophers and logicians.

Since the terms valid and invalid do not, strictly speaking, apply here, other terms are needed to assess the acceptability or non-acceptability of nondeductive inferences, and the terms usually used are strong and weak.

Fallacies

An incorrect inference is known as a fallacy.

Philosophers who study informal logic—logic based not on the form of the inference, but on the content—have compiled large lists of what are usually called informal fallacies, and cognitive psychologists have documented many biases in human reasoning that favor incorrect reasoning. Some of the best known informal fallacies are:

  • ad hominem (attacking the person instead of his argument or reasons);
  • argumentum ad baculum (threatening to harm the respondent if he does not accept your argument or conclusion);
  • the bandwagon argument (arguing that because everyone else is getting aboard this program or accepting this argument, you should do so too);
  • the red herring (dragging a distraction across the discussion or argument to distract the hearer from examining it properly);
  • the slippery slope (arguing that if one embarks on or accepts a first step in something, this will lead inevitably to an undesirable conclusion).

Most introductory logic textbooks have large lists and discussions of informal fallacies.

For some invalid, supposedly deductive arguments, there are what are known as "formal fallacies." These are argument forms that superficially mimic valid deductive argument forms, but contain a mistake that renders their form invalid.

For example, affirming the consequent (an invalid argument form) superficially resembles modus ponens (a valid argument form), which affirms the antecedent:

The valid form of Modus Ponens is:

If A, then B.
This individual is an A.
---------------------------
Thus this individual is a B.


The invalid argument form is:

If A, then B.
This individual is a B.
----------------------------
Thus this individual is an A.


For example:

If an animal is a mammal, then it has a backbone.
This animal has a backbone.
------------------------------------------------------------------
Thus this animal is a mammal. (But the animal is actually a bird.)

This example shows that the argument form is invalid because both premises can be true while the conclusion is false.
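This can be verified by enumerating truth assignments: affirming the consequent has a counterexample row in which both premises are true and the conclusion is false, which modus ponens lacks. A brief sketch:

```python
from itertools import product

implies = lambda a, b: (not a) or b

# Affirming the consequent: premises "if A then B" and "B", conclusion "A".
# A counterexample is any row where both premises hold but "A" fails.
counterexamples = [(a, b) for a, b in product([True, False], repeat=2)
                   if implies(a, b) and b and not a]
print(counterexamples)   # [(False, True)]: premises true, conclusion false
```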

Automatic logical inference

Although now somewhat past their heyday, AI systems for automated logical inference were once extremely popular research topics and have found industrial application in the form of expert systems.

An inference system's job is to extend a knowledge base automatically. The knowledge base (KB) is a set of propositions that represent what the system knows about the world. Several techniques can be used by such a system to extend the KB by means of valid inferences. An additional requirement is that the conclusions the system arrives at be relevant to its task.

An example: inference using Prolog

Prolog (Programming in Logic) is a programming language based on a subset of predicate calculus. Its inference engine checks whether a given proposition can be inferred from the KB, using an algorithm called backward chaining.

Let us return to our Socrates syllogism. We enter into our Knowledge Base the following piece of code:

mortal(X) :-  man(X).
man(socrates). 

This states that all men are mortal and that Socrates is a man. Now we can ask Prolog about Socrates.

?- mortal(socrates).

Yes 

On the other hand :

?- mortal(plato).

No 

This is because Prolog does not know anything about Plato, and hence defaults to any property about Plato being false (the so-called closed world assumption). Prolog can be used for vastly more complicated inference tasks. See the corresponding article for further examples.
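Backward chaining works from the goal back to the facts: to prove a goal, find a rule whose head matches it and recursively prove the rule's body. The toy chainer below loosely mirrors the Socrates example, including the closed-world assumption; its string-based representation is this sketch's own simplification, not how Prolog works internally.

```python
# A toy backward chainer over Horn clauses, mirroring the Prolog example.
facts = {"man(socrates)"}
rules = {"mortal(X)": ["man(X)"]}          # head :- body

def prove(goal):
    if goal in facts:
        return True
    for head, body in rules.items():
        pred, var = head[:-1].split("(")   # e.g. "mortal", "X"
        if goal.startswith(pred + "("):
            arg = goal[len(pred) + 1:-1]   # e.g. "socrates"
            # Substitute the rule's variable, then prove each subgoal.
            subgoals = [b.replace(var, arg) for b in body]
            if all(prove(s) for s in subgoals):
                return True
    return False   # closed-world assumption: unprovable counts as false

print(prove("mortal(socrates)"))   # True
print(prove("mortal(plato)"))      # False
```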

Inference and Uncertainty

Traditional logic is concerned only with certainty: one progresses from certain premises to certain conclusions. There are several motivations for extending logic to deal with uncertain propositions and weaker modes of reasoning.

  • Philosophical motivations
    • A large part of our everyday reasoning does not follow the strict rules of logic, but is nevertheless effective in many cases
    • Science itself is not deductive, but largely inductive, and its process cannot be captured by standard logic (see problem of induction).
  • Technical motivations
    • Statisticians and scientists wish to be able to infer parameters or test hypotheses on statistical data in a rigorous, quantified way.
    • Artificial intelligence systems need to reason efficiently about uncertain quantities.

Common sense and uncertain reasoning

The reason most examples of applying deductive logic, such as the one above, seem artificial is that they are rarely encountered outside fields such as mathematics. Most of our everyday reasoning is of a less "pure" nature.

To take an example: suppose you live in an apartment. Late at night, you are awoken by creaking sounds in the ceiling. You infer from these sounds that your neighbor upstairs is having another bout of insomnia and is pacing in his room, sleepless.

Although that inference seems perfectly sensible, it does not fit the logical framework described above. First, the reasoning is based on uncertain facts: what you heard were creaks, not necessarily footsteps. But even if those facts were certain, the inference is inductive in nature: perhaps you have often heard your neighbor at night, and the best explanation you have found is that he or she is an insomniac. Hence tonight's footsteps.

It is easy to see that this line of reasoning does not necessarily lead to true conclusions: perhaps your neighbor had a very early plane to catch, which would explain the footsteps just as well. Uncertain reasoning can only find the best explanation among many alternatives.

Bayesian statistics and probability logic

Philosophers and scientists who follow the Bayesian framework for inference use the mathematical rules of probability to find this best explanation. The Bayesian view has a number of desirable features: one of them is that it embeds deductive (certain) logic as a special case (this prompts some writers, following E. T. Jaynes, to call Bayesian probability "probability logic").

Bayesians identify probabilities with degrees of belief: certainly true propositions have probability 1, and certainly false propositions have probability 0. To say that "it's going to rain tomorrow" has a probability of 0.9 is to say that you consider rain tomorrow very likely.

Through the rules of probability, the probability of a conclusion and of alternatives can be calculated. The best explanation is most often identified with the most probable (see Bayesian decision theory). A central rule of Bayesian inference is Bayes' theorem, which gave its name to the field.
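As a numeric sketch, Bayes' theorem applied to the earlier creaking-ceiling example; all the probabilities below are invented for illustration:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), applied to the
# creaking-ceiling story. All numbers are invented for illustration.
p_insomnia = 0.3                    # prior: the neighbor is an insomniac
p_creaks_if_insomnia = 0.8          # likelihood of creaks given insomnia
p_creaks_if_not = 0.1               # creaks from other causes

# Total probability of hearing creaks, then the posterior for insomnia.
p_creaks = (p_creaks_if_insomnia * p_insomnia
            + p_creaks_if_not * (1 - p_insomnia))
posterior = p_creaks_if_insomnia * p_insomnia / p_creaks
print(round(posterior, 3))          # 0.774: insomnia is the better explanation
```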

See Bayesian inference for examples.

Nonmonotonic logic

Source: André Fuhrmann, "Nonmonotonic Logic."

A relation of inference is monotonic if the addition of premises does not undermine previously reached conclusions; otherwise the relation is nonmonotonic. Deductive inference, at least according to the canons of classical logic, is monotonic: if a conclusion is reached on the basis of a certain set of premises, then that conclusion still holds if more premises are added.

By contrast, everyday reasoning is mostly nonmonotonic because it involves risk: we jump to conclusions from deductively insufficient premises. We know when it is worthwhile or even necessary (e.g., in medical diagnosis) to take the risk. Yet we are also aware that such inference is defeasible—that new information may undermine old conclusions. Various kinds of defeasible but remarkably successful inference have traditionally captured the attention of philosophers (theories of induction, Peirce's theory of abduction, inference to the best explanation, etc.). More recently logicians have begun to approach the phenomenon from a formal point of view. The result is a large body of theories at the interface of philosophy, logic and artificial intelligence.
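The defining behavior, new premises withdrawing old conclusions, can be shown in a few lines. Below is a toy default rule ("birds fly unless known to be an exception"), with names invented for this sketch:

```python
# Nonmonotonic (defeasible) inference in miniature: a default conclusion
# holds unless an exception is known. Names are invented for illustration.
def can_fly(bird, exceptions):
    return bird not in exceptions       # default rule: birds fly

exceptions = set()
print(can_fly("tweety", exceptions))    # True: we jump to the conclusion

exceptions.add("tweety")                # new premise: Tweety is a penguin
print(can_fly("tweety", exceptions))    # False: the conclusion is withdrawn
```

Adding a premise reversed a previously reached conclusion, which classical deduction never does.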

Inference in Indian Logic

See the main article: Logic in Indian Philosophy.

Inference is the basic component of logic in Indian philosophy. The systematic study of logical reasoning there can be traced back to antiquity. Logic in Indian philosophy is closely tied to its metaphysics.

References

  • Fuhrmann, André. Nonmonotonic Logic. Retrieved April 8, 2008.
  • Hacking, Ian. An Introduction to Probability and Inductive Logic. Cambridge University Press, 2000. ISBN 0521775019.
  • Jaynes, Edwin Thompson. Probability Theory: The Logic of Science. Cambridge University Press, 2003. ISBN 0521592712.
  • MacKay, David J.C. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2002. ISBN 0521642981.
  • Russell, Stuart, and Peter Norvig. Artificial Intelligence: A Modern Approach, 2nd ed. Prentice Hall, 2002. ISBN 0137903952.
  • Tijms, Henk. Understanding Probability: Chance Rules in Everyday Life, 2nd ed. Cambridge University Press, 2007. ISBN 0521701724.

Credits

New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License, which may be used and disseminated with proper attribution. Credit is due under the terms of this license, which can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.

Note: Some restrictions may apply to use of individual images which are separately licensed.
