Maarten Maartensz:    Philosophical Dictionary | Filosofisch Woordenboek                      

 I - Induction


Induction: To confirm or infirm (support or undermine) assumptions by showing their conclusions do (not) conform to the observable facts.

There is a related sense of "induction", namely generalization or hypothesis, but for various reasons this is better named abduction.

The definition given above is not the standard one, for which the reader is referred to Problems of Induction and Mill's Methods of Induction.

1. The sense of "induction" used in this lemma rests largely on probability theory, and is related to the deductive fallacy of affirming the conclusion (standardly called affirming the consequent), which it avoids.

First, to explain this fallacy. One basic deductively valid rule of inference is modus ponens: from (A) and (A-->B), it follows that (B). The converse inference, from (B) and (A-->B) to (A), is the fallacy of affirming the conclusion. It is easy to see that this is a fallacy - take (A)=(This is a cat) and (B)=(This is an animal) - yet the fallacy is quite common (especially in political argumentation), and it also conforms to an intuition that may be stated thus: from (B) and (A-->B), it follows that (A) is more probable than it was before learning (B). To use the same example: learning that (This is an animal) at least excludes the cases that (This is a plant) etc., and thus makes the hypothesis that (This is a cat) somewhat more probable, if not much more.
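The contrast between the valid rule and the fallacy can be checked mechanically by enumerating truth values. The following Python sketch (an illustration added here, not part of the original text) tests an argument form by checking that its conclusion is true under every assignment that makes all its premises true:

```python
from itertools import product

def implies(a, b):
    # Material implication: (A --> B) is false only when A is true and B false.
    return (not a) or b

def valid(premises, conclusion):
    # An argument form is valid iff the conclusion holds in every
    # truth-value assignment in which all premises hold.
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

# Modus ponens: from (A) and (A-->B), infer (B).
print(valid([lambda a, b: a, lambda a, b: implies(a, b)],
            lambda a, b: b))   # True: valid

# Affirming the conclusion: from (B) and (A-->B), infer (A).
print(valid([lambda a, b: b, lambda a, b: implies(a, b)],
            lambda a, b: a))   # False: A=false, B=true is a counterexample
```

The counterexample found in the second case is exactly the cat/animal one: something can be an animal (B true) without being a cat (A false), while (A-->B) still holds.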

Second, to show how this intuition is supported by probability theory. In probability theory, there is an elementary theorem that p(A&B) <= p(A). Indeed, this follows from the theorem that p(A)=p(A&B)+p(A&~B).
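Both theorems can be verified on a toy finite sample space of equiprobable outcomes. The outcomes chosen below are an assumed illustration (they are not from the text); exact arithmetic is done with Python's fractions module:

```python
from fractions import Fraction

# Assumed toy sample space: four equiprobable outcomes (kind, colour).
outcomes = [("cat", "black"), ("cat", "white"),
            ("dog", "black"), ("dog", "white")]

def p(event):
    """Probability of an event, given as a predicate over outcomes."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] == "cat"     # A = (This is a cat)
B = lambda o: o[1] == "black"   # B = (This is black)

A_and_B     = lambda o: A(o) and B(o)
A_and_not_B = lambda o: A(o) and not B(o)

print(p(A))                                   # 1/2
print(p(A_and_B))                             # 1/4
print(p(A) == p(A_and_B) + p(A_and_not_B))    # True: p(A)=p(A&B)+p(A&~B)
print(p(A_and_B) <= p(A))                     # True: p(A&B) <= p(A)
```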

Also, by definition p(A&B)=p(B|A).p(A), where p(B|A) is the conditional probability of B given A. In these terms, and still using the above example, what we are interested in is p(A|B), which equals p(B|A).p(A)/p(B).

Now, supposing that B indeed follows deductively from A, p(B|A)=1 and so p(A|B)=p(A)/p(B). Since p(B)<=1, it follows on the same supposition that p(A|B)>=p(A) - and thus, if we learn that (B) and know that (A-->B), we can infer that p(A|B) is at least as great as p(A) was before learning that (B), and greater if p(B) itself is less than 1, which is the normal case, since there is nothing to explain if (B) is certain to start with.
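The computation above can be made concrete with a small numerical sketch of the cat/animal example. The values p(A)=0.1 and p(B)=0.4 are assumed for illustration only; what matters is that with p(B|A)=1 and p(B)<1, the posterior exceeds the prior:

```python
# Assumed illustrative values, not from the text:
p_A = 0.1          # prior probability that (This is a cat)
p_B = 0.4          # probability that (This is an animal)
p_B_given_A = 1.0  # B follows deductively from A, so p(B|A) = 1

# p(A|B) = p(B|A).p(A)/p(B)
p_A_given_B = p_B_given_A * p_A / p_B

print(p_A_given_B)         # 0.25
print(p_A_given_B >= p_A)  # True: learning (B) raised the probability of (A)
```

With p(B)=1 instead, the posterior would equal the prior, matching the remark that a certain (B) confirms nothing.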

This explains in principle how we can confirm or infirm assumptions using probability theory, and how this avoids the deductive fallacy of confirmation: essentially through the probabilistic analysis of conjunction, which is more subtle than the deductive one.

2. The account just given also explains in principle how one can learn from experience: by framing a hypothesis that accounts for the facts one wants to explain, deriving from it consequences that can be tested in experience and are not certainly true, and then proceeding as above.
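The procedure just described can be sketched as repeated probabilistic updating: each time an uncertain consequence deduced from a hypothesis H is observed, the probability of H is divided by the probability of the consequence, as in the previous section. All the numbers below are assumed for illustration only:

```python
# Assumed prior probability of the hypothesis H:
p_H = 0.1

# Assumed probabilities of three successive predictions deduced from H,
# each observed to be true. Since each follows deductively from H,
# p(E|H) = 1 and the update is p(H|E) = p(H)/p(E).
for p_E in [0.5, 0.6, 0.8]:
    p_H = min(p_H / p_E, 1.0)   # cap at 1, since probabilities cannot exceed 1
    print(round(p_H, 4))
```

Each confirmed but initially uncertain prediction raises p(H); this simple loop treats the successive p(E) values as given, which is precisely the gap noted in caveats (I) and (II) below.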

The reason to insert the qualification "in principle" is that the above account is not complete, especially in the following two important respects:

(I) The account gives no explanation of, or rules for, attributing probabilities either to theories or to the empirical predictions deduced from them, and
(II) Probability theory by itself does not provide any means to settle the probability of any statement whose probability is neither 0 nor 1.

For more, see: Problem of Induction


See also: Abduction, Deduction, Eduction, Inference, Logic, Probability, Problem of Induction


Ayer, Hume, Goodman, Howson & Urbach, Reichenbach, Rescher, Russell, Salmon, Stegmüller

 Original: Aug 22, 2004                                                Last edited: 12 December 2011.