(Work In Progress: output at 07/08/2020 20:28:10)

(For earlier versions of this Note, see the table at the end)

__Background__

- This is one of my BA essays on the problem of induction, investigating Hempel's paradox of the ravens. It was written in May 2002.
- It was originally only available as a PDF. It has now been converted to Note format as below.
- Currently it is as originally written. I may add further comments in due course.

- The question asks us not so much to *explain* Hempel’s paradox as to *draw lessons from it*. We’re asked whether background information always influences what counts as confirming a hypothesis. In this essay, I will first describe what Hempel’s Paradox is, including a quick check that it is not a pseudo-paradox. Then I’ll review Nelson Goodman’s response to the paradox, which is to accept the premise of the question. This response is then tested rather more quantitatively with some Bayesian confirmation theory. Finally, I reach a conclusion that supports the premise of the question.
- This essay is primarily indebted both to "Goodman (Nelson) - Fact, Fiction and Forecast", __Chapter III, §3___{1}, and to lecture notes by Dorothy Edgington, who takes a more quantitative, Bayesian approach.

- First of all, we need to state what Hempel’s paradox is.
- Briefly, the paradox runs as follows:
- (1) Hypotheses are confirmed by their instances. That is, if a hypothesis H predicts X, and X occurs, then confidence in H increases.
- (2) The hypothesis (H_{1}) that all ravens are black is logically equivalent to the hypothesis (H_{2}) that all non-black things are non-ravens.
- (3) Consequently, evidence for H_{2} ought to be considered evidence for H_{1} (and vice versa).
- (4) Therefore, this white sheet of paper confirms not only that all non-black things are non-ravens but also that all ravens are black.

- We ought first of all to check that there’s nothing wrong with the reasoning behind the paradox itself.
- This is basically obvious.
- Premise (1) cannot be gainsaid if evidence is to be relevant to our choices of hypotheses.
- For Premise (2), we simply have to check that the two hypotheses would be confirmed or falsified by the same states of affairs. This is seen by noting that (Rx ⊃ Bx) ↔ (¬Bx ⊃ ¬Rx), the equivalence carrying through to the universally-quantified expressions.
- Premise (3), while the root cause of the paradox, just seems intuitively obvious; the two hypotheses are satisfied by the same states of affairs.
- The conclusion (4) follows from the premises.
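The equivalence invoked in checking Premise (2) can be verified mechanically. A minimal Python sketch (not part of the original essay), checking Rx ⊃ Bx against its contrapositive over all truth assignments:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material conditional: p ⊃ q is false only when p is true and q is false.
    return (not p) or q

# (Rx ⊃ Bx) ↔ (¬Bx ⊃ ¬Rx) holds under every assignment to Rx, Bx:
for r, b in product([True, False], repeat=2):
    assert implies(r, b) == implies(not b, not r)
print("equivalent under all assignments")
```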

- Goodman’s answer to the paradox – which “makes the prospects for indoor ornithology vanish” – is that the example unfairly relies on undisclosed background information. Evidence confirms or disconfirms any number of hypotheses, but which ones we entertain will depend on our background knowledge.
- The fact that a given evidential object is neither black nor a raven confirms the following hypotheses:
  - H_{1}: All ravens are black.
  - H_{2}: Everything that is not black is not a raven.
  - H_{3}: Everything that is not a raven is not black.
  - H_{4}: Nothing is either black or a raven.
- From our background information, two of these hypotheses (H_{3} & H_{4}) are obviously false and the other two are equally obviously true, but we ignore the false ones because we are familiar, respectively, with lots of black non-ravens, and lots of things that are either black or ravens. According to Goodman, we are not allowed to assume this background knowledge. Given that our evidence confirms H_{4} – that there are no ravens – it’s not surprising that it confirms H_{1} – that, if there were any ravens, they would be black (since conditionals with false antecedents are trivially true). However, it also confirms the hypothesis that if there were any ravens, they would __not__ be black.
- This seems fair enough, but it will help to look at the problem another way, using Bayesian probability.

__Quick Proof of Bayes’ Theorem__

- We start off with the basic truth of conditional probability, irrespective of the interpretation of the events E and H, that p(E&H) = p(E) × p(H|E). Similarly, p(H&E) = p(H) × p(E|H). So, given that p(E&H) = p(H&E), we have the simple form of Bayes’ theorem, that:
- p(H|E) / p(H) = p(E|H) / p(E)

- The interpretation of this is as follows. The probability of an hypothesis, H, given a piece of evidence E, divided by the probability of that hypothesis before the evidence, equals the probability of the evidence, E, on the assumption the hypothesis is true, divided by the probability of the evidence irrespective of the truth of the hypothesis.
- What we want is evidence that increases the probability of our hypothesis, ie. we want p(H|E) > p(H). This obviously occurs if p(H|E) / p(H) > 1, and, consequently (from our equation), if p(E|H) / p(E) > 1. So, our evidence increases the probability of our hypothesis if p(E|H) / p(E) > 1. This is also evident from the standard form of Bayes’ Theorem:
- p(H|E) = p(H) × p(E|H) / p(E) …… (*)
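As a numerical sanity check, the following Python sketch implements (*) and verifies the simple form above; the probability values are arbitrary illustrative assumptions, not data from the essay:

```python
def posterior(p_h: float, p_e_given_h: float, p_e: float) -> float:
    """Bayes' Theorem in form (*): p(H|E) = p(H) * p(E|H) / p(E)."""
    return p_h * p_e_given_h / p_e

# Illustrative (assumed) values:
p_h, p_e_given_h, p_e = 0.3, 0.8, 0.6
p_h_given_e = posterior(p_h, p_e_given_h, p_e)

# Simple form: p(H|E) / p(H) = p(E|H) / p(E)
assert abs(p_h_given_e / p_h - p_e_given_h / p_e) < 1e-12

# Evidence confirms H exactly when p(E|H) / p(E) > 1:
print(p_h_given_e > p_h)  # True here, since 0.8 / 0.6 > 1
```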

__Application of Bayes’ Theorem to Coin-Tossing__

- It is useful to see Bayes’ Theorem in action with a simple example, to highlight issues that will arise in the ensuing discussion, so we’ll imagine we’re tossing a coin, and our hypothesis H is that it’s double-headed. This raises the issue of *sample spaces*: we restrict the outcome of the experiment of coin tossing to heads or tails, ignoring the possibility of outcomes where coins roll away, land on their edge or never come down.
- We’ll suppose that we’re initially highly sceptical, and give H only a 1% chance. This points out the first limitation of Bayesian methods – we have to “seed” the formula with some initial probability, which is simply our initial degree of confidence in the hypothesis. As we will see, it’s easy to have inconsistent beliefs where degrees of confidence are concerned.
- If the first toss lands tails, our scepticism is rewarded, for (using (*) and assuming that p(E) = 0.5 since most coins are of the head/tails variety) p(H|E) = 0.01*0/0.5 = 0. This is because p(E|H), the probability of tails on the hypothesis that the coin is double-headed is zero. Our evidence has falsified the hypothesis. Since p(H|E) = 0 becomes the p(H) in the next round of trials, p(H|E) stays at zero throughout subsequent trials, no matter how many heads we get. This is in accord with intuition. If we’ve seen a tail, the coin can’t be double-headed.
- Say the first toss lands heads. Then, p(H|E) = 0.01*1/0.5 = 0.02, since p(E|H), the probability of heads on the hypothesis that the coin is double-headed, is 1. What about the second round of trials? We set p(H) to 0.02, and p(E|H) is again 1. What about p(E), presupposing that E is another head? There seems no good reason to budge from our original presupposition of an unbiased coin, so p(E) is again 0.5, and p(H|E) = 0.02*1/0.5 = 0.04. It’s easy to see that after 7 successive heads, we’d end up with p(H|E) = 1.28, which is invalid! There are two lessons to be learnt from this:
- Firstly, since we can consider 7 successive heads as one piece of evidence, our initial confidence of 1% was inconsistent with the conjunction of this piece of potential evidence and our belief that we had a normal coin, as demonstrated by assigning p(E) = 1/128, whence p(H|E) = 0.01 × 1 / (1/128) = 1.28 > 1.
- Secondly, and relatedly, as the number of consecutive heads increases, our confidence in a fair coin must decrease if we are to maintain consistent beliefs.
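These two lessons can be made vivid with a short Python sketch of the sequential updating described above. The only inputs are the essay’s own numbers (a 1% prior and p(E) = 0.5 per toss); the “consistent” variant is a standard Bayesian repair, not something the essay spells out:

```python
def update(p_h, p_e_given_h, p_e):
    # One round of updating via (*): p(H|E) = p(H) * p(E|H) / p(E)
    return p_h * p_e_given_h / p_e

# Inconsistent updating: p(E) pinned at 0.5 every round, as in the text.
p_h = 0.01
for _ in range(7):               # seven successive heads
    p_h = update(p_h, 1.0, 0.5)  # p(heads | double-headed) = 1
print(round(p_h, 2))  # 1.28 -- an impossible "probability"

# Consistent updating: p(E) must track our current credence in H, since
# p(next head) = p(H) * 1 + (1 - p(H)) * 0.5.
p_h = 0.01
for _ in range(7):
    p_e = p_h * 1.0 + (1 - p_h) * 0.5
    p_h = update(p_h, 1.0, p_e)
print(round(p_h, 4))  # 0.5639 -- rising, but still a legitimate probability
```

Note how, in the consistent variant, confidence in the fair-coin alternative falls exactly as fast as confidence in H rises, which is the second lesson above.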

__Application of Bayes’ Theorem to the Raven Paradox__

- Now let’s interpret Bayes’ Theorem using H_{1} (“all ravens are black”) for H, and two separate, alternative pieces of evidence, E_{1} and E_{2}, for E:
  - E_{1} is the occurrence of a black raven, and
  - E_{2} is the occurrence of a non-black non-raven.
- Clearly, we cannot estimate p(E_{i}|H_{1}) or p(E_{i}) without background information – that is, without some knowledge of the world. To come to a conclusion, we also need to choose our sample spaces carefully.
- For E_{1}, I’ll assume our sample space is of *ravens*, to simplify calculating the probabilities. Then, if H_{1} (all ravens are black) is __true__, E_{1} (a *black* raven) would be certain (since our sample space is restricted to ravens, we’re guaranteed a raven). So, p(E_{1}|H_{1}) = 1. However, p(E_{1}) is to be determined in the absence of the hypothesis. What’s the probability of a sample raven being black, if we don’t know what colour they’re supposed to be? This depends on where we sampled the raven, so let’s assume it was selected from an __English garden___{2} where birds are one of four colours (for the sake of argument: black, white, brown and “other”, in equal measure). Hence p(E_{1}) = 0.25. So, p(E_{1}|H_{1}) / p(E_{1}) = 4, and our confidence that all ravens are black has increased fourfold as a result of finding a black raven. Of course, setting p(E_{1}) = 0.25 would be inconsistent with holding p(H_{1}) > 0.25, as then we’d end up with the impossible p(H|E) > 1. For subsequent trials, we have the same dilemma as in the coin-tossing case. Our expectation p(E) of a black raven has to rise in line with the credence we give to H_{1} (all ravens are black), otherwise we have inconsistent beliefs.
- For E_{2}, I’ll assume our sample space is “non-black things likely to be found in a garden”. The thing we’ve actually found, E_{2}, is a non-black non-raven. We want to know the probability that it’s not a raven, given that it’s non-black. If H_{1} is true, ie. all ravens are black, then anything that isn’t black can’t be a raven. So, p(E_{2}|H_{1}) = 1. What about p(E_{2})? This is the probability of selecting a non-raven from all the non-black things in my garden. Well, there are probably millions of non-black things in my garden (depending on what counts as a thing), and, even if there are non-black ravens, there aren’t likely to be many of them in my garden, since there are rarely any ravens there at all, at least compared with the number of other things. So, p(E_{2}) ≈ 1. Hence, our evidence of a green leaf has hardly shifted our confidence that all ravens are black at all.
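These two garden-world calculations can be reproduced in a few lines of Python. The p(E_{1}) = 0.25 figure is the essay’s own; the “million non-black things, at most a handful of possible ravens” counts are illustrative assumptions standing in for the essay’s “millions of things, hardly any ravens”:

```python
def confirmation_ratio(p_e_given_h: float, p_e: float) -> float:
    # p(H|E) / p(H) = p(E|H) / p(E); values above 1 mean E confirms H.
    return p_e_given_h / p_e

# E1: a black raven, sampled from garden ravens that (absent the
# hypothesis) could be any of four equally likely colours.
print(confirmation_ratio(1.0, 0.25))  # 4.0: confidence quadruples

# E2: a non-black non-raven. Assume (illustratively) a million non-black
# things in the garden, at most a handful of which could be ravens.
non_black_things = 1_000_000
possible_ravens = 5
p_e2 = (non_black_things - possible_ravens) / non_black_things
print(round(confirmation_ratio(1.0, p_e2), 6))  # 1.000005: barely moves
```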

- This shows the importance of the background evidence. However, the killer blow is supplied by considering possible worlds in which the evidence of a green leaf *would* be strong support for the hypothesis that all ravens are black. Such a world is one in which most things are black and most things are ravens (though it is still theoretically possible for some of the few non-black things to be ravens). We return to our two cases:
- For E_{1}, we again assume our sample space is *ravens*. Again, if H_{1} (all ravens are black) is true, E_{1} (a *black* raven) would be certain. So, p(E_{1}|H_{1}) = 1. However, p(E_{1}) is to be determined in the absence of the hypothesis. What’s the probability of a sample raven being black, in our new black- and raven-dominated world? Let’s suppose 95% of things are black; then it seems reasonable to suppose p(E_{1}) = 0.95. So, p(E_{1}|H_{1}) / p(E_{1}) = 1.00/0.95, and our confidence that all ravens are black has only increased by about 5% as a result of finding a black raven.
- For E_{2}, I’ll assume our sample space is “non-black things”. The thing we’ve actually found, E_{2}, is a non-black non-raven. If H_{1} is true, ie. all ravens are black, then again anything that isn’t black can’t be a raven. So, p(E_{2}|H_{1}) = 1. What about p(E_{2}) this time? Let’s suppose 90% of things are ravens. What we want is the probability of selecting a non-raven from all the non-black things. Well, in this strange world, the colour of the average raven could be anything, so we’ve no reason to think our non-black thing any less likely to be a raven than the frequency of ravens in the general population. So, we have a 90% chance of our non-black thing being a raven, ie. only a 10% chance of it being a non-raven. So, p(E_{2}) = 0.1, and p(E_{2}|H_{1}) / p(E_{2}) = 1.0/0.1 = 10. Hence, our evidence of a green leaf has increased our confidence that all ravens are black by a factor of 10.
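Re-running the same two-line calculation with the inverted world’s assumed frequencies (95% of things black, 90% of things ravens) shows the reversal; a minimal Python sketch:

```python
def confirmation_ratio(p_e_given_h: float, p_e: float) -> float:
    # p(H|E) / p(H) = p(E|H) / p(E); values above 1 mean E confirms H.
    return p_e_given_h / p_e

# E1: a black raven; absent the hypothesis, a sampled raven is black
# with the background frequency of black things, 0.95.
print(round(confirmation_ratio(1.0, 0.95), 3))  # 1.053: a mere ~5% boost

# E2: a non-black non-raven; absent the hypothesis, a non-black thing
# is a non-raven with probability only 0.1 in a 90%-raven world.
print(confirmation_ratio(1.0, 0.1))  # 10.0: a tenfold boost
```

The design point is that nothing in Bayes’ Theorem changed between the two worlds; only the background frequencies feeding p(E) did, which is exactly the essay’s thesis.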



- So, is our questioner correct to say that the lesson of Hempel’s Paradox is that whether observations confirm a hypothesis is never independent of background information? The question is important in that it points out that Hempel’s paradox is not something to be explained and then ignored – that is, solved or dissolved – but is a positive contribution to confirmation theory, showing the importance of taking into account all the information in making inductive inferences.
- It’s not totally clear from the foregoing discussions that the confirmation of a hypothesis is always dependent on background information, but I think we can see that it is. The reason this doesn’t always appear to be the case is that the background is sometimes so taken for granted that we often fail to notice it. It’s only when we think of radically different worlds, like the black world infested by ravens, that the background assumptions are pointed out.
- I think we can tell from the Bayesian approach that background information is *always* important in evaluating evidence. We can only substantially advance the probability of our hypothesis if p(E|H) / p(E) >> 1, and we can only have this if p(E) is low – ie. the evidence is unexpected – or at least unexpected relative to what we might expect on the basis of our hypothesis. Evidence is only unexpected against a background of what *is* expected.

- _{2} Though remembering that our sample space is of ravens, so the garden only contains ravens. However, we imagine them to be coloured like other garden birds.

Date | Length | Title |
---|---|---|
01/08/2017 00:11:31 | 300 | Induction |

Note last updated | Reading List for this Topic | Parent Topic |
---|---|---|
07/08/2020 20:28:15 | None available | None |



Author | Title | Medium | Source | Read? |
---|---|---|---|---|
Earman (John), Ed. | Inference, Explanation and Other Philosophical Frustrations | Book - By Subtopic (via Paper By Subtopic) | Earman (John), Ed. - Inference, Explanation and Other Philosophical Frustrations | 4% |
Goodman (Nelson) | Fact, Fiction and Forecast | Book - Cited | Goodman (Nelson) - Fact, Fiction and Forecast | Yes |
Goodman (Nelson) | The New Riddle of Induction | Paper - Cited | Goodman - Fact, Fiction and Forecast, 4th Edition, 1983, Chapter 3 | Yes |
Hains (Brigid) & Hains (Paul) | Aeon: 2019+ | Paper - Referencing | Hains (Brigid) & Hains (Paul) - Aeon | Yes |
Hains (Brigid) & Hains (Paul) | Aeon: A-B (& General) | Book - Referencing (via Paper Referencing) | Bibliographical details to be supplied | 100% |
Hintikka (Jaakko) | The Concept of Induction in the Light of the Interrogative Approach to Inquiry | Paper - By Subtopic | Earman (John), Ed. - Inference, Explanation and Other Philosophical Frustrations | No |


© Theo Todman, June 2007 - August 2020. Please address any comments on this page to theo@theotodman.com.