Inference

I. Definition

An inference is the process of drawing a conclusion based on evidence. On the basis of some evidence, or a “premise,” you infer a conclusion. For example:

Premise: Weather forecast says 80% chance of thunderstorms
You can infer: It’s a good idea to bring an umbrella

Premise: There are over 40 million volumes in the university library
You can infer: They probably have a copy of Plato’s Republic in circulation

Premise: My throat is sore and my nose is running
You can infer: I have probably caught a cold

Premise: Grapes are poisonous to all dogs
You can infer: Grapes are poisonous for your dog

 

There are also bad inferences: inferences that may appear persuasive but that, on further inspection, turn out to be misleading. For example:

Premise: Weather forecast says 80% chance of thunderstorms
You should not infer: There’s a 20% chance of no rain at all
Because: With an 80% chance of thunderstorms, even if the storms don’t materialize there will probably still be some rain

Premise: There are over 40 million volumes in the university library
You should not infer: I will be able to check out a copy of Plato’s Republic
Because: The Republic is very widely used, and there’s a decent chance it will be checked out or on reserve

Premise: My throat is sore and my nose is running
You should not infer: I should take antibiotics
Because: Antibiotics should only be used when they’re truly needed, and in any case they don’t work on colds, which are caused by viruses

Premise: Grapes are poisonous to all dogs
You should not infer: Dogs cannot eat household fruit
Because: Many other fruits, such as apples and bananas, are perfectly safe for dogs

The strength of your argument depends entirely on two things: the accuracy of your evidence and the strength of your inferences. If your evidence is solid and your inferences are strong, your argument will be convincing.

 

II. Types of Inference

There are two basic types of inference:

a. Deduction (or “deductive inference”) is an inference based on logical certainty. It usually starts from a general principle and then infers something about specific cases.

“Grapes are poisonous to all dogs”

This allows you to infer that grapes are poisonous for your dog, too. If the premise is true then the conclusion has to be true. There’s no other possibility. Notice, however, that this doesn’t really tell you anything new: once you say “grapes are poisonous to all dogs,” you already know that grapes are poisonous for your specific dog. Deduction has the advantage of certainty, but it doesn’t generate new knowledge.
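Spelled out in full, the deduction has a second premise that the example leaves implicit. Laid out as a syllogism (my own spelling-out of the example above), it looks like this:

Premise 1 (general rule): Grapes are poisonous to all dogs.
Premise 2 (specific case): Your pet is a dog.
Conclusion (guaranteed): Grapes are poisonous to your pet.

If both premises are true, there is no possible way for the conclusion to be false; that guarantee is exactly what makes the inference deductive.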

b. Induction (or “inductive inference”) is an inference based on probability. It usually starts from specific information and then infers the more general principle.

“For the last two years, Amanda has woken up at 8am every day”

This allows you to infer that Amanda will probably wake up at 8am tomorrow, too. You would probably be right, and it’s a reasonable inference, but it’s not certain! Tomorrow could be the first day that Amanda decides to sleep in. Despite this uncertainty, however, induction does offer the possibility of predicting future events and creating new knowledge.


 

III. Inference vs. Observation

An inference starts from a premise (like a piece of evidence) and then moves beyond it. But what about when you just see the evidence for yourself? Do you need to make any inferences then?

It might seem like inference and observation are two very different processes — related, of course, but distinct. In fact, though, they’re not so easy to separate.

Example

“I saw Marco walk into the grocery store the other day.”

This is a direct observation. It doesn’t seem to involve any inferences. But if you look with a careful, skeptical eye, you’ll see that it contains many inferences — what did you really see?

“I saw someone who looked like Marco walk into the grocery store the other day.”

It’s entirely possible that you made a mistake! It’s easy to mistake people on the street for people you know, so you can’t be entirely sure that you saw what you think you saw. The person could even have been a robot. Or you could have hallucinated the whole thing!

Of course, this is not the sort of thing you really need to worry about — 99% of the time, you’re correct about what you’re seeing. The point is just that observations are never 100% reliable, and always involve a certain amount of inference.

This may sound like abstract quibbling — after all, we rely on our senses in everyday life and usually it works out fine. Shouldn’t that be good enough for philosophical arguments?

There’s a famous story in philosophy that starts out that way:

A great philosopher was speaking to a room full of his colleagues, trying to get them to bring their heads out of the clouds and realize that observation is reliable enough for most practical purposes. To illustrate his point, he looked above him and said “Look, I see the window above me! I see the panes of glass, and I see blue sky through them! There’s no need for me to be skeptical about things that I can see with my own eyes!”

But in fact, the window was a highly realistic painting.

The point is, don’t be overconfident in direct observation — your senses are not always reliable, and even when you think you’re making a direct observation, you’re really making inferences, which may or may not be correct.


 

IV. Quotes about Inference

Quote 1

The aim of scientific thought, then, is to apply past experience to new circumstances; the instrument is an observed uniformity in the course of events…it enables us to infer things that we have not seen from things that we have seen. (William Kingdon Clifford)

The philosopher William Kingdon Clifford was highly influential in British thinking on science, religion, and philosophy. (If you remember geometric algebra from math class, you can thank Clifford for it!) In this quote, he points out what many philosophers of science have observed — that science is based almost entirely on inductive inferences, with very few deductions at all. Notice that science, in Clifford’s view, “enables us to infer…from things that we have seen,” and compare that to what we learned about induction in section II.

Quote 2

Inductive inference is the only process known to us by which essentially new knowledge comes into the world. (Sir Ronald Aylmer Fisher)

This quote comes from the mathematician and biologist Sir Ronald Fisher, who was arguably the most influential evolutionary biologist since Charles Darwin. He echoes Clifford’s point in more modern language, demonstrating that this line of thought has been consistently prominent in science throughout the last couple of centuries. Again, the point here is that deduction doesn’t teach us anything new, but only draws attention to some logical consequences of our knowledge. Induction, on the other hand, holds out the promise of new knowledge.

 

V. The History and Importance of Inference

As we saw in section III, inference is an inherent part of observation. That means it’s as old as humanity itself — as long as our ancestors were observing their world, they were making inferences about it. If they saw horse tracks in the mud, they could infer that a horse had passed that way. If one of their siblings made a disgusted face after eating some berries, they could infer that the berries didn’t taste very good. Indeed, inference is even older than humanity — inferences are made by animals, plants, single-celled organisms, and anything else with a sensory system. Of course, only humans and other animals with brains are capable of making conscious inferences or choosing to make one inference rather than another. And humans are undoubtedly the most sophisticated of all animals when it comes to this particular skill.

Because inference is such a natural part of how living things interact with their world, it’s no surprise that formal inference is one of the oldest and most important ideas in human philosophy. All three major philosophical traditions of the ancient world — India, China, and Greece — developed their own systems of formal inference and emphasized the importance of making good inferences.

In the information age, inferences have become more important than ever for science and technology. That’s because computers are essentially inference-drawing machines: the computer moves logically from one command to the next, “inferring” outputs from various inputs and programming.

Computers are exceptionally good at deduction, but not very good at induction — the opposite of human beings! It’s easy to give a computer a set of general rules and have it apply those rules to a given data set.

Example 1

We can give a computer the rules of arithmetic and have it apply them to the problem 347*12+9482/4
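Here is a minimal sketch (my own illustration, not part of the original example) of a program applying those fixed rules, multiplication and division before addition, to this exact problem:

    # The computer "deduces" the answer by mechanically applying the general
    # rules of arithmetic (multiply and divide before adding) to the specific case.
    result = 347 * 12 + 9482 / 4
    print(result)  # 6534.5, since 347*12 = 4164 and 9482/4 = 2370.5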

This is a problem that would take all but the most extraordinarily gifted humans a long time to solve. Induction, on the other hand, is easy for humans but hard for computers.

Example 2

Think back to how you learned the rules for what a “B” looks like. You looked at a bunch of B’s, in various fonts, colors, and shapes, and inductively inferred the general rules.

This turns out to be an extremely difficult task for computers. That’s why, when you visit certain websites, you have to look at a string of distorted letters and numbers and type them in to prove that you’re not a robot — this task is fairly easy for humans, but almost impossible for computers.
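Here is a toy sketch of the contrast (entirely my own illustration, with made-up 5x5 letter bitmaps). The program below is never given explicit rules for what a “B” looks like; it can only compare a new shape against labelled examples it has already seen and guess the closest match:

    # A toy version of "induction by example": no rules for "B" are given,
    # only labelled examples to compare against. (Bitmaps are made up.)
    EXAMPLES = {
        "B": ["XXXX.",
              "X...X",
              "XXXX.",
              "X...X",
              "XXXX."],
        "D": ["XXXX.",
              "X...X",
              "X...X",
              "X...X",
              "XXXX."],
    }

    def distance(a, b):
        # Count the cells where two bitmaps disagree.
        return sum(ca != cb for ra, rb in zip(a, b) for ca, cb in zip(ra, rb))

    def classify(bitmap):
        # Guess the letter whose stored example is closest to the input.
        return min(EXAMPLES, key=lambda letter: distance(EXAMPLES[letter], bitmap))

    # A slightly smudged "B": no stored example matches it exactly, so the
    # answer is an inductive guess, not a certainty.
    smudged_b = ["XXX..",
                 "X...X",
                 "XXXX.",
                 "X...X",
                 "XXXX."]
    print(classify(smudged_b))  # prints "B"

Against the heavily distorted letters of a real test, a simple comparison like this breaks down quickly, which is exactly the gap such tests were designed to exploit.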

 

VI. Inference in Popular Culture

Example 1

“Here is a gentleman of the medical type, but with the air of a military man. Clearly an army doctor, then. He has just come from the tropics, for his face is dark, and that is not the natural tint of his skin, for his wrists are fair. He has undergone hardship and sickness, as his haggard face says clearly. His left arm has been injured: He holds it in a stiff and unnatural manner. Where in the tropics could an English army doctor have seen much hardship and got his arm wounded? Clearly in Afghanistan.” (Sherlock Holmes)

In the Sherlock Holmes stories (and the TV show Sherlock), the great detective is known for his “brilliant deductions.” He even has a book/website called “The Science of Deduction.” But take a look at the reasoning here. If you read closely, you’ll notice that Holmes makes lots and lots of inferences about who Watson is and where he came from. But none of them are deductions! They are all based on specific evidence (not general laws), and they are all only probably true, rather than logically airtight. Take the first inference: from the premise that Watson is a gentleman of the medical type with the air of a military man, Holmes infers that he must be an army doctor. But that’s only probably true; there are other logical possibilities, so it can’t be a deduction.

Example 2

[SPOILER ALERT]

Harry Potter and the Prisoner of Azkaban has a surprising plot twist near the end: early in the book, we learn that the sinister Sirius Black has escaped from prison and is trying to find Harry. From all the evidence in the story, we assume that Black is a murderer bent on killing Harry. But it turns out that these inferences are faulty — the real murderer is Peter Pettigrew, long believed to be dead.

This is a common technique in novels, and is called a “red herring.” The author presents evidence that suggests a certain inference very strongly; but in the end, the evidence turns out to have been misleading, and this sets the reader up for a surprise ending.

 

Quiz

1.
My brother usually gets off work around 6, so he’ll probably get off around 6 today.
This is an example of:

a.

b.

c.

d.

2.
Which of these is NOT a form of inference discussed in the article?

a.

b.

c.

d.

3.
____ is easy for humans, but difficult for computers

a.

b.

c.

d.

4.
What is the relationship between observation and inference?

a.

b.

c.

d.