Sunday, May 18, 2014

Critical Thinking Short Cut

Introduction
Critical thinking sounds fancy but it's something most of us do every day.  In fact, most of us are quite good at it...so long as we are properly motivated.  Unfortunately, we absolutely suck when the motivation is wrong--and I'm not talking about money.

What do I mean by all this talk of correct and incorrect motivation?  What I mean is that, when someone is trying to argue against our own cherished position and beliefs, most of us are very good at pointing out the problems with our opponent's arguments.  We are primarily motivated to want to be 'right' and to protect our most cherished beliefs from criticism and so, in such cases, our critical thinking skills are generally quite good.

We are absolutely horrible critical thinkers when an argument or evidence favors our existing cherished beliefs. Why? Because, as I said before, we are primarily motivated to be right and to protect our beliefs, thus, we will usually uncritically accept any argument or evidence that supports our position.  Any argument or evidence that serves our purpose is, ipso facto, good.  Consider:  When was the last time you were critical of an argument or evidence that supported your own position on an issue?

What's the moral of the story here?  Most of us already have an intuitive grasp of critical thinking; what messes it up is our motivation to be right and to protect pre-existing conclusions.  So, if we want to be good critical thinkers, we need to manipulate our motives, or at the very least be aware of their capacity for distortion.

What is Critical Thinking?
To understand what good critical thinking is, it helps to contrast it with poor critical thinking.  The way most people reason is that they look at the conclusion of an argument or the conclusion some evidence implies and assess whether that conclusion agrees with their pre-existing beliefs and positions.  If the argument/evidence agrees with or supports their pre-existing position then the argument/evidence is considered good.  If the converse is true, then the argument/evidence is considered defective.   To summarize the problem: Most people focus on whether they agree or disagree with a conclusion rather than on the quality of the argument/evidence.  This approach is not good critical thinking.

In critical thinking we don't care two hoots whether we agree or disagree with the conclusion:  all we are interested in is whether the argument or the evidence is good support for the conclusion.  Critical thinking is mainly about two things:  (a) standards of evidence (i.e., what constitutes good evidence?) and (b) the logical relationship and relevance of an argument's premises to its conclusion (i.e., does the conclusion follow from the premises?).  That's all.  The end.  Good night.  (For convenience, I'll refer to both of these aspects as "quality of evidence/arguments".)

The Secret that Professors Don't Want You to Know:
Good critical thinking is all about focusing on the quality of the arguments/evidence relative to conclusions but unfortunately our brains are hardwired to look at the conclusions relative to our pre-existing beliefs. Because we should really focus on the quality of arguments/evidence, we need a trick to overcome the tendency to focus on the conclusion.

The Ol' Switcheroo Version 1:  (a) If an argument or evidence supports your position, ask yourself if you'd find the argument/evidence compelling if the same quality of evidence/justification supported the opposite conclusion.  (b) If an argument/evidence is against your current position, ask yourself if you'd find the quality of evidence/justification compelling if it supported your position.


For example (a):  Suppose you think vaccines cause autism and to support your conclusion you cite the fact that your nephew has autism and he was vaccinated; therefore, vaccines cause autism.  To apply critical thinking special secret #1 we construct a similar argument but for the opposite conclusion:  E.g., I have a nephew and he was vaccinated and isn't autistic; therefore, vaccines don't cause autism.

If your original position was that vaccines cause autism, would this second argument cause you to change your position?  Nope, I doubt it would, and for good reason: a single case doesn't tell us anything about causal relations.  Notice that applying secret thinking sauce #1 allows us to focus on the quality of the evidence rather than on whether we like the conclusion.  So, if the second argument fails as good support for the conclusion, so does the first, even though it supports your position.  Boom! goes the dynamite.

Let's try another example (b):  Suppose you are an anthropogenic climate change denier.  Someone argues against your cherished beliefs by saying 97% of climate scientists agree that human activity is responsible for climate change.  Your natural reaction is to discount this as an insignificant argument because it contradicts your pre-existing position.  Now apply critical thinking secret sauce #1 and ask yourself: If 97% of climate scientists denied that human activity has any effect on the climate, would you consider this good support for your position?

Let's try a moral example:  In the "homosexuality is bad vs homosexuality isn't bad" debate, both sides often make appeals to what is or isn't natural behavior as justification for their position.  Let's apply the critical thinking secret sauce to both sides to show why both justifications are weak:

"Homosexuality is morally wrong because it's unnatural."  The justification here is that moral wrongness is a function of whether something is unnatural.    Now, applying the ol' switcheroo, we ask the person who takes this position: Supposing homosexuality were natural, would you then agree that homosexuality is morally permissible? They will likely answer, "no" thereby indicating that naturalness is a poor justification for moral permissibility. 

But it isn't just evangelical moralists who use poor justifications for their claims.  Let's apply the same test to those who argue that homosexuality is morally permissible because it is natural for a certain percentage of the population to be gay (usually some sort of genetic argument is given).  Let's try applying the ol' switcheroo:

Suppose scientists discover that there is no "gay gene" and that homosexual behavior is purely a matter of some combination of socialization and personal choice.  If this were the case, would proponents of the argument then say "welp, I guess homosexuality is morally wrong after all"?  Probably not.  And the reason is that whether a behavior is natural or not tells us nothing about that behavior's moral status.

Whatever one's opinion on the moral status of homosexuality, the ol' switcheroo shows us that neither position can be supported through appeals to "naturalness".  That is, the quality of that particular justification is weak regardless of which conclusion we are sympathetic to.

The Ol' Switcheroo Version 2:  Sometimes issues are such that the simple switcheroo won't work too well in helping to focus our minds on the quality of arguments/evidence; so, we need a variation of the switcheroo to deal with those situations.  Here it is: (a*) If an argument/evidence supports your pre-existing position, ask yourself if a similar argument or evidence would be convincing to you on a different issue, one where you are opposed.  (b*)  If an argument/evidence is against your cherished beliefs, ask yourself if a similar argument would be convincing on an issue where you are a proponent.

Basically, in this version we're trying to generalize the principle that is being used to justify a conclusion and then apply it to other cases to see whether the principle is being applied consistently or whether (as is often the case) it is invoked when it supports a conclusion we like but denied when it supports a conclusion we dislike.

Example (a*):  Suppose you think homeopathy works and that you are generally skeptical of conventional medicine.  To support homeopathy you cite a particular scientific study showing that 70% of subjects no longer had condition X after homeopathic treatment.  The study has a sample size of 10 and there's no control group.  Ask yourself: would such a study convince you that a new conventional medication was effective for condition X?

Of course not.  A sample size of ten is way too small to conclude anything of consequence, and the lack of a control group makes a study, especially of this size, essentially worthless.  If the evidence in the second case wouldn't be good support for the conclusion, then the same applies to the first case.  Critical thinking secret v.2 allows you to see why the evidence you've provided isn't good.
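To put a number on the sample-size problem, here's a minimal sketch of the uncertainty around that "7 out of 10" result.  (The Wilson score interval is my choice of method; the hypothetical study above doesn't specify one.)

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# 7 of 10 subjects improved; how precise is that "70% effective" claim?
low, high = wilson_ci(7, 10)
print(f"95% CI: {low:.0%} to {high:.0%}")  # roughly 40% to 89%
# An interval that wide is consistent with a coin flip, which is why a
# ten-subject, uncontrolled study can't establish much of anything.
```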

Example (b*):  Suppose again your pre-existing position is that global climate change is not caused by human activity.  Someone points out that 97% of climate scientists think the opposite: that global climate change is attributable to human activity.  Now apply critical thinking secret sauce v.2:  pick an issue where you have a pro position, or even one where you don't have a position: suppose it's the claim that unrestricted gun ownership is consistent with the 2nd Amendment.  Ask yourself: if 97% of all constitutional experts agreed that unrestricted gun ownership is consistent with the 2nd Amendment, would you consider this a good reason in favor of your position?  If yes, then you have to allow that it's a good reason in the first case too.

Monday, May 5, 2014

Lecture 16A: Slippery Slope

Business
1.  Go over homework:  Be sure to use standard form for analogies.
     a.  Animal studies:  http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.1000245
     b.  Taxation vs Slavery  John Locke
     c.  Wallet vs Cellphone

Slippery Slope Arguments






Papa Bear on Gay Marriage



"If you don't believe in (a) god, then you have no basis for morality, if you have no basis for morality then there's nothing stopping you from killing and stealing."

Formal Structure:
P1: If A then B.
P2: If B, then C.
P3: If C then D.
P4: If D, then E.
P5: If E, then F.
P6:  F is a good thing
P7: A is true.
C:  Therefore, we should do A. (Positive conclusion.)

P6*:  F is a bad thing.
C*:  Therefore, we shouldn't do A. (Negative conclusion.)

(We can also condense P1-P5 into a single compound premise: if A then B, if B then C, if C then D, ...)

Evaluating Slippery Slope Arguments:
1.  How strong and/or true are the causal connections in the chain of reasoning?
2.  Generally, the longer the causal chain, the less likely the argument is to be strong.
3.  Is the conclusion genuinely desirable/undesirable?



Lesson 16A: Slippery Slope Arguments

Introduction
In the previous posts we looked at argument schemes that are typically employed in factual matters: generalizations, polls, general causal reasoning, particular causal reasoning, and the argument from ignorance.  In this next section we'll look at common argument schemes used in normative (i.e., having to do with values) arguments.  Check.  it. aus...

Slippery-Slope Argument
A slippery slope argument is one where it is proposed that an initial action will initiate a causal cascade ending with a state of affairs that is universally desirable or undesirable.  The implication is that we should (or should not) do the initial action/policy because the cascade of events will necessarily occur.  

A contemporary example of a negative version of the slippery slope argument comes from arguments against gay marriage equality.  Some opponents argue that if same sex couples are allowed to marry, then there will eventually be no good reasons against people marrying animals and so society will have to permit this too. 

(Yes, people actually make this argument...can I marry my horse?)
Here's a good clip where the slippery slope argument is mentioned explicitly:  Video of O'Reilly Factor slippery slope argument. Start at 2:00.

A positive version of the slippery slope argument might be something like a Libertarian argument (over-simplified version):  We should treat the principle of self-ownership as the primary governing principle; if we do, then taxation and government will be removed; then the market will cease to be distorted and people will act in their own self-interest; people acting in their own self-interest will pull themselves up by their own bootstraps; and a society of self-pulled-up people will have relatively few social problems, thus eliminating many of the existing ones. (Note:  We could do an over-simplified version of just about any political philosophy and show it to be weak.)




So, why are these arguments not very strong?  To figure it out, let's look at the underlying structure of a slippery slope argument.  Recall that a slippery slope argument is one where it is supposed that one initial event or policy will set off a necessary, unbroken sequence of causal events.

If we formalize it, it will look like this:  

P1:  If A then B.  
P2:  If B, then C.  
P3:  If C then D.  
P4:  If D, then E.  
P5:  If E, then F. 
P6:  A is true.
P7:  F is a good/bad thing.
C:   F is a good thing; therefore, we should do A. (Positive conclusion.)
C*:  F is a bad thing; therefore, we shouldn't do A. (Negative conclusion.)

(We can also condense P1-P5 into a single compound premise: if A then B, if B then C, if C then D, ...)

If we think back to the lecture/post on principles of general causal reasoning, we will recall that it takes quite a bit of evidence to establish that even a single causal claim is good (e.g., if A then B).  As you might imagine, the longer the causal chain gets, the more difficult it becomes to ascertain that the links along the way are necessarily true and not open to other possible outcomes.
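Here's a minimal sketch of why long chains are fragile.  (The 0.9 figure is an illustrative assumption of mine, and the links are treated as independent for simplicity.)

```python
# Even if each causal link in a slippery slope is individually plausible,
# the probability that the whole chain holds decays multiplicatively.
link_probabilities = [0.9, 0.9, 0.9, 0.9, 0.9]  # P1..P5: if A then B, ..., if E then F

chain_holds = 1.0
for i, p in enumerate(link_probabilities, start=1):
    chain_holds *= p
    print(f"After link {i}: chain holds with probability {chain_holds:.2f}")

# After five links at 0.9 each, the full chain holds with probability ~0.59,
# even though every individual link looked "90% plausible".
```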

Returning to our examples: in the first case, one of the causal elements has to do with the equivalence between reasons against gay marriage and reasons against animal marriage.  It doesn't take much imagination to come up with arguments for why the two types of prohibitive reasons aren't the same (capacity for mutual informed consent, for starters...).  Showing that there is a relevant distinction between the types of reasons causes a break in the causal chain, thereby rendering the argument weak.

Our over-simplified version of the Libertarian argument relies on a long causal chain that begins with the primacy of the self-ownership principle and reduced taxation and government and ends with a decrease in prevailing social problems.  Along the way there are many suspect causal claims that individually might not stand up to much scrutiny--especially since many of the claims are hotly debated by experts in the respective fields.  Since a chain is only as strong as its weakest link, this has a detrimental effect on the overall strength (logical force) of the conclusion.

Upshot:  So, what's the overall status of slippery slope arguments?  Just like many argument schemes, they can be strong or weak depending on their constituent parts.  In the case of slippery slope arguments, a strong one will have highly plausible causal claims all linked together, culminating in a glorious, well-supported conclusion about what we should or should not do.  Conversely, a weak slippery slope argument will have one or more weak causal claims in its implied premises.

Wednesday, April 30, 2014

Sample B of Option 1

Introduction
Welcome to Part 2 of an investigation of vaccine herd immunity through the concepts of critical thinking.  The purpose of these blog entries is two-fold.  One is to explore the controversy over the legitimacy of herd immunity and the second is to learn central concepts in critical thinking.  Essentially, these posts are an exercise in applied critical thinking.  

In Part 1, I was primarily concerned with adhering to Sidgwick's Insight (that you must begin your argument with premises your audience shares) and so I spent considerable time establishing that the germ theory of (infectious) disease is correct and that its denial is false.  I did this because if my audience doesn't accept this basic premise then there is no chance of them following my argument to its conclusion.  If you have read Part 1 and deny that microorganisms cause infectious diseases, please explain the grounds for your position in the comments section below and I will do my best to address it.

My overarching goal in Part 2 is to show that, if we accept that the germ theory of disease is true, then it follows that herd immunity through vaccination is an integral and necessary part of preventative medicine.  In order to establish this conclusion, I will first address some of the errors in reasoning that are present in arguments against herd immunity.  Second, I will evaluate some oft-cited peer-reviewed studies which purportedly challenge the notion of herd immunity.  Throughout, I will appeal to fundamental concepts of critical thinking and principles of scientific reasoning.

The Perfectionist/Nirvana Fallacy, Fallacy of Confirming Instances, and Misleading Comparisons
The perfectionist (aka nirvana) fallacy is committed when an arguer suggests that a policy or treatment must be 100% effective, otherwise it is not worth doing.  As I'm sure you all know from my prior post on Ami's 5 Commandments of Critical Thinking, risk and effectiveness are not absolute values: they must always be measured relative to alternatives or relative to no intervention at all.  Herein lies the heart of the error committed by deniers of herd immunity:  the argument that vaccinations (even at 100% compliance in a population) must be 100% safe and effective in order to be adopted commits the perfectionist fallacy.  Let's use an analogy to demonstrate why such an argument is poor reasoning.

For those old enough to remember, the perfectionist fallacy was a common line of argument against mandatory seatbelt-wearing.  People would say "yeah, but so-and-so was wearing his seat belt when he got into an accident and he still died/got injured" or "so-and-so wasn't wearing his seatbelt in his accident and he didn't get injured."  I think the seat belt analogy is a good one:

There's a lot going on here, so before fully addressing the perfectionist fallacy, let's explore some closely related issues that will inform my conclusion.  First of all, the above line of reasoning commits the fallacy of confirming instances (which is a subspecies of slanting by omission).  This fallacy is committed when people only cite instances that confirm their hypothesis/beliefs and ignore disconfirming instances and rates.

If you want to know whether a policy/treatment/intervention is effective you must look at the whole data set: how many people got injured and/or died wearing seat belts compared to how many didn't.  For example, suppose 25 000 people got into an accident over the last year and 5 000 of those who died were wearing seat belts.  If someone were to say "ah ha! 5 000 people who got into accidents wore seat belts, therefore seat belts don't work" they would be committing the fallacy of confirming instances.  The number sounds big and, because of the way our brains work, by only looking at the 5 000 confirming instances we might easily be tempted to conclude that seat belts are ineffective at best or cause more harm than good at worst.

But we aren't done: we need to look at the entire data set.  Suppose it turns out that the remaining 20 000 people who were in accidents weren't wearing seat belts, and they all died.  Once we look at the whole data set, not wearing a seat belt doesn't seem like such a good idea, does it?  (Let's assume that in both groups the types of accidents were relatively the same.)

Now complete the analogy with vaccines.  Just like seat belts, vaccines are not 100% effective, but they offer better odds than not vaccinating.  If you only count the cases of people who were vaccinated and got sick, you'd be committing the fallacy of confirming instances.  What you also need to know is how many unvaccinated people got a vaccine-preventable disease, and then you need to compare the two numbers.

But wait! There's more! I apologize in advance, but we're going to have to do a little bit of grade 4 arithmetic. The absolute numbers give us one piece of the picture, but not all of it. We also need to know something about rates.  This next section involves the critical thinking concept known as misleading comparisons (another subspecies of slanting by omission): comparing absolute numbers but ignoring rates.

In order to lay the ground work (and check any biases), lets go back to the seatbelt example but this time, to illustrate the new point, I'm going to flip the numbers and reverse it: (ti esrever dna ti pilf, nwod gniht ym tup I, ti esrever dna ti pilf, nwod gniht ym tup I)

Suppose in this new scenario there were 25 000 fatal car accidents in the past year:

  • 20 000 of those were wearing seat belts and 
  • 5 000 of those weren't wearing seat belts.

Well, well, well.  It doesn't seem like seat belts are such a good idea any more...just look at the difference in numbers! (Oh, snap!)

This scenario is just like with vaccines.  We often see credible reports that the number of vaccinated people who end up infected far exceeds the number of non-vaccinated people who got infected.  Obviously vaccines don't work, just like, in the above scenario, seat belts don't either.

As you might have guessed, there is a very basic math error going on here.  Can you spot it?  Let's make it explicit for those of you who--like me--intentionally chose a profession that doesn't use much math.

Suppose that the total population we are evaluating is 500 000 people.  Of those people, 90% (450 000) wear a seatbelt when driving and 10% (50 000) don't.  Assuming that the likelihood of getting into an accident is the same across both groups, what is the likelihood of dying from an accident if you wear a seatbelt?  

  • 20 000 people who wore a seat belt and died in an accident / 450 000 people who wear seat belts = 4.44%
What is the likelihood of you dying from an accident if you don't wear a seatbelt?

  • 5 000 people who didn't wear a seat belt and died in an accident / 50 000 people who don't wear seat belts = 10%

As you can see, the absolute numbers don't tell the whole story.  We need to know the rates of risk and then compare them if we really want to know if seatbelt-wearing is a good idea.  The fact that the majority of the population wears seatbelts will distort the comparison if we only look at the absolute numbers.

The percentages measure the rates of risk (i.e., probability of infection/death).  If I wear a seat belt, there is a 4.44% chance that I could die in an accident.  If I don't wear a seat belt, there is a 10% chance I could die in an accident.  If you could improve your odds of not dying by about 6 percentage points (cutting your risk by more than half), would you do it?  Would you do it for your child?  I would.  What would you think about a parent that didn't do this for their child?  In fact, with vaccines the disparity in rates between vaccinated and unvaccinated is often much greater than in my seat belt example.  For example, unvaccinated children are 35x more likely than vaccinated children to get measles, and they have a 22.8-fold increased risk of pertussis vs vaccinated children.
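If you'd rather let a computer do the grade 4 arithmetic, here's a minimal sketch of the comparison, using the hypothetical numbers from this post (not real accident data):

```python
# Absolute counts vs rates: the post's hypothetical seat belt numbers.
belted, unbelted = 450_000, 50_000             # drivers in each group
belted_deaths, unbelted_deaths = 20_000, 5_000

risk_belted = belted_deaths / belted           # deaths per belted driver
risk_unbelted = unbelted_deaths / unbelted     # deaths per unbelted driver

print(f"Risk of dying, belted:   {risk_belted:.2%}")    # 4.44%
print(f"Risk of dying, unbelted: {risk_unbelted:.2%}")  # 10.00%
print(f"Relative risk (unbelted vs belted): {risk_unbelted / risk_belted:.2f}x")
# The absolute counts favor the unbelted group (5 000 vs 20 000 deaths),
# but the rates show the unbelted are 2.25x more likely to die.
```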

As it so happens, the vaccination compliance rate in most parts of the US is somewhere in the mid-to-upper 90-percent range, so of course if we only compare absolute numbers it's going to look like people who are vaccinated are more prone to infection than the non-vaccinated.  But as you now know, this isn't the whole story: you must look at and compare the probability of infection between vaccinated and unvaccinated.  Don't be fooled by misleading comparisons!

Back to Reality, Oh There Goes Gravity! Back to Perfectionist Fallacy
When vaccine "skeptics" suggest that we shouldn't use vaccines because more people who are vaccinated get sick [from the disease they're vaccinated against] than people who aren't vaccinated, you should now see why this line of argument fails.  What matters is relative risk between vaccinated and unvaccinated.  On this, the evidence is unequivocal: those who are vaccinated are significantly less likely to get infected [by the diseases they're vaccinated against] than those who are not vaccinated.

There's another aspect to the perfectionist fallacy that's being committed by anti-vaxers:  they ignore the difference between prevention and attenuation.  Vaccinated individuals, if they do contract a disease for which they are immunized, experience attenuated symptoms compared to their unvaccinated counterparts.   Again, it ain't perfect but it's better than not being vaccinated.


Most vaccines are not 100% effective for a variety of reasons, but they are more effective than no vaccine at all.  To claim that vaccine producers and proponents say otherwise is to commit the straw man fallacy.  To infer that vaccines aren't worth getting because they aren't 100% safe and effective is to commit the perfectionist fallacy.  Either way, you're committing a fallacy.

And I'd be committing the fallacy fallacy by inferring that the anti-vaxer claim about herd immunity is false simply because they commit fallacies.  Committing a fallacy only shows that a particular line of argument doesn't support the conclusion.  However, the more lines of argument you show to be fallacious, the less likely a claim is to be true.  Fallacy-talk aside, what we really need to look at is the evidence.

The Studies that "Show" Herd Immunity is a Myth
Anti-vaxers luvz to kick and scream about how you can't trust any scientific studies on vaccines cuz big Pharma has paid off every single medical researcher and every national and international health organization in the world that publishes in peer-reviewed journals.  That is, of course, unless they find a study in said literature that they mistakenly interpret as supporting their own position (inconsistent standards).  Then, all of a sudden, those very same journals that used to be pharma shills magically turn into the One True Source of Knowledge.  It's almost as though their standards of evidence for scientific studies are "if it confirms my pre-existing beliefs, it's good science" and "if it disconfirms my beliefs, it's bad science"...

Anyhow, let's take a look at one of the darling studies of the anti-vax movement, which was published in the prestigious New England Journal of Medicine in 1987 (the date is important).  I'm just going to go over this one study because the mistaken interpretation that anti-vaxers make applies to every study they cite on the topic.

First of all, why do anti-vaxers love this study so much? Well, just look at the title:

Measles Outbreak in a Fully Immunized Secondary-School Population


Ah! This scientifically proves that vaccines don't work and herd immunity is a big pharma conspiracy!  Obviously, we needn't even read the abstract.  The title of the study is all we need to know.

Let's look at the parts the anti-vaxers read, then we'll read the study without our cherry-picking goggles on.  Ready?  Here is the anti-vax reading:

"We conclude that outbreaks of measles can occur in secondary schools, even when more than 99 percent of the students have been vaccinated and more than 95 percent are immune."

OMG! The anti-vaxers are right!  Herd immunity is a pharma lie!  It doesn't work!  (Perfectionist fallacy.)

Actually, we don't even need to read the study to see why the anti-vaxers are mis-extrapolating from it.  Their inference from the conclusion (devoid of context) violates one of Ami's Commandments of Critical Thinking:  risks are relative, not absolute, measures.

So, yes, some of the vaccinated population got measles (14/1806 = 0.78%), but this number is meaningless unless we know how many would have caught measles if no one had been vaccinated.  Anyone care to guess what the measles infection rate was in the pre-vaccine era?  20%?  30%?  Keep going...it's 90%!

Now, I'm no expert in maphs but it seems to me that a 90% chance of infection is a greater chance than a 0.78% chance of infection.  Uh, herd immunity doesn't work?  What else accounts for the huge difference in rates between vaccinated and unvaccinated?
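For the "maphs"-averse, here's that comparison as a quick computation (the 0.78% figure is from the study; the 90% pre-vaccine attack rate is the figure cited above):

```python
# Comparing the study's vaccinated-population attack rate with the
# pre-vaccine era attack rate cited in the post.
vaccinated_attack_rate = 14 / 1806   # infected students / students surveyed
pre_vaccine_attack_rate = 0.90       # ~90% infection rate before the vaccine

print(f"Attack rate in the (mostly) vaccinated school: {vaccinated_attack_rate:.2%}")   # 0.78%
print(f"Pre-vaccine era attack rate:                   {pre_vaccine_attack_rate:.0%}")  # 90%
print(f"Ratio: {pre_vaccine_attack_rate / vaccinated_attack_rate:.0f}x")                # ~116x
```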

Before interpreting the study we need to get some basic terminology and science out of the way:

  • Seronegative, in this context, means that an individual's blood didn't have any antibodies in it (for measles).
  • Seropositive...meh, you can figure this out.
  • How vaccines are supposed to work (cartoon version):  The vaccine introduces an antigen (foreign body) which your body responds to by producing antibodies.  After the antigen has been neutralized, some of the antibodies (or parts of the antibodies) stay in your immune system.  When you come into contact with the actual virus or bacteria, your body will already have antibodies available to fight it.  Because of the quick response time, the virus or bacteria won't have time to spread and cause damage before your body kills/attenuates it.
  • Some people don't produce antibodies in response to some vaccines.  These are the people who don't develop immunity.  If they don't develop the antibodies, they are seronegative.  If they do, they are seropositive. 

Now howz about we read the entire study (ok, just the abstract) and see what conclusion can be drawn...Here's the abstract (it's all we really need):

An outbreak of measles occurred among adolescents in Corpus Christi, Texas, in the spring of 1985, even though vaccination requirements for school attendance had been thoroughly enforced. Serum samples from 1806 students at two secondary schools were obtained eight days after the onset of the first case. Only 4.1 percent of these students (74 of 1806) lacked detectable antibody to measles according to enzyme-linked immunosorbent assay, and more than 99 percent had records of vaccination with live measles vaccine. Stratified analysis showed that the number of doses of vaccine received was the most important predictor of antibody response. Ninety-five percent confidence intervals of seronegative rates were 0 to 3.3 percent for students who had received two prior doses of vaccine, as compared with 3.6 to 6.8 percent for students who had received only a single dose. After the survey, none of the 1732 seropositive students contracted measles. Fourteen of 74 seronegative students, all of whom had been vaccinated, contracted measles. In addition, three seronegative students seroconverted without experiencing any symptoms.

Things to notice:
1) Despite the records showing that (almost) 100% of the students had been immunized, 74/1806 = 4.1% of the students were seronegative (i.e., no measles antibodies detected).  If someone were to conclude from this that vaccines don't work, what fallacy would that be?  (You should know this one by now.)  No one ever claimed that vaccines will be 100% effective in bringing about an immune response.  A 95.9% response rate is nothing to sneeze at.

2) Of the students who had only had a single-dose measles shot, 3.6% to 6.8% were seronegative.  It's not in the abstract, but the higher rate corresponded to students who'd had the single shot within their first year of life; the lower rate corresponded to students who'd had it after their first year of life.  This pattern is consistent with other studies on the relationship between antibody presence and the age at which the measles shot was given.  Should we conclude from this that the measles vaccine doesn't work?  Nope.  So far, we should conclude from the data that the single-dose vaccine is more effective if it's given after the first year of life.  Also, a 6.8% failure rate is better than a 90% failure rate.  (But a 90% failure is natural!)

3) Of the students who'd received two doses, 0-3.3% were seronegative.  Consistent with the above data, the 3.3% figure within the 2-shot group corresponded to those who had their first shot before the age of one.  Despite this, 3.3% is still lower than either of the single-dose groups.  Also, antibodies were present in 99% of those in the 2-shot group who'd had their first shot after the age of one.

4)  None of the seropositive students contracted measles.  No explanation needed (I hope).
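Point 4 deserves to be quantified anyway; here's a quick computation using only the numbers in the abstract:

```python
# Attack rates by serostatus, straight from the abstract's numbers.
seropositive, seronegative = 1732, 74
cases_pos, cases_neg = 0, 14

print(f"Attack rate, seropositive: {cases_pos / seropositive:.1%}")  # 0.0%
print(f"Attack rate, seronegative: {cases_neg / seronegative:.1%}")  # 18.9%
# Every single case occurred among the 4.1% who failed to seroconvert;
# the antibodies the vaccine is supposed to produce were perfectly
# protective in this outbreak.
```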

So, what is the conclusion here?  
Is the conclusion that vaccines don't work?  Nope.  The conclusion is that, for the measles vaccine, immunity increases if you give two shots rather than one, and that the first shot should be given after the first year of life.

And guess what?  Remember how way back in the beginning of this article I said the date of the study was important?  Guess why?  Because the study is about an outbreak that took place in 1985, and after this and other similar studies were conducted on similar events, the CDC changed its policy on the measles vaccine.  Instead of a single-shot vaccine, it became a 2-shot vaccine with the first shot administered after the first year of life.  This, of course, is the correct conclusion from the data.  Not that vaccines don't work.

Guess what happened after the new vaccine schedule was introduced?  Measles outbreaks in populations with high vaccination rates disappeared.  

Here's a graphic of the distribution of vaccinated vs unvaccinated for recent outbreaks of measles:
What conclusion follows from the data?

Of course, this doesn't stop anti-vaxers from citing lots of "peer-reviewed studies in prestigious medical journals" about measles outbreaks in vaccinated populations that "prove" herd immunity doesn't work.  Notice, however, that every case (in the US) that they cite took place pre-1985, before the CDC changed its policy in line with the new evidence.

Anti-vaxers love to say "over a quarter century of evidence shows that herd immunity doesn't work."  This is what we call slanting and distorting by omission.  Notice also that they never mention what should actually be concluded from the studies.  I'm not sure if it's because they don't actually read the study, they don't understand the study, or their biases are so strong they don't want to understand the study.  That's one for the psychologists to figure out...

One final point.  Sometimes anti-vaxers like to cite examples of individuals who, post-1985, got measles, as though this somehow proves the 2-shot policy doesn't confer immunity.  Can you spot the reasoning error?

Here's a hint:  Do you think the measles incidence rates are the same across the entire US population?  Which demographic do you think occasionally catches measles?  (Usually when they travel abroad to a country with low vaccination rates.)

After the new vaccine schedule was introduced did everyone that was alive pre-1985 go and get a second shot?  Nope.  A large portion of the population is still in the single-shot category.  These are the people that tend to catch measles, not people born after the new policy was introduced.

Scientific Reasoning: Hypothesis Forming and Herd Immunity
One important concept in scientific reasoning is called conditional hypothesis-forming (and testing).  I'll use an example to illustrate:  Suppose you think that there is a causal connection between alertness and caffeine consumption.  You have a preliminary hypothesis:  drinking coffee causes alertness.  To test the hypothesis you form a conditional hypothesis.  In this case, it will be "if I drink coffee then I will feel alert."  Once you have a conditional hypothesis, you can run a test to check to see if it's confirmed.

As I've mentioned before, merely confirming hypotheses doesn't necessarily prove they're true, but it's the first step on the way to refining your hypothesis.  In our example, if I drink decaf coffee and don't feel alert, the hypothesis (as a claim about coffee in general) is falsified; if I drink regular coffee and do feel alert, it isn't.  Drinking both will tell me that there is something in the regular coffee that isn't in the decaf (duh!) which causes alertness.  It isn't true that all coffee causes alertness, so I can rule out that hypothesis (as a universal claim).

I can refine my hypothesis to "caffeine causes alertness" and then formulate a refined conditional hypothesis: "if I drink something with caffeine in it then I will feel alert."  You can then try drinking caffeinated beverages and see if the hypothesis is confirmed.  The process of science is a cycle of hypothesis formation, testing, and refinement.

Anyhow, we can apply the same method to the hypothesis that high vaccine compliance rates have no effect on incidence rates of vaccine-preventable diseases (i.e., the denial of herd immunity).  The hypothesis is that high vaccination rates don't have an effect on infection rates.  The conditional hypothesis is "if a population has a high vaccination rate then its infection rate will be the same as a population with a low vaccination rate (ceteris paribus)."  Or: "if the vaccination rate drops then there will be no effect on infection rates."
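Here's a minimal simulation sketch of how that conditional hypothesis could be tested.  (All the parameters are illustrative assumptions of mine, and the toy model captures only direct protection, not transmission, so it actually understates herd effects.)

```python
import random

def simulate_outbreak(pop_size, vaccination_rate, vaccine_efficacy=0.95,
                      exposure_prob=0.3, seed=0):
    """Fraction infected when everyone faces the same exposure odds."""
    rng = random.Random(seed)
    infected = 0
    for _ in range(pop_size):
        vaccinated = rng.random() < vaccination_rate
        exposed = rng.random() < exposure_prob
        protected = vaccinated and rng.random() < vaccine_efficacy
        if exposed and not protected:
            infected += 1
    return infected / pop_size

high = simulate_outbreak(100_000, vaccination_rate=0.95)
low = simulate_outbreak(100_000, vaccination_rate=0.50)
print(f"Infection rate, 95% vaccinated: {high:.1%}")  # ~3%
print(f"Infection rate, 50% vaccinated: {low:.1%}")   # ~16%
# If the anti-vax conditional hypothesis were true, these two numbers
# would come out roughly equal; with any nonzero vaccine efficacy they don't.
```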

[Note:  As I wrote out the anti-vax position on herd immunity, I thought to myself "surely I'm committing a straw man; nobody really believes this."  Alas, I was wrong.]

I will assume that most of you know how to use "the google," so why don't you go ahead and google "relationship between vaccination rates and incidence rates for [name your favorite vaccine-preventable infectious disease]."  Well?  You will find that there is a very strong inverse relationship between a population's vaccination rate for a vaccine-preventable disease and the incidence rate for that disease.

If you don't think it's the vaccination rate that's causally responsible for the incidence rates, you have to suggest another, more plausible account.  What is it?  Hand-washing?  Diet?  The problem with these is that there's no evidence that in the last 10 years people in California, Oregon, and parts of the UK, where outbreaks of various vaccine-preventable diseases have occurred, have changed their hand-washing and/or dietary habits.  They have, however, changed their vaccine compliance rates...negatively.  Hmmm...

If you still think herd immunity is a myth, please provide, in the comments section, your conditional hypothesis explaining why, when vaccination rates go down in first-world populations, the incidence rates of those same vaccine-preventable diseases go up.  What is your proposed causal mechanism?  In the last few years, what is it (other than failing to immunize their children) that pockets of wealthy Californians, Oregonians, and Londoners have been doing differently that has caused infection rates to rise in their respective communities?

Sample A of Option 1

Here's a sample I made of one way you could do option 1:
Sidgwick's Insight (Call Me Mr. Busdriver Cuz I'm Gonna Take You to School)
Before we get started, we'd do well to establish a baseline of common beliefs.  This is what I have come to call "Sidgwick's Insight".  I won't bore you with why I call it that but I will give you a brief explanation of the concept and why it is absolutely vital to critical thinking:

Imagine you're a bus driver (fun, I know) and you want to get some people to a particular destination. Here comes the really dumb question: If the passengers never get on the bus, can you get them to the destination?

An argument with someone who has an opposing viewpoint is very similar to the above scenario.  The destination is your conclusion.  Just as you can't get your passengers to their destination if they never get on your bus, you can never lead an opponent to your conclusion if they never accept your premises.  Conclusions follow from premises.  Sidgwick's insight is that you must always begin your argument with premises both you and your audience share.

Once your passengers are on the bus, all sorts of things can go wrong.  You can run out of gas, you can disagree about whether your particular route will get you to the destination, or after a while the passengers can simply refuse to continue on the trip and get off the bus. I'm stretching the analogy, but you get the idea.  

The main point is simply that your chances of leading an opposing audience to your conclusion go up dramatically if you begin with shared premises.  A good arguer shows a hostile audience that his--not their--conclusion follows from the evidence that they already accept.

Germ Theory Denial, Straw Men, Inconsistency and Falsifiability
In the spirit of Sidgwick's insight, I need to find some common ground with my anti-vaccine audience.  Because the anti-vaccine community runs from the absolutely nutty to the intelligent-but-misinformed, and I don't know exactly where my audience sits on this spectrum, I'm going to start by showing why the nuttiest view fails so I can discount it and begin with a premise that everyone will share with me.  I also want to address the nuttiest position because I want to avoid committing a straw man.

A straw man argument is committed when you distort your opponent's position such that it is a caricature of his actual position.  It's important not to commit this fallacy because, by defeating a weaker version of an argument, you leave the door open for counter-replies (e.g., "that's not what I meant"...), whereas if you can defeat the strongest and most charitable version of his position, there is little chance of a rebuttal.

The premise I hope to begin with is that germ theory is correct, so let's start there:  In super-simplified form, germ theory is the idea that microorganisms (bacteria, viruses, fungi, protists, or prions) cause infectious diseases.  To be clear, germ theory does not say that all diseases are caused by microorganisms, only the infectious ones.  To suggest that germ theory says otherwise would be to commit the straw man fallacy (learning's fun!).

Now, there are some loons out there that deny germ theory (that was an ad hominem, for anyone keeping track!).  I'm not going to spend too much time on people who hold this view but I'll discuss their beliefs to illustrate another critical thinking principle: logical inconsistency.

One issue you'll come up against while evaluating arguments is determining when you should or should not accept a premise.  This can be particularly difficult when it is about a topic you're not too familiar with.  One simple rule is that you should reject any argument that has mutually exclusive premises; that is, two or more logically inconsistent premises. With this rule, you don't even need to know anything about the topic.  If the premises are logically incompatible, you can reject the argument as a whole.

The loony end of the anti-vax movement provides a good example of logical inconsistency:  Many in the loony camp deny germ theory.  So far no inconsistency, just a blatant denial of almost 200 years of science.  However, these same people will often say that the massive drop-off and virtual elimination of vaccine-preventable (i.e., infectious) diseases wasn't caused by vaccines--it was caused by better diets, hygiene, and sanitation.  Did you spot the inconsistency?

If germs don't cause infectious diseases, then why would sanitation and hygiene have any effect on their transmission and rates of prevalence?  This is what we call a logical inconsistency.  Now, to be fair, simply because we've shown an argument to be inconsistent, it doesn't follow that the conclusion is false, it only means that that particular line of argument won't work to support the conclusion. Nevertheless, eliminating a line of support for a conclusion diminishes the likelihood of its truth.

Another good heuristic for evaluating a position is its falsifiability.  Falsifiability means that there is some way to set up an experiment or test to show that a position is false. For example, the hypothesis that vaccines do significantly diminish rates of infection and transmission is falsifiable.  

I could conduct an experiment or look at historical data to test the hypothesis:  I could look at rates of infection and transmission for a particular disease in a population before a vaccine was developed and then I could look at rates of infection and transmission after the vaccine had been administered to the population.  I could also look at what happens to rates of infection and transmission when immunization rates fall.  If there is a significant difference, I can infer a causal relation.  If there is no significant difference, I can affirm that it is probably false that vaccines prevent infection and transmission of a particular disease;  that is to say, the hypothesis has been falsified.  Anyhow, if a hypothesis isn't falsifiable (i.e., there's no possible way to prove it false) then it's weak.  
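As a minimal sketch, the falsifiability test just described can be written as a decision rule.  (The incidence numbers and the 50% threshold are hypothetical choices of mine, purely to make the logic concrete.)

```python
def falsified(pre_vaccine_rate, post_vaccine_rate, min_reduction=0.5):
    """The hypothesis 'vaccines significantly cut incidence' is falsified
    if incidence did NOT drop by at least min_reduction after rollout."""
    reduction = 1 - post_vaccine_rate / pre_vaccine_rate
    return reduction < min_reduction

# Hypothetical incidence per 100 000, before vs after a vaccine rollout:
print(falsified(pre_vaccine_rate=300.0, post_vaccine_rate=2.0))    # False: hypothesis survives
print(falsified(pre_vaccine_rate=300.0, post_vaccine_rate=290.0))  # True: hypothesis falsified
```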

[Note: I'm going to gloss over the philosophical issue involving the distinction between "in principle" and "in practice" falsifiability as well as the philosophical problems surrounding the falsifiability criterion.  My claim is only that it is a good heuristic.]

In light of the notion of falsifiability, let's evaluate some "alternative" theories to germ theory.  There are people who believe that disease isn't caused by germs but by poor alignment of your spine, chi, chakras, too much yin/yang, and/or too much stress.  As with most positions, there are varieties:  some say that germ theory is completely wrong; others hold a hybrid view that, yes, germs can cause diseases, but only in people who don't adhere to a particular magic diet, lifestyle, philosophy, attitude, world-view, etc...

In other words, if people would simply change their lifestyle, worldview, eat organic bugabuga berries, pay for quantum healing sessions, etc...they'd never get an infectious disease because their immune system would be so strong.  It's only because [insert name of your favorite "toxin" or psychological ailment attributed to modern society] that people's immune systems are compromised.  You might think this is a straw man, but alas, it is not.  A little time on any "natural healing" website will disabuse you of your naiveté.

So, where does falsifiability come into all of this alt-germ theory?  The purveyors of these schools of thought generally present their hypotheses in non-falsifiable forms.  Here's how the conversation typically goes:  They claim that "the one secret THEY (i.e., the establishment) don't want you to know" [choose your favorite alt-med treatment and/or new-age "philosophy"] will prevent you from ever being infected by an infectious disease (and will especially prevent cancer).  You point to an example of someone who got the alt-med treatment and/or adhered to the new-age "philosophy" yet caught (or died from) an infectious disease.  They respond by saying, "ah, they weren't doing it quite right" (maybe it was the gluten?), but if they had done it right, they never would have gotten the disease.

No matter what counter-example (attempt to falsify their hypothesis) you point to, they will say that the person wasn't truly doing it right (e.g., they ate GMO corn by accident one day).  They never allow any counter-examples.  The hypothesis is unfalsifiable--in practice--and also commits (bonus!) the No True Scotsman fallacy.

So, how do we deal with this?  As you might have guessed, I have a solution.  It's called the "put your life where your mouth is" test.  Before presenting it, I'd like to say that I don't believe that, when push comes to shove, people really believe half the nonsense they say they do.  Here's the solution:  Ask the proponent of alt-med treatment X/new-age "philosophy" Y to undergo whatever treatment/practice/therapy/"philosophy" they are recommending.  They can do whatever they think makes them perfectly healthy and immune to infectious diseases.  Eat organic acai berries, do yoga, meditate with Tibetan monks, do acupuncture, get adjusted at the chiropractor's, uncover their repressed emotions...whatever.  Then ask them if you can inject them with HIV.

If they hold either of the views that (a) micro-organisms don't cause disease or (b) micro-organisms-only-cause-disease-if-you-don't-buy-what-I'm-selling, then they should be happy to oblige.  Of course, only the looniest of the loons will oblige...and if they do, ethical considerations dictate that winning the argument matters less than preventing someone's death through their own gullibility.

Ok, so maybe HIV is a bit much.  Maybe ask them to rub an HPV-covered swab on their genitalia.  I'm sure they'll be happy to show you how well their treatment works.  Probably they'll just get reiki or simply will themselves back to health through positive thoughts.  Please put it on video.

One last point regarding the consequences of non-falsifiability:  When the anti-vaxer/proponent of alt-germ theory uses the ad hoc strategy of "ah ha! but they didn't do it right," we should remember that public health policy has to take into account how actual people, living in this world, will behave, not how they might behave if they were perfectly rational and living in a perfect world.  Regardless of its efficacy, if a practice being preached is so unattainable, it is not practical in a world of creatures who regularly act against their own self-interest--especially when it comes to their own health.

The False Dilemma
The false dilemma fallacy is committed when an arguer presents two options that aren't mutually exclusive but presents them as though they are mutually exclusive.  A (very) moderate anti-vaxer might accuse me of committing this fallacy.  But I would not consider such a position to be that of an anti-vaxer:  Most anti-vaxers either believe that vaccines cause more health problems than they prevent or that vaccines have negligible efficacy compared to whatever treatment/lifestyle they're recommending (correct me if I'm wrong). 

It is the anti-vaxer who commits the false dilemma:  either you vaccinate and get sick, OR you do the treatment/live the lifestyle they're selling and you won't get sick.

But this is to present a false dilemma:  Of course a healthy diet and a low-stress, active lifestyle are going to make you less susceptible and more resistant to disease than a poor diet and a high-stress, sedentary lifestyle.  Nobody is disputing this (to suggest they are would be to commit a straw man).  Aaaaaaaaaaaand, if you vaccinate as well, you will decrease your susceptibility to infectious disease even more significantly (up to 22x vs unvaccinated, depending on the disease: Glanz J, et al. "Parental refusal of pertussis vaccination is associated with an increased risk of pertussis infection in children." Pediatrics 2009; DOI: 10.1542/peds.2008-2150).

Back to Sidgwick and End of Part 1
I have learned from my formal and informal study of the psychology of reasoning and belief that deeply-held views are most often recalcitrant to evidence and reason, no matter how compelling.  I don't really expect to change anyone's mind at this point.  But, if we're going to discuss the question of herd immunity as it pertains to vaccines, we need some premises that are held in common.  The purpose of the above section was to try to establish at least one of those premises: that germs (microorganisms) cause infectious diseases.

If you find fault with how I have shown competing views to be improbable, please leave a comment in the comments section and I will do my best to address it.

Lecture 15B: Arguments from Analogy Part 2

Business:
1. http://rbutr.com/
2. right click/ control click on images

Counter-Arguments to Analogies
In addition to evaluating the four criteria from the previous lesson (number of relevant similarities, number of relevant dissimilarities, total number of instances, and diversity of cases), there are a few other ways to directly criticize arguments from analogy.

A.  Disanalogy:  If you can find more relevant dissimilarities than relevant similarities between the sample and the target then you have shown the analogy to be weak.

B.  Logical Counter-Example (AKA Counter-Analogy):  Another way to undermine an analogy is to show that the shared properties (w, x, y) between the sample and the target don't necessarily imply the inferred property (z).  For example, in the teleological argument complexity is supposed to be predictive of having a designer.  But we can show that this isn't always necessarily true: snowflakes and crystals have complex structures yet are the result of simple natural laws.  Coming up with counter-analogies undermines the strength of the relationship between the properties held in common (w, x, y) and the inferred property (z).

C.  Unintended Consequences: You can undermine an analogy by showing that its logical consequences entail a conclusion that is undesirable to the person who is making the original argument. For example, in the teleological argument we see that the more complex an object is the greater the number of designers/builders it has (think of how many people it takes to make a car/computer/skyscraper).  So, the logical consequence of the analogy is that the universe must have many designers and creators not just one as William Paley hoped to prove.

D.  Measurement errors/straw man alert:  Does the sample really have the properties being ascribed to it?  Does the target really have the properties being ascribed to it?  If either the sample or the target doesn't actually have the properties being ascribed to it, then there is no analogy.  In the political domain this is often the case when an opponent's policy is being criticized, because the opponent's position is often presented as a straw man.


Interesting and Controversial Analogies




A.  We should not blame the media for deteriorating moral standards. Newspapers and TV are like weather reporters who report the facts. We do not blame weather reports for telling us that the weather is bad. http://philosophy.hku.hk/think/arg/analogy.php

B.  Democracy does not work in a family. Parents should have the ultimate say because they are wiser and their children do not know what is best for themselves. Similarly, the best form of government for a society is not a democratic one but one where the leaders are more like parents.

C.  "Wives, submit yourselves to your own husbands, as unto the Lord. For the husband is the head of the wife, even as Christ is the head of the church." - St. Paul, Ephesians 5:22.
D.  In the early 17th century, astronomer Francesco Sizi argued that there are only seven planets: "There are seven windows in the head, two nostrils, two ears, two eyes and a mouth; so in the heavens there are two favorable stars, two unpropitious, two luminaries, and Mercury alone undecided and indifferent. From which and many similar phenomena of nature such as the seven metals, etc., which it were tedious to enumerate, we gather that the number of planets is necessarily seven."

E.  Trolley problems and surgeon.  Disanalogy, surgeon=unintended consequences; (a) care for the one, let 5 die (b) transplant.

F.  Taxation is just like slavery.  You're forcing a man to give up his property and to use it for things he might not otherwise support.  If slavery is wrong, so is taxation.

G.  Gay rights are just an extension of the civil rights movement.  The issues for the gay community are the same as those for the black community in the 60s.  It was right to support the movement then, it's right to support the movement now.
Nay: http://illinois.edu/lb/article/72/75283 and http://www.charismanews.com/opinion/in-the-line-of-fire/41142-comparing-black-civil-rights-to-gay-civil-rights
Yay: http://www.truthwinsout.org/opinion/2013/09/37357/

H. Gay marriage and interracial marriage are relevantly similar.  It's wrong to oppose interracial marriage so it's wrong to oppose gay marriage. http://www.thedailybeast.com/articles/2014/04/24/opposing-gay-marriage-doesn-t-make-you-a-crypto-racist.html

I.  We don't blame cars for drunk drivers, why should we blame guns for violent people? It doesn't make sense to legislate against guns.

J.  Chemical X caused cancer in rats.  It's going to cause cancer in humans too.

K.  Cell phones, wallets, and warrantless searches:
""In limited circumstances, where the privacy interests implicated by the search are minimal and where an important governmental interest furthered by the intrusion would be placed in jeopardy by a requirement of individualized suspicion" a search [or seizure] would still be reasonable.[77]"


Motor vehicle

The Supreme Court has held that individuals in automobiles have a reduced expectation of privacy, because vehicles generally do not serve as residences or repositories of personal effects. Vehicles may not be randomly stopped and searched; there must be probable cause or reasonable suspicion of criminal activity. Items in plain view may be seized; areas that could potentially hide weapons may also be searched. With probable cause to believe evidence is present, police officers may search any area in the vehicle. However, they may not extend the search to the vehicle's passengers without probable cause to search those passengers or consent from the passengers.[105]
In Arizona v. Gant (2009),[106] the Court ruled that a law enforcement officer needs a warrant before searching a motor vehicle after an arrest of an occupant of that vehicle, unless 1) at the time of the search the person being arrested is unsecured and within reaching distance of the passenger compartment of the vehicle or 2) police officers have reason to believe that evidence for the crime for which the person is being arrested will be found in the vehicle.[107]

Searches incident to a lawful arrest


A common law rule from Great Britain permits searches incident to an arrest without a warrant. This rule has been applied in American law, and has a lengthy common law history.[108] The justification for such a search is to prevent the arrested individual from destroying evidence or using a weapon against the arresting officer. In Trupiano v. United States (1948), the Supreme Court held that "a search or seizure without a warrant as an incident to a lawful arrest has always been considered to be a strictly limited right. It grows out of the inherent necessities of the situation at the time of the arrest. But there must be something more in the way of necessity than merely a lawful arrest."[109] In United States v. Rabinowitz (1950), the Court reversed Trupiano, holding instead that the officers' opportunity to obtain a warrant was not germane to the reasonableness of a search incident to an arrest. Rabinowitz suggested that any area within the "immediate control" of the arrestee could be searched, but it did not define the term.[110] In deciding Chimel v. California (1969), the Supreme Court elucidated its previous decisions. It held that when an arrest is made, it is reasonable for the officer to search the arrestee for weapons and evidence.[111]
 http://www.npr.org/2014/04/29/308068253/supreme-court-considers-where-lines-drawn-in-cell-phone-searches


L.  Corporate personhood:  Should corporations have the same rights as individuals?



HW 15B
1.  At the bottom of analogy K is a link to an NPR article/report on the Supreme Court case considering whether wallets are analogous to cell phones.  Read or listen to the report, then put the analogy into standard form.  Make an argument for whether you think the analogy is strong, medium, or weak.

2.  Pick any two other sample analogies, put them into standard form, then justify your evaluation.