Wednesday, April 30, 2014

Sample B of Option 1

Introduction
Welcome to Part 2 of an investigation of vaccine herd immunity through the concepts of critical thinking.  The purpose of these blog entries is two-fold: the first is to explore the controversy over the legitimacy of herd immunity, and the second is to learn central concepts in critical thinking.  Essentially, these posts are an exercise in applied critical thinking.

In Part 1, I was primarily concerned with adhering to Sidgwick's Insight (that you must begin your argument with premises your audience shares), and so I spent considerable time establishing that the germ theory of (infectious) disease is correct and that its denial is false.  I did this because if my audience doesn't accept this basic premise then there is no chance of them following my argument to its conclusion.  If you have read Part 1 and deny that microorganisms cause infectious diseases, please explain the grounds for your position in the comments section below and I will do my best to address it.

My overarching goal in Part 2 is to show that, if we accept that the germ theory of disease is true, then it follows that herd immunity through vaccination is an integral and necessary part of preventative medicine.  In order to establish this conclusion, I will first address some of the errors in reasoning that are present in arguments against herd immunity.  Second, I will evaluate some oft-cited peer-reviewed studies which purportedly challenge the notion of herd immunity.  Throughout, I will appeal to fundamental concepts of critical thinking and principles of scientific reasoning.

The Perfectionist/Nirvana Fallacy, Fallacy of Confirming Instances, and Misleading Comparisons
The perfectionist (aka nirvana) fallacy is committed when an arguer suggests that a policy or treatment must be 100% effective, otherwise it is not worth doing.  As I'm sure you all know from my prior post on Ami's 5 Commandments of Critical Thinking, risk and effectiveness are not absolute values: they must always be measured relative to alternatives or relative to no intervention at all.  Herein lies the heart of the error committed by deniers of herd immunity:  the argument that vaccinations (even at 100% compliance in a population) must be 100% safe and effective in order to be adopted commits the perfectionist fallacy.  Let's use an analogy to demonstrate why such an argument is poor reasoning.

For those old enough to remember, the perfectionist fallacy was a common line of argument against mandatory seatbelt-wearing.  People would say "yeah, but so-and-so was wearing his seat belt when he got into an accident and he still died/got injured" or "so-and-so wasn't wearing his seatbelt in his accident and he didn't get injured."  I think the seat belt analogy is a good one.

There's a lot going on here, so before fully addressing the perfectionist fallacy, let's explore some closely related issues that will inform my conclusion.  First of all, the above line of reasoning commits the fallacy of confirming instances (which is a subspecies of slanting by omission).  This fallacy is committed when people cite only the instances that confirm their hypothesis/beliefs and ignore disconfirming instances and rates.

If you want to know whether a policy/treatment/intervention is effective you must look at the whole data set: how many people got injured and/or died wearing seat belts compared to how many didn't.  For example, suppose 25 000 people got into an accident over the last year and 5 000 of them died while wearing seat belts.  If someone were to say "ah ha! 5 000 people who got into accidents wore seat belts, therefore seatbelts don't work," they would be committing the fallacy of confirming instances.  The number sounds big, and because of the way our brains work, by looking only at the 5 000 confirming instances we might easily be tempted to conclude that seat belts are ineffective at best or cause more harm than good at worst.

But we aren't done: we need to look at the entire data set.  Suppose it turns out that the remaining 20 000 people who were in accidents weren't wearing seatbelts, and they all died.  Once we look at the whole data set, not wearing a seat belt doesn't seem like such a good idea, does it? (Let's assume that in both groups the types of accidents were roughly the same.)

Now complete the analogy with vaccines.  Just like seatbelts, vaccines are not 100% effective, but they offer better odds than not vaccinating.  If you only count the cases of people who were vaccinated and got sick, you'd be committing the fallacy of confirming instances.  You also need to know how many unvaccinated people got the vaccine-preventable disease, and then you need to compare the two numbers, as in the sketch below.
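To make the "whole data set" point concrete, here's a minimal sketch in Python using the made-up numbers from the seat belt example above (nothing here comes from real accident data):

```python
# Made-up numbers from the seat belt example above.
deaths_belted = 5_000     # the "confirming instances" a belt-skeptic points to
deaths_unbelted = 20_000  # the disconfirming instances that get left out

# Looking only at the first number invites the wrong conclusion;
# looking at both groups tells a very different story.
print(f"Deaths while belted:   {deaths_belted:,}")
print(f"Deaths while unbelted: {deaths_unbelted:,}")
print(f"Unbelted deaths were {deaths_unbelted / deaths_belted:.0f}x the belted deaths.")
```

(Even this isn't the whole story, of course; the next section adds the missing ingredient: rates.)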

But wait! There's more! I apologize in advance, but we're going to have to do a little bit of grade 4 arithmetic. The absolute numbers give us one piece of the picture, but not all of it. We also need to know something about rates.  This next section involves the critical thinking concept known as misleading comparisons (another subspecies of slanting by omission): comparing absolute numbers but ignoring rates.

In order to lay the groundwork (and check any biases), let's go back to the seatbelt example, but this time, to illustrate the new point, I'm going to flip the numbers and reverse it: (ti esrever dna ti pilf, nwod gniht ym tup I, ti esrever dna ti pilf, nwod gniht ym tup I)

Suppose in this new scenario there were 25 000 fatal car accidents in the past year:

  • 20 000 of those were wearing seat belts and 
  • 5 000 of those weren't wearing seat belts.

Well, well, well.  It doesn't seem like seat belts are such a good idea any more...just look at the difference in numbers! (Oh, snap!)

This scenario is just like with vaccines.  We often see credible reports that the number of vaccinated people who end up infected far exceeds the number of non-vaccinated people who got infected.  Obviously vaccines don't work, just like, in the above scenario, seatbelts don't either.

As you might have guessed, there is a very basic math error going on here.  Can you spot it?  Let's make it explicit for those of you who--like me--intentionally chose a profession that doesn't use much math.

Suppose that the total population we are evaluating is 500 000 people.  Of those people, 90% (450 000) wear a seatbelt when driving and 10% (50 000) don't.  Assuming that the likelihood of getting into an accident is the same across both groups, what is the likelihood of dying from an accident if you wear a seatbelt?  

  • 20 000 people who wore a seat belt and died in an accident / 450 000 people who wear seat belts = 4.44%
What is the likelihood of you dying from an accident if you don't wear a seatbelt?

  • 5 000 people who didn't wear a seat belt and died in an accident / 50 000 people who don't wear seat belts = 10%.
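For those who like to see the arithmetic spelled out, here's a minimal sketch in Python of the same calculation, using the hypothetical numbers above:

```python
# Hypothetical numbers from the flipped seat belt scenario above.
population = 500_000
belted_total = int(population * 0.90)    # 450 000 people who wear seat belts
unbelted_total = int(population * 0.10)  # 50 000 people who don't

belted_deaths = 20_000                   # absolute counts from the scenario
unbelted_deaths = 5_000

# What we actually want to compare are the rates of risk, not the raw counts.
belted_rate = belted_deaths / belted_total        # 0.0444... -> 4.44%
unbelted_rate = unbelted_deaths / unbelted_total  # 0.10      -> 10%

print(f"Risk of dying if belted:   {belted_rate:.2%}")
print(f"Risk of dying if unbelted: {unbelted_rate:.2%}")
print(f"Relative risk (unbelted vs belted): {unbelted_rate / belted_rate:.2f}x")
```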

As you can see, the absolute numbers don't tell the whole story.  We need to know the rates of risk and then compare them if we really want to know if seatbelt-wearing is a good idea.  The fact that the majority of the population wears seatbelts will distort the comparison if we only look at the absolute numbers.

The percentages measure the rates of risk (i.e., probability of infection/death).  If I wear a seat belt, there is a 4.44% chance that I could die in an accident.  If I don't wear a seat belt, there is a 10% chance I could die in an accident.  If you could cut your risk of dying by more than half (a drop of about 5.6 percentage points), would you do it?  Would you do it for your child?  I would.  What would you think about a parent that didn't do this for their child?  In fact, with vaccines the disparity in rates between vaccinated and unvaccinated is often much greater than in my seat belt example.  For example, unvaccinated children are 35 times more likely than vaccinated children to get measles, and they have a 22.8-fold increased risk of pertussis compared with vaccinated children.

As it so happens, the vaccination compliance rate in most parts of the US is somewhere in the mid-to-upper 90s (percent of the population), so of course if we only compare absolute numbers it's going to look like people who are vaccinated are more prone to infection than the non-vaccinated.  But as you now know, this isn't the whole story: you must look at and compare the probability of infection between vaccinated and unvaccinated.  Don't be fooled by misleading comparisons!
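To see how high coverage flips the absolute counts, here's a hedged sketch with purely illustrative numbers (95% coverage, a 10% infection risk for the unvaccinated, and a vaccine that cuts that risk to 1%; none of these figures come from real surveillance data):

```python
# Purely illustrative assumptions -- not real surveillance data.
population = 1_000_000
coverage = 0.95            # 95% of the population is vaccinated
risk_unvaccinated = 0.10   # assumed infection risk without the vaccine
risk_vaccinated = 0.01     # assumed infection risk with the vaccine (10x lower)

vaccinated = population * coverage
unvaccinated = population * (1 - coverage)

cases_vaccinated = vaccinated * risk_vaccinated        # 950 000 * 0.01 = 9 500
cases_unvaccinated = unvaccinated * risk_unvaccinated  # 50 000 * 0.10 = 5 000

# Absolute counts: more cases among the vaccinated, simply because there are
# so many more vaccinated people to begin with...
print(f"Cases among vaccinated:   {cases_vaccinated:,.0f}")
print(f"Cases among unvaccinated: {cases_unvaccinated:,.0f}")

# ...but the rates tell the real story.
print(f"Relative risk (unvaccinated vs vaccinated): {risk_unvaccinated / risk_vaccinated:.0f}x")
```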

Back to Reality, Oh There Goes Gravity! Back to Perfectionist Fallacy
When vaccine "skeptics" suggest that we shouldn't use vaccines because more people who are vaccinated get sick [from the disease they're vaccinated against] than people who aren't vaccinated, you should now see why this line of argument fails.  What matters is relative risk between vaccinated and unvaccinated.  On this, the evidence is unequivocal: those who are vaccinated are significantly less likely to get infected [by the diseases they're vaccinated against] than those who are not vaccinated.

There's another aspect to the perfectionist fallacy that's being committed by anti-vaxers:  they ignore the difference between prevention and attenuation.  Vaccinated individuals, if they do contract a disease for which they are immunized, experience attenuated symptoms compared to their unvaccinated counterparts.   Again, it ain't perfect but it's better than not being vaccinated.


Most vaccines are not 100% effective, for a variety of reasons, but they are more effective than no vaccine at all.  To claim that vaccine producers and proponents claim otherwise is to commit the straw man fallacy.  To infer that because vaccines aren't 100% safe and effective we shouldn't use them is to commit the perfectionist fallacy.  Either way, you're committing a fallacy.

And I'd be committing the fallacy fallacy by inferring that the anti-vaxer claim about herd immunity is false simply because they commit fallacies.  Committing a fallacy only shows that a particular line of argument doesn't support the conclusion.  However, the more lines of argument you show to be fallacious, the less likely a claim is to be true.  Fallacy-talk aside, what we really need to look at is the evidence.

The Studies that "Show" Herd Immunity is a Myth
Anti-vaxers luvz to kick and scream about how you can't trust any scientific studies on vaccines cuz big Pharma has paid off every single medical researcher and every national and international health organization in the world that publishes in peer-reviewed journals.  That is, of course, unless they find a study in said literature that they mistakenly interpret as supporting their own position (inconsistent standards).  Then, all of a sudden, those very same journals that used to be pharma shills magically turn into the One True Source of Knowledge.  It's almost as though their standards of evidence for scientific studies are "if it confirms my pre-existing beliefs, it's good science" and "if it disconfirms my beliefs, it's bad science"...

Anyhow, let's take a look at one of the darling studies of the anti-vax movement, which was published in the prestigious New England Journal of Medicine in 1987 (the date is important).  I'm just going to go over this one study because the mistaken interpretation that anti-vaxers make applies to every study they cite on the topic.

First of all, why do anti-vaxers love this study so much? Well, just look at the title:

Measles Outbreak in a Fully Immunized Secondary-School Population


Ah! This scientifically proves that vaccines don't work and herd immunity is a big pharma conspiracy!  Obviously, we needn't even read the abstract.  The title of the study is all we need to know.

Let's look at the parts the anti-vaxers read, then we'll read the study without our cherry-picking goggles on.  Ready?  Here is the anti-vax reading:

"We conclude that outbreaks of measles can occur in secondary schools, even when more than 99 percent of the students have been vaccinated and more than 95 percent are immune."

OMG! The anti-vaxers are right!  Herd immunity is a pharma lie!  It doesn't work! (Perfectionist fallacy)

Actually, we don't even need to read the study to see why the anti-vaxers are mis-extrapolating from it.  Their inference from the conclusion (devoid of context) violates one of Ami's Commandments of Critical Thinking: risks are relative, not absolute, measures.

So, yes, some of the vaccinated population got measles (14/1806 = 0.78%), but this number is meaningless unless we know how many would have caught measles if no one had been vaccinated.  Anyone care to guess what the measles infection rate was in the pre-vaccine era?  20%?  30%?  Keep going...it's 90%!

Now, I'm no expert in maphs, but it seems to me that a 90% chance of infection is greater than a 0.78% chance of infection.  Uh, herd immunity doesn't work?  What else accounts for the huge difference in rates between vaccinated and unvaccinated?
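Here's the back-of-the-envelope version of that comparison in Python, using the study's 1 806 students and the roughly 90% pre-vaccine-era attack rate cited above (the 90% figure comes from the text of this post, not from the study itself):

```python
students = 1806
observed_cases = 14

observed_rate = observed_cases / students   # ~0.78% of a highly vaccinated population
prevaccine_rate = 0.90                      # rough pre-vaccine-era figure cited above

expected_unvaccinated_cases = students * prevaccine_rate  # ~1 625 students

print(f"Observed attack rate:     {observed_rate:.2%}  ({observed_cases} students)")
print(f"Pre-vaccine-era estimate: {prevaccine_rate:.0%}  (~{expected_unvaccinated_cases:.0f} students)")
```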

Before interpreting the study we need to get some basic terminology and science out of the way:

  • Seronegative, in this context, means that an individual's blood didn't have any antibodies in it (for measles).
  • Seropositive...meh, you can figure this out.
  • How vaccines are supposed to work (cartoon version).  The vaccine introduces an antigen (a foreign substance) which your body responds to by producing antibodies.  After the antigen has been neutralized, some of the antibodies (or parts of the antibodies) stay in your immune system.  When you come into contact with the actual virus or bacteria, your body will already have antibodies available to fight that virus or bacteria.  Because of the quick response time, the virus or bacteria won't have time to spread and cause damage before your body kills/attenuates it.
  • Some people don't produce antibodies in response to some vaccines.  These are the people who don't develop immunity.  If they don't develop the antibodies, they are seronegative.  If they do, they are seropositive. 

Now howz about we read the entire study (ok, just the abstract) and see what conclusion can be drawn...Here's the abstract (it's all we really need):

An outbreak of measles occurred among adolescents in Corpus Christi, Texas, in the spring of 1985, even though vaccination requirements for school attendance had been thoroughly enforced. Serum samples from 1806 students at two secondary schools were obtained eight days after the onset of the first case. Only 4.1 percent of these students (74 of 1806) lacked detectable antibody to measles according to enzyme-linked immunosorbent assay, and more than 99 percent had records of vaccination with live measles vaccine. Stratified analysis showed that the number of doses of vaccine received was the most important predictor of antibody response. Ninety-five percent confidence intervals of seronegative rates were 0 to 3.3 percent for students who had received two prior doses of vaccine, as compared with 3.6 to 6.8 percent for students who had received only a single dose. After the survey, none of the 1732 seropositive students contracted measles. Fourteen of 74 seronegative students, all of whom had been vaccinated, contracted measles. In addition, three seronegative students seroconverted without experiencing any symptoms.

Things to notice:
1) Despite the records showing that (almost) 100% of the students had been immunized, 74/1806 = 4.1% of the students were seronegative (i.e., no measles antibodies detected).  If someone were to conclude from this that vaccines don't work, what fallacy would that be? (You should know this one by now.)  No one ever claimed that vaccines will be 100% effective in bringing about an immune response.  A 95.9% response rate is nothing to sneeze at.

2) Of the students who had received only a single dose of the measles vaccine, 3.6% to 6.8% were seronegative.  It's not in the abstract, but the higher rate corresponded to students who'd had the single shot within their first year of life; the lower rate corresponded to students who'd had their single-dose shot after their first year of life.  This pattern is consistent with other studies on the relationship between antibody presence and the age at which the measles shot was given.  Should we conclude from this that the measles vaccine doesn't work?  Nope.  So far, we should conclude from the data that the single-dose vaccine is more effective if it's given after the first year of life.  Also, a 6.8% failure rate beats the roughly 90% infection rate of the pre-vaccine era.  (But a 90% infection rate is natural!)

3) Of the students who'd received two doses, 0% to 3.3% were seronegative.  Consistent with the above data, within the two-shot group the 3.3% figure corresponded to those who had their first shot before the age of one.  Even so, 3.3% is still lower than either of the single-dose groups.  Also, antibodies were present in 99% of those in the two-shot group who'd had their first shot after the age of one.

4)  None of the seropositive students contracted measles.  No explanation needed (I hope).
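For the numerically inclined, here's a sketch that recomputes the overall seronegativity proportion from the abstract and shows one common way to put a 95% confidence interval around such a proportion (a simple normal-approximation interval; the study's authors may well have used a different method, and the abstract doesn't give the per-dose group sizes, so only the overall figure is recomputed here):

```python
import math

def proportion_ci(successes: int, n: int, z: float = 1.96):
    """Point estimate and normal-approximation 95% CI for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Overall seronegativity from the abstract: 74 of 1 806 students.
p, lo, hi = proportion_ci(74, 1806)
print(f"Overall seronegative: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
# Compare with the abstract's per-dose figures: 0-3.3% (two doses) vs 3.6-6.8% (one dose).
```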

So, what is the conclusion here?  
Is the conclusion that vaccines don't work?  Nope.  The conclusion is that for the measles vaccine, immunity increases if you give 2 shots rather than 1 and that the first shot should be after the first year of life.

And guess what?  Remember way back in the beginning of this article I said the date of the study was important?  Guess why?  Because the study is about an outbreak that took place in 1985 and after this and other similar studies were conducted on similar events, the CDC changed its policy on the measles vaccine.  Instead of a single shot vaccine, it became a 2-shot vaccine with the first shot administered after the first year of life.  This, of course, is the correct conclusion from the data.  Not that vaccines don't work.   

Guess what happened after the new vaccine schedule was introduced?  Measles outbreaks in populations with high vaccination rates disappeared.  

Here's a graphic of the distribution of vaccinated vs unvaccinated for recent outbreaks of measles:
What conclusion follows from the data?

Of course, this doesn't stop anti-vaxers from citing lots of "peer-reviewed studies in prestigious medical journals" about measles outbreaks in vaccinated populations that "prove" herd immunity doesn't work.  Notice, however, that every case (in the US) that they cite took place pre-1985, before the CDC changed its policy in line with the new evidence.

Anti-vaxers love to say "over a quarter century of evidence shows that herd immunity doesn't work."  This is what we call slanting and distorting by omission.  Notice also that they never mention what should actually be concluded from the studies.  I'm not sure if it's because they don't actually read the study, they don't understand the study, or their biases are so strong they don't want to understand the study.  That's one for the psychologists to figure out...

One final point.  Sometimes anti-vaxers like to cite examples of individuals who, post-1985, got measles, as though this somehow proves the two-shot policy doesn't confer immunity.  Can you spot the reasoning error?

Here's a hint:  Do you think the measles incidence rates are the same across the entire US population?  Which demographic do you think occasionally catches measles? (Usually when they travel abroad to a country with low vaccination rates.)

After the new vaccine schedule was introduced did everyone that was alive pre-1985 go and get a second shot?  Nope.  A large portion of the population is still in the single-shot category.  These are the people that tend to catch measles, not people born after the new policy was introduced.

Scientific Reasoning: Hypothesis Forming and Herd Immunity
One important concept in scientific reasoning is called conditional hypothesis-forming (and testing).  I'll use an example to illustrate:  Suppose you think that there is a causal connection between alertness and caffeine consumption.  You have a preliminary hypothesis:  drinking coffee causes alertness.  To test the hypothesis you form a conditional hypothesis.  In this case, it will be "if I drink coffee then I will feel alert."  Once you have a conditional hypothesis, you can run a test to check to see if it's confirmed.

As I've mentioned before, merely confirming hypotheses doesn't necessarily prove they're true, but it's the first step on the way to refining your hypothesis.  In our example, if I drink decaf coffee and don't feel alert, the hypothesis will be falsified (as a claim about coffee in general); and if I drink regular coffee and do feel alert, it won't be.  Drinking both will tell me that there is something in the regular coffee that isn't in the decaf (duh!) which causes alertness.  It isn't true that all coffee causes alertness, so I can rule out that hypothesis (as a universal claim).

I can refine my hypothesis to "caffeine causes alertness" and then formulate a refined conditional hypothesis: "if I drink something with caffeine in it then I will feel alert."  You can then try drinking caffeinated beverages and see if the hypothesis is confirmed.  The process of science is a cycle of hypothesis formation, testing, and refinement.
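If it helps to see the hypothesis-test-refine loop written down, here's a toy sketch in Python (the "observations" are made up purely for illustration):

```python
# Toy sketch of conditional hypothesis testing.  A conditional hypothesis is a
# prediction of the form "if <condition> then <expected outcome>"; we look for
# counterexamples in (made-up) observations.

observations = [
    {"drink": "regular coffee", "has_caffeine": True,  "felt_alert": True},
    {"drink": "decaf coffee",   "has_caffeine": False, "felt_alert": False},
    {"drink": "black tea",      "has_caffeine": True,  "felt_alert": True},
]

def counterexamples(condition, outcome, data):
    """Observations where the condition held but the predicted outcome didn't."""
    return [obs for obs in data if condition(obs) and not outcome(obs)]

# Hypothesis 1: "if I drink coffee then I will feel alert" -- falsified by decaf.
print(counterexamples(lambda o: "coffee" in o["drink"], lambda o: o["felt_alert"], observations))

# Refined hypothesis: "if I drink something with caffeine then I will feel alert" -- no counterexamples here.
print(counterexamples(lambda o: o["has_caffeine"], lambda o: o["felt_alert"], observations))
```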

Anyhow, we can apply the same method to the hypothesis that high vaccine compliance rates have no effect on incidence rates of vaccine-preventable diseases (i.e., the denial of herd immunity).  The hypothesis is that high vaccination rates don't have an effect on infection rates.  The conditional hypothesis is "if a population has a high vaccination rate then its infection rate will be the same as a population with a low vaccination rate (ceteris paribus)."  Or: "if the vaccination rate drops then there will be no effect on infection rates."

[Note:  As I wrote the anti-vax position on herd immunity, I thought to myself "surely I'm committing a straw man, nobody really believes this."  Alas, I was wrong...]

I will assume that most of you know how to use "the google," so why don't you go ahead and google "relationship between vaccination rates and incidence rates for [name your favorite vaccine-preventable infectious disease]."  Well?  You will find that there is a very strong inverse relationship between a population's vaccination rate for a vaccine-preventable disease and the incidence rate for that disease.
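To make the "inverse relationship" claim concrete, here's a sketch that computes a correlation coefficient on entirely made-up data (the coverage and incidence numbers below are illustrative placeholders, not figures from any real dataset; real numbers would come from the kind of search described above):

```python
import statistics

# Made-up (coverage %, cases per 100 000) pairs for hypothetical communities.
coverage  = [98, 96, 94, 91, 88, 84, 80]
incidence = [0.2, 0.5, 1.1, 2.3, 4.0, 7.5, 12.0]

# Pearson correlation (statistics.correlation requires Python 3.10+).
r = statistics.correlation(coverage, incidence)
print(f"Correlation between coverage and incidence: {r:.2f}")  # strongly negative on this toy data
```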

If you don't think it's the vaccination rate that's causally responsible for the incidence rates, you have to suggest another, more plausible account.  What is it?  Hand-washing?  Diet?  The problem with these is that there's no evidence that in the last 10 years people in California, Oregon, and parts of the UK, where outbreaks of various vaccine-preventable diseases have occurred, have changed their hand-washing and/or dietary habits.  They have, however, changed their vaccine compliance rates...for the worse.  Hmmm...

If you still think herd immunity is a myth, please provide in the comments section your conditional hypothesis explaining why, when vaccination rates go down in first-world populations, the incidence rate of the same vaccine-preventable disease goes up.  What is your proposed causal mechanism?  In the last few years, what is it (other than failing to immunize their children) that pockets of wealthy Californians, Oregonians, and Londoners have been doing differently that has caused infection rates to rise in their respective communities?
