Tuesday, November 26, 2013

Links for Slippery Slope Lecture




Bill O'Reilly Slippery Slope

Slippery Slope Arguments

Introduction
In the previous posts we looked at argument schemes that are typically employed in factual matters: generalizations, polls, general causal reasoning, particular causal reasoning, and the argument from ignorance. In this next section we'll look at common argument schemes used in normative arguments (i.e., arguments having to do with values). Check. it. out...

Slippery-Slope Argument
A slippery slope argument is one where it is proposed that an initial action will initiate a causal cascade ending with a state of affairs that is universally desirable or undesirable.  The implication is that we should (or should not) do the initial action/policy because the cascade of events will necessarily occur.  

A contemporary example of a negative version of the slippery slope argument comes from arguments against marriage equality. Some opponents argue that if same-sex couples are allowed to marry, then there will eventually be no good reasons against people marrying animals, and so society will have to permit that too.

(Yes, people actually make this argument... can I marry my horse?)
Here's a good clip where the slippery slope argument is mentioned explicitly: Video of O'Reilly Factor slippery slope argument. Start at 2:00.

A positive version of the slippery slope argument might be something like a Libertarian argument (over-simplified version): We should treat the principle of self-ownership as the primary governing principle; if we do, taxation and government will wither away; then the market will cease to be distorted and people will act in their own self-interest; people acting in their own self-interest will pull themselves up by their own bootstraps; and a society of self-pulled-up people will have relatively few social problems, thus eliminating many of the existing ones. (Note: We could construct an over-simplified version of just about any political philosophy and show it to be weak.)



So, why are these arguments not very strong? To figure it out, let's look at the underlying structure of a slippery slope argument. Recall that a slippery slope argument is one where it is supposed that one initial event or policy will set off a necessary, unbroken sequence of causal events.

If we formalize it, it will look like this:  

P1:  If A, then B.
P2:  If B, then C.
P3:  If C, then D.
P4:  If D, then E.
P5:  If E, then F.
P6:  (So, if A, then F.)
C:   F is a good thing; therefore we should do A. (Positive conclusion)
C*:  F is a bad thing; therefore we shouldn't do A. (Negative conclusion)

(We can also condense P1-P5 into a single compound premise: if A then B, if B then C, if C then D, ...)

If we think back to the lecture/post on principles of general causal reasoning, we will recall that it takes quite a bit of evidence to establish even a single causal claim (e.g., if A then B). As you might imagine, the longer the causal chain gets, the more difficult it becomes to ascertain that the links along the way hold necessarily and are not open to other possible outcomes.
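
To get a feel for how fast confidence drains out of a long chain, here's a quick back-of-the-envelope calculation in Python. The per-link probabilities are invented for illustration, and I'm assuming the links are independent (which real causal chains rarely are):

    link_probs = [0.9, 0.9, 0.9, 0.9, 0.9]  # invented plausibilities for P1-P5

    chain_prob = 1.0
    for p in link_probs:
        chain_prob *= p  # the chain holds only if every single link holds

    print(round(chain_prob, 2))  # 0.59

Even with five links that are each 90% plausible, the chain as a whole is barely better than a coin flip.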

Returning to our examples: in the first case, one of the causal links depends on an equivalence between the reasons against gay marriage and the reasons against animal marriage. It doesn't take much imagination to come up with arguments for why the two types of prohibitive reasons aren't the same (capacity for mutual informed consent, for starters...). Showing that there is a relevant distinction between the types of reasons breaks the causal chain, thereby rendering the argument weak.

Our over-simplified version of the Libertarian argument relies on a long causal chain that begins with the primacy of the self-ownership principle and reduced taxation and government, and ends with a decrease in prevailing social problems. Along the way there are many suspect causal claims that individually might not stand up to much scrutiny--especially since many of the claims are hotly debated by experts in the respective fields. Since a chain is only as strong as its weakest link, this has a detrimental effect on the overall strength (logical force) of the conclusion.

Upshot: So, what's the overall status of slippery slope arguments? Like many argument schemes, they can be either strong or weak depending on their constituent parts. In the case of slippery slope arguments, a strong one will have highly plausible causal claims all linked together, culminating in a glorious, well-supported conclusion about what we should or should not do. Conversely, a weak slippery slope argument will have one or more weak causal claims among its implied premises.

Links from Lecture on Arguments from Ignorance




Link to Unexplained Escape video

Mumbai Weeping statue
Miracles graph



Pumapunku (Arg from personal incredulity, false dichotomy, arg from ignorance, arg from unqualified authority)
3:20-4:50, 5:10-5:35, 6:10-6:35, 7:30-8:00, 8:40, 12:20, 23:20-24:10
25:35 (gateway) 27:27-28:20

Puma punku AA debunk


Arguments from Ignorance

Introduction
The next argument scheme we will look at is what's known as the argument from ignorance. An argument from ignorance (or argumentum ad ignorantiam, if you want to be fancy) is one that asserts that something is (most likely) true because there is no good evidence showing that it is false. It can also be run the other way, to argue that a claim is (most likely) false because there's no good evidence to show that it's true.

Let's look at a couple of (reasonable) examples:

There's no good evidence to show that the ancient Egyptians had digital computers (this evaluation comes from professional archeologists); therefore, they likely didn't have digital computers.

Or

There's no good evidence to suppose the earth will get destroyed by an asteroid tomorrow (this evaluation comes from professional astronomers); so we should assume it won't, and plan a picnic for tomorrow.

Or

There's no good geological evidence that there was a world-wide flood event (this evaluation comes from professional geologists); therefore, we should assume that one never happened.

Formalizing the Argument Scheme
As you may have guessed, we can formalize the structure of the argument from ignorance:

P1:  There's no (good) evidence to disprove (or prove*) the claim.
P2:  There has been a reasonable search for the relevant evidence by whomever is qualified to do so.
C:    Therefore, we should accept the claim as more probable than not/true.
C*:  Therefore, we should reject the claim as improbable/false.

Good and Bad Use of Argument from Ignorance
The argument from ignorance is philosophically interesting because sometimes the same structure can be used to support opposing positions. The classic example is the debate over the existence of God. Let's look at how both sides can employ the argument from ignorance to try to support their respective positions.

Pro-God Arg 
P1:  You can't find any evidence that proves that God or gods don't exist.
P2:  We've made a good attempt to find disconfirming evidence, but can't find any!
C:   Therefore, it's reasonable to suppose that God or gods do exist.

Vs God Arg 
P1:  You can't show any evidence that God or gods do exist.
P1*:  Any evidence you present can also be explained through natural laws.
P2:  We've made a good attempt at looking for evidence of God's/gods' existence but can't find any! (I even looked under my bed!)
C:  Therefore, it's reasonable to suppose that God/gods don't exist.

This particular case brings out some important issues we studied earlier in the course, such as bias and burden of proof. Not surprisingly, theists will find the first argument convincing while atheists will be convinced by the latter. That, of course, raises the question of burden of proof: when we make a claim for something's existence, is it up to the person making the claim to provide proof? Or does the burden fall on the critic to give disconfirming evidence? For certain questions, your biases will pre-determine your answer.

While in the above issue there is arguably reasonable disagreement on both sides, there are other domains where the argument from ignorance fails as a good argument. As you might guess, this will have to do with the acceptability of P1 (i.e., there is/is no evidence) and P2 (i.e., a reasonable search has been made). Most criticism of arguments from ignorance will focus on P2--that the search wasn't as extensive as the arguer thinks. Generally, we let P1 stand because it is usually an author's opinion to the best of their own knowledge. Recall from the chapter on determining what is reasonable that we typically let personal testimony stand.

We can illustrate a poor argument from ignorance with an example. Claim: there's no evidence to show Obama is American; therefore he isn't American.

Let's dress the argument to evaluate it:
P1:  I've encountered no good evidence to show that Obama is an American citizen.
P2:  Numerous agencies and individuals trained in the search and identification of state documents have been unable to locate any relevant documents.
C:   Obama isn't American (and is a Communist Muslim).

Regarding P1, maybe the arguer really hasn't encountered any evidence, so we'll leave it alone. P2, however, has problems. There have been reasonable searches for the evidence, and the evidence was found. Perhaps the arguer was unaware of it or didn't truly exert him/herself enough. The argument fails because P2 is not acceptable (i.e., it's false).

We also typically find the argument from ignorance used in arguments against new (or relatively new) technologies with regard to safety or efficacy. For example:

We should ban GMOs because we don't know what long-term health effects are.

Dressed:
P1:  I've found no evidence that shows that GMOs are safe for human consumption.
P2:  Those qualified to do studies and evaluate evidence have found no compelling evidence to show that GMOs are safe for human consumption.
C:  Therefore, we should assume GMOs are unsafe and ban them until we can determine they are safe.

If we were to criticize this argument, we'd consider P2. In fact, there have been quite a few long-term studies done by those qualified to assess safety. At this point we will have a debate over the quality of evidence. Some on the anti-GMO side dispute the quality of the evidence (i.e., it was funded by company X, and therefore it is questionable). In a full analysis we'd consider this question in depth, but for our purposes here, we might legitimately challenge the claim that there is no available evidence purporting to demonstrate safety.

As an aside, notice that we can also use the argument from ignorance for the opposite conclusion:  There's no compelling evidence to show that GMOs are unsafe for human consumption in the long-term, therefore, we should continue to make them available/ should not regulate them.

The "team" that wins this battle of arguments from ignorance will have much to do with our evaluation of P2:  That there legitimately is or isn't quality evidence one way or the other.

Final Notes on Arguments from Ignorance
We can look at arguments from ignorance as probabilistic arguments. That is, given that there is little or no evidence for something, what is the likelihood that it still might exist? This is especially pressing for claims that something does exist based on an absence of evidence for its non-existence. However, as Carl Sagan famously said, "absence of evidence is not evidence of absence." In other words, just because we can't find evidence for something doesn't mean that the thing or phenomenon doesn't exist.

On the flip side, this line of argument can also be used to support improbable claims. Consider such an argument for the existence of unicorns or small teapots that circle the Sun: there's no positive evidence that unicorns don't exist or that small teapots don't circle the Sun; therefore we should assume they exist.

At this point we should return to the notion of probability:  Given no positive evidence for these claims, what is the probability that they are true (versus the probability that they aren't)?  It seems that, given an absence of evidence, the probability of there being unicorns is lower than the probability that they do not exist.  Same goes for the teapot.

Typically, in such cases we say that the burden of proof falls on the person making the existential claim. That is, if you want to claim that something exists, the burden is upon you to provide evidence for it; otherwise, the reasonable position is the "null hypothesis." The null hypothesis here just means that we assume no entity or phenomenon exists unless there is positive evidence for its existence. In other words, if I want to assert that unicorns exist, using the argument from ignorance won't do. It's not enough for me to make the claim based on an absence of evidence, because we'd expect some evidence to have turned up by now if there were unicorns (i.e., P2 of the implied argument would be weak).
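
For the curious, here's the probabilistic reading worked out as a toy Bayes' theorem calculation in Python. Every number is invented for illustration; the point is how much the strength of the search (P2) drives the conclusion:

    prior = 0.5          # P(unicorns exist) -- deliberately generous
    p_ev_if_true = 0.95  # if unicorns existed, a thorough search would very likely find evidence
    p_ev_if_false = 0.0  # if they don't exist, no (genuine) evidence turns up

    # We observed NO evidence; update the prior with Bayes' theorem
    num = (1 - p_ev_if_true) * prior
    den = num + (1 - p_ev_if_false) * (1 - prior)
    print(round(num / den, 3))  # ~0.048

    # Now try p_ev_if_true = 0.1 (we'd barely expect evidence even if they existed):
    # the posterior is ~0.47 -- a half-hearted search barely moves us at all.

A thorough, fruitless search makes existence improbable; a weak search tells us almost nothing. That's exactly why criticism of these arguments focuses on P2.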

This brings us to another Carl Sagan quote (paraphrasing Hume): "Extraordinary claims require extraordinary evidence." Or as Hume originally said: "A wise man proportions his belief to the evidence." Claiming that unicorns exist is an extraordinary claim, and so we should demand evidence in proportion to the "extraordinariness" of the claim. This is why an ad ignorantiam argument fails here; it doesn't offer any positive evidence for an extraordinary claim, only an absence of evidence. We'll discuss this principle of proportionality more in the coming section. For now, just keep it in mind when evaluating existential arguments from ignorance.

Monday, November 25, 2013

HW 14A

Ex 10C 1 and 3

The Scientific Method Lecture Notes


Introduction to the Scientific Method in the Context of Critical Thinking


(For an example of real science in action, watch the video)
In the last few lessons we've looked at 5 common argument schemes: generalizations, polling, general causal reasoning, particular causal reasoning, and arguments from ignorance. As luck would have it, these are the most common argument schemes you will find in (good and bad) scientific arguments. Arguments are important to the scientific enterprise because a core activity of science is to provide reasons and evidence (i.e., arguments) for why one hypothesis should be accepted over another. This question of why we should choose one hypothesis over another (or any hypothesis at all) brings up many interesting philosophical issues which (time permitting) we will briefly explore. However, before putting on our philosopher hats, let's put on our lab coats, turn on our Bunsen burners, and take a closer look at the scientific method.


We can break up the scientific method into 5 steps:

Step 1:  Understanding the Issue
In this first step, the goal is simply to determine what it is exactly that we want to know. Usually, it will be a problem that we want solved. Examples might include: What is the mass of an electron? Can vaccines prevent measles? Can Tibetan monks levitate? Is the earth round? Can wi-fi cause health problems? Does the color red make people feel hungry? How do magnets work? Does honey diminish the severity of coughs?

As you can see, some of these issues will involve questions about causation, while others might be about identifying something's properties.


Step 2:  Formulating a Hypothesis
In the next step, we want to formulate a hypothesis that will solve our problem, and the hypothesis must be testable (recall that a non-testable hypothesis is non-falsifiable and thus considered pseudo-scientific).

To illustrate how this works, let's consider the problem of whether honey diminishes the severity of coughs. Our basic hypothesis will be "honey diminishes the severity of coughs."

However, often our hypothesis will extend beyond a simple "yes" or "no". We will want to know why it does or doesn't have a particular effect on a cough. This is known as the "causal mechanism"; i.e., the thing that causes the effect that our hypothesis anticipates. So, if honey diminishes the severity of coughs, we will want to know why. If we don't know why, then it may simply be correlation; we are trying to establish causation. Maybe it's the tea we drink the honey with that causes the diminished severity. Or maybe it isn't honey itself that causes the reduced severity; maybe it's the sugars in honey, and so any sweet substance will do.

Part of establishing causation is ruling out competing hypotheses. So, if someone says that honey diminishes the severity of coughs because the sweetness in honey activates some particular receptor cells that in turn help diminish the severity of the cough, then we can test that. Someone else might say it's because the honey reduces swelling in the throat. We can test that too. Or someone else might say honey has some anti-bacterial or anti-viral compounds which kill the bacterial/viral cause of the cough.

The point is, we need to pick a hypothesis that (preferably) is specific enough to include a causal mechanism. Let's choose the first one.

Hypothesis (h):  Drinking honey can reduce the severity of coughs.
Causal Mechanism:  h because

"the close anatomic relationship between the sensory nerve fibers that initiate cough and the gustatory nerve fibers that taste sweetness, an interaction between these fibers may produce an antitussive effect of sweet substances via a central nervous system mechanism."

Fallacy Alert!  Aruga!  Aruga!  In scientific debates it's very important to hold your opponent to their hypothesis (and also to keep to yours when facing objections or contravening evidence). Changing the hypothesis mid-debate is called moving the goal posts. This is a very common practice among purveyors of pseudo-science and members of anti-science movements.


For example, for years anti-vax groups opposed vaccines because--they hypothesized--thimerosal causes autism. Because this myth became so pervasive (despite overwhelming evidence to the contrary), and in order to keep compliance rates high enough for herd immunity, many national health departments switched to the more expensive thimerosal-free versions of the vaccines. Contrary to the anti-vax hypothesis, the removal of thimerosal from vaccines was followed by autism rates actually going up rather than down! (There's some weak evidence to suggest that vaccines can actually inhibit some kinds of autism.)

Now that thimerosal has been removed from vaccines and the anti-vax hypothesis has been proven empirically false, what do you think the response of the anti-vax crowd is? If you guessed "oh, let's support vaccines now," you were sleeping for the last 2 weeks of this class! The response was to "move the goal posts." Now it's "too many too soon!" or "it's got aluminum in it!" or "it's got mercury in it!"


Step 3:  Identifying the Implications of the Hypothesis
In the next step we need to set out what we'd expect to see (i.e., our observations) if our hypothesis is correct. It's very important that this is done before the experiments are conducted. In the case of the honey, we'd expect to see that (a statistically significant number of) people who have a cough and take honey will cough less frequently and violently than a comparable group of people with a cough who don't take honey (or any other "medicine"). In the case of thimerosal, we might say: if it's true that thimerosal causes autism, then when we remove thimerosal from vaccines we should expect to see autism rates decline.

We can formalize this structure:

If the hypothesis (h) is true, then x will occur.  (x is our expected observable outcome).  

So, in the case of honey, if our hypothesis is true, then those who drink honey will have reduced severity of coughing compared to a control group.

Step 4:  Testing the Hypothesis
As you might expect, once we've set up our hypothesis and established the anticipated observable effects that would confirm the hypothesis, we test!

Recall step 2: when we form the hypothesis, we should ensure that the hypothesis is testable. That is to say, we can say in advance what will constitute observable confirmation or disconfirmation of the test. A couple of notes on why we must do this in advance: (1) it prevents retrofitting the data to fit the hypothesis; (2) it prevents the "moving of the goal posts."

Testing in Principle vs Testing in Practice
Finally, we should be aware that not all hypotheses will be testable in practice, but they must be testable in principle. For example, we can construct a hypothesis of what will happen if a large asteroid hits the earth, but we don't need to actually destroy half the earth to confirm the hypothesis that such an impact would indeed destroy half the earth. In some cases, running a computer simulation will do!
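
Speaking of simulations, here's a toy version of the honey test in Python. The cough counts are fabricated (a real trial would also need randomization, blinding, and so on); the sketch just shows how a permutation test asks whether an observed difference between groups could plausibly be chance:

    import random
    from statistics import mean

    random.seed(42)

    # Fabricated nightly cough counts: 50 honey-takers, 50 controls
    honey = [max(0, round(random.gauss(8, 3))) for _ in range(50)]
    control = [max(0, round(random.gauss(10, 3))) for _ in range(50)]

    observed = mean(control) - mean(honey)

    # If the labels "honey"/"control" didn't matter, how often would
    # shuffling them produce a difference at least this big?
    pooled = honey + control
    hits, trials = 0, 10000
    for _ in range(trials):
        random.shuffle(pooled)
        if mean(pooled[50:]) - mean(pooled[:50]) >= observed:
            hits += 1

    print("observed difference:", round(observed, 2), "coughs")
    print("proportion of shuffles at least that big:", hits / trials)

If shuffled labels almost never reproduce the observed difference, chance becomes an implausible explanation--which is exactly the "not due to chance" worry from our earlier work on causal reasoning.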


Step 5:  Reevaluating the Hypothesis 
In step 4, I emphasized that the predicted confirmatory results of the hypothesis must be stated in advance to avoid retrofitting and moving the goal posts. However, this does not mean that once we have conducted a test we can't modify the test or the hypothesis. This is perfectly legitimate, but it must be done in a way that recognizes the shortcomings of the original test and/or hypothesis.

Fallacy Alert!  Aruga!  Aruga!  Aruga!  When the implications of our hypothesis are confirmed, we must be careful not to immediately conclude that our hypothesis is true. From the fact that our anticipated effect occurred, it doesn't necessarily follow that our hypothesis is true. This is called the fallacy of affirming the consequent, which looks like this:

P1  If h, then x.  (In fancy talk, h is called the antecedent and x is called the consequent.)

P2  x occurred.
C   Therefore, h is true. 

To see why h doesn't necessarily follow given that P2 is true (i.e., given that we've "affirmed" the consequent), consider the following case.

P1  If it's raining, it's cloudy.

P2  It's cloudy.
C   Therefore, it's raining.

Just because it's cloudy doesn't mean it's raining.  It can be cloudy without it being rainy.  It can also be partially cloudy with chances of sunshine in the evening, followed by overcast skies at night... you get the point.  
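
We can even check the invalidity mechanically by enumerating every possible truth assignment (a minimal sketch in Python):

    # Check every truth assignment for h and x
    for h in (True, False):
        for x in (True, False):
            p1 = (not h) or x  # "if h then x" as a material conditional
            p2 = x             # "x occurred"
            if p1 and p2:
                print("both premises true while h =", h)

    # The output includes a case where h = False: both premises can be
    # true while the conclusion ("h is true") is false. One counterexample
    # is all it takes -- the argument form is invalid.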

In relation to scientific hypotheses, we can imagine the following scenario: someone suggests a hypothesis h and anticipates a certain observable consequence x. But does it follow, just because x occurred, that the hypothesis is true? Nope. There are many possible alternative reasons (or causes) besides h for which x might have occurred.

If we think back to the sections on general causal reasoning, we can see why. If the hypothesis is a causal one, then there are several steps we need to go through before we can attribute causality. Maybe there's only a statistical relationship between the two variables (correlation)? Maybe there's some other, better explanation for why x is occurring? Maybe the methodology was flawed (no double blinding = placebo effect, problems with representativeness and sample size, etc.).

Summary:  Steps of the Scientific Method
1.  Understand the Problem that requires a solution or explanation.
2.  Formulate a hypothesis to address the problem. 
3.  Deduce the (observable) consequences that will follow if the hypothesis is correct. 
4.  Test the hypothesis to see if the consequences do indeed follow.
5.  Reevaluate (and possibly reformulate) the hypothesis. 

Links for Lecture on the Scientific Method



The Test and Explanation


The Skeptic
Physics:

The Danish Study:

SIDS
SIDS Study


Novella's analysis of the Danish study

Incidence rates of various communicable diseases pre and post vaccine

Novel Prediction:
self-tickling


Faith And Healing:  As reported
The study abstract

Subway Bread:
http://www.alternet.org/food/500-other-foods-besides-subway-sandwich-bread-containing-yoga-mat-chemical

Quantity Matters: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1770067/

Placebo

Nocebo

Friday, November 22, 2013

Take-Home Final Due by Dec. 11 at Midnight

General Instructions for the Group Work/Take-Home Final
(a)  Don't foolishly give away marks.  Please read and follow instructions carefully.  There will be no second chances to resubmit omitted questions.
(b)  The due date is December 11 at 11:59:59pm.
(c)  Please submit only one assignment per group.  Be sure to decide amongst yourselves who the "submitter" will be to avoid complications.  Submit it to my gmail address.
(d)  Read and follow the instructions carefully.
(e)  Barring extenuating circumstances, please submit the assignment via email.  I will confirm its receipt.  (If submitted before the deadline, I'll confirm as soon as I get the email).  If you do not get a confirmation by noon on Thursday Dec 12, please contact me immediately.
(f)  Read and follow the instructions carefully.
(g)  Don't forget to do Part VII (Peer Evaluation).
(h)  Have fun.

Part I:
(a) Identify the implied argument (both premises and conclusion) and put it into standard form; (b) evaluate the argument by looking at (i) premise acceptability, (ii) premise relevance, and (iii) sufficiency; (c) what logical fallacy is being committed? (d) what additional information would have to be provided in order to make the argument strong?




Part II:  Arguments from Analogy
(a) Evaluate the argument as we have done in class for arguments using this scheme: (b) what are the claimed similarities and how relevant are they? (c) what are the differences and how relevant are they? and (d) based on your evaluation, how strong is the argument?

Argument 1 (This is a meme, not an actual study)
An economics professor at Texas Tech said he had never failed a single student before but had, once, failed an entire class. The class had insisted that socialism worked and that no one would be poor and no one would be rich, a great equalizer. The professor then said ok, we will have an experiment in this class on socialism. All grades would be averaged and everyone would receive the same grade so no one would fail and no one would receive an A.

After the first test the grades were averaged and everyone got a B. The students who studied hard were upset and the students who studied little were happy. But, as the second test rolled around, the students who studied little had studied even less and the ones who studied hard decided they wanted a free ride too; so they studied little ...

The second test average was a D! No one was happy. When the 3rd test rolled around, the average was an F. The scores never increased as bickering, blame, and name-calling all resulted in hard feelings, and no one would study for anyone else. All failed, to their great surprise, and the professor told them that socialism would ultimately fail because when the reward is great the effort to succeed is great, but when a government takes all the reward away, no one will try or succeed.

Argument 2
The federal budget is just like a family budget, and we in government must tighten our belts and live within our means just like families do.

Part III:  Critical Thinking Smackdown
Part A
Choose two of the following articles and (a) identify as many instances as you can of logical fallacies, poor reasoning, poor scientific reasoning, empirically dubious assertions, and issues of burden of proof/proportionality; (b) give a brief one- or two-sentence justification of your assessment; and (c) suggest for each case what additional information would have to be included to remedy the problem.

Cinnamon and Honey
chemtrails
Turmeric

Part B
From the comments sections of the articles, find at least one instance of (a) the post hoc ergo propter hoc fallacy, (b) the naturalistic fallacy, (c) an illegitimate argument from authority, and (d) one more fallacy or instance of poor reasoning of your choice.

Part IV: Identify that Argument Scheme!
(a) Identify the argument scheme and rewrite the argument in its standard form; (b) explain why these instances fail as good instances of the argument scheme; and (c) suggest what additional information could be included to strengthen the claim (where possible).

(1)  There's no good evidence to show that aspartame is safe for human consumption, therefore we shouldn't consume it.

(2)   82 percent of people with tattoos prefer hot weather over cold, compared with 63 percent of people in general.  Therefore, preferring hot weather causes people to get tattoos.  Based on a survey of 114 people with tattoos and 579 people in general.

(3)  I get headaches after drinking diet soda, therefore aspartame causes headaches.

(4)  About 1/2 the students I know receive government financial aid, therefore it's reasonable to conclude that 1/2 the students at UNLV receive government financial aid and are socialists.

(5)  There are several cases where multiple people have sighted an object in the sky which they couldn't identify, therefore those objects are alien spacecraft.

(6)  All my philosophy professors think Aristotle is better than Plato, therefore most philosophy professors must prefer Aristotle to Plato.

Part V:  Cognitive Biases
Watch this video from about 3:15 to 7:55. Name at least 3 cognitive biases to explain how the Third Eagle of the Apocalypse might have gone astray in his reasoning. Give a couple of specific instances of each of those cognitive biases from the video.




Part VI:  Bonus Question (worth up to an additional 5% of your score)
Do the following analysis for the meme below: (a) rewrite the argument in its appropriate scheme; (b) fact-check the statistics; (c) investigate approximately how much is spent on anti-terrorism; (d) taking the facts you have uncovered into consideration, (i) evaluate the argument as you would for any argument of this type and (ii) employing the principle of charity, assess how strongly the conclusion is supported. (Note: I haven't formed my own opinion yet, so I'm genuinely curious what you guys think.)





Part VII:  Peer Evaluation
Every student, in a separate private email, must submit an evaluation of each of your group members' contributions to the group project.

According to the following criteria rank each group member on a scale of 1 to 4, where 1=strongly disagree, 2=disagree, 3=agree, 4=strongly agree.

(A) Attends group meetings regularly and arrives on time.
(B) Contributes meaningfully to group discussions.
(C) Completes group assignments on time.
(D) Prepares work in a quality manner.
(E) Demonstrates a cooperative and supportive attitude.
(F) Contributes significantly to the success of the project.

If your peers give you an average evaluation score of less than 3, your score for the assignment will be, at minimum, a full letter grade lower than the group's score, possibly even lower.



Wednesday, November 20, 2013

HW 13B

p. 246 Ex. 9C
(b)-(e)

Be sure to put the causal claims into standard form before analyzing them.
(P1)  There is a correlation between X and Y.
(P2)  The correlation isn't due to chance.
(P3)  The correlation between X and Y isn't due to some mutual cause or other cause.
(P4)  Y is not the cause of X.
(C)    X causes Y.

General Causal Reasoning

Introduction
Being able to separate correlation from causation is the cornerstone of good science. Many errors in reasoning can be distilled to this mistake. Let me preface this section by saying that making this distinction is by no means a simple matter, and much ink has been spilled over the issue of whether it's even possible in some cases. However, just because there are some instances where the distinction is indiscernible or difficult to make doesn't mean we should make a (poor) generalization and conclude that it is so in all instances.

We can think of general causal reasoning as a sub-species of generalization. For instance, we might say that low-carb diets cause weight loss. That is to say, diets that are lower in the proportion of carbohydrate calories than other diets will have the effect of weight loss on any individual on that diet. Of course, we probably can't test every single possible low-carb diet, but given a reasonable sample size we might make this causal generalization.

A poor causal argument commits the fallacy of confusing correlation with causation, or just the causation-correlation fallacy. Basically, this is when we observe that two events occur together, either statistically or temporally, and so attribute to them a causal relationship. But just because two events occur together doesn't necessarily imply that there is a causal relationship.

To illustrate: the rise and fall of milk prices in Uzbekistan closely mirrors the rise and fall of the NYSE (it's a fact!). But we wouldn't say that the rise and fall of Uzbek milk prices causes the NYSE to rise and fall, nor would we make the claim the other way around. We might plausibly argue that there is a weak correlation between the NYSE index and the price of milk in Uzbekistan, but it would take quite a bit of work to demonstrate a causal relationship.
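
If you don't believe that unrelated series can look correlated, here's a little experiment in Python. The "milk prices" and "stock index" are just two independent random walks, so any correlation between them is chance by construction:

    import random
    from statistics import mean

    random.seed(7)

    def random_walk(n):
        level, walk = 100.0, []
        for _ in range(n):
            level += random.gauss(0, 1)  # each day drifts randomly from the last
            walk.append(level)
        return walk

    def correlation(xs, ys):
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    # How often do two completely independent "price histories" look correlated?
    strong = 0
    for _ in range(1000):
        milk, nyse = random_walk(250), random_walk(250)
        if abs(correlation(milk, nyse)) > 0.5:
            strong += 1
    print(strong, "of 1000 independent pairs had |r| > 0.5")

Trending series are especially prone to this, which is why "wacky statistical relations" between them are so easy to find.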

Here are a couple of interesting examples:
Strange but true statistical correlations


A more interesting example can be found in the anti-vaccine movement. This example is an instance of the logical fallacy called "post hoc ergo propter hoc" (after, therefore because of), which is a subspecies of the correlation/causation fallacy. Just because an event regularly occurs after another doesn't mean that the first event is causing the second. When I eat, I eat my salad first, then my protein, but my salad doesn't cause me to eat my protein.

Symptoms of autism become apparent about 6 months after the time a child gets their MMR vaccine. Because one event occurs after the other, many naturally reason that the prior event is causing the later event. But as I've explained, just because an event occurs prior to another event doesn't mean it causes it.

And why pick out one prior event out of the 6 months worth of other prior events?  And why ignore possible genetic and environmental causes?  Or why not say "well, my son got new shoes 6 months ago (prior event) therefore, new shoes cause autism"?  Until you can tease out all the variables, it's a huge stretch to attribute causation just because of temporal order.  

Constant Condition, Variable Condition, and Composite Cause
Ok, we're going to have to introduce a little bit of technical terminology to be able to distinguish between some important concepts. I don't want to get too caught up in making the distinctions; I'm more concerned with you understanding what they are and (hopefully) the role they play in evaluating causal claims.

A constant condition is a causal factor that must be present if an event is to occur. Consider combustion. In order for there to be combustion there must be oxygen present. But oxygen on its own doesn't cause combustion. There's oxygen all around us but people aren't continuously bursting into flames. However, without oxygen there can be no combustion. In the case of combustion, we would say that oxygen is a constant condition. That is, it is necessary for the causal event to occur, but it isn't the thing that initiates the causal chain.

When we look at the element or variable that actually initiates a causal chain of events, we call it the variable condition.  In the case of combustion it might be a lit match, a spark from electrical wires, or exploding gunpowder from a gun.  There can be many variable conditions.  

The point is you can't start a fire without a spark.  This gun's for hire.  You can't start a fire without a spark.  Even if we're just dancing in the dark.   Of course, you could also start a fire with several other things.  That's why we call it the variable condition.  But despite all the possible variable conditions, there must be oxygen present...even if we're just dancing in the dark.

As you might expect, when we consider the constant and the variable condition together, we call it the composite cause. Basically, we are recognizing that for causal events there are some conditions that must be in place across all variable conditions, and there are some other conditions that have a direct causal effect but that could be "switched out" with other conditions (like the different sources of a spark).

Separating constant conditions from variable conditions can be useful in establishing policy. For example, with nutrition: if we know that eating a certain type of diet can cause weight loss (and we want to lose weight), we can vary our diet's composition or quantity of calories (variable conditions) in order to lose weight. The constant condition (that we will eat) we can't do much about.

Conversely, we can't control the variable conditions that cause the rain, but by buying umbrellas we can control one of the constant conditions (exposure) through which rain gets us wet. (Water is wet. That's science!)

The Argument Structure of a General Causal Claim
Someone claims X causes Y. But how do we evaluate the claim? To begin, we can use some of the tools we already acquired when we learned how to evaluate generalizations. To do this, we can think of general causal claims as a special case of generalization (i.e., one about a causal relationship).

I'm sure you all recall that to evaluate a generalization we ask:

(1) Is the sample representative? That is, (a) is it large enough to be statistically significant, and (b) is it free of bias (i.e., does it incorporate all the relevant sub-groups included in the group you are generalizing about)?
(2) Does X in the sample group really have the property Y (i.e., the property of causing event Y to occur)?

Once we've moved beyond these general evaluations, we can look at specific elements of a general causal claim. To evaluate the claim, we have to look at the implied (but in good science, explicit) argument structure that supports the main claim; its premises are actually an expansion of (2) into further aspects of evaluation.

A general causal claim has 4 implied premises. Each one serves as an element to scrutinize.

Premise 1:  X is correlated with Y.  This means that there is some sort of relationship between event/object X and event/object Y, but it's too early to say it's causal.   Maybe it's temporal, maybe it's statistical, or maybe it's some other kind of relationship.  

For example, the early germ theorist Koch suggested that we can determine whether a disease is caused by micro-organisms if those micro-organisms are found on sick bodies and not on healthy bodies. There was a strong correlation but not a necessary causal relation, because for some diseases people can be carriers yet immune to the disease.

In other words, micro-organisms might be a constant condition of a disease causing sickness, but there may be other important variable causes (like environment or genetics) we must consider before we can say that a particular disease's micro-organisms cause sickness.

Premise 2:  The correlation between X and Y is not due to chance. As we saw with the Uzbek milk prices and the NYSE, sometimes events can occur together but not have a causal relation--the world is full of wacky statistical relations. Also, we are hard-wired to infer causation when one event happens prior to another. But as you now know, that would be committing the post hoc ergo propter hoc fallacy.

Premise 3:   The correlation between X and Y is not due to some mutual cause Z.  Suppose someone thinks that "muscle soreness (X) causes muscle growth (Y)."  But this would be mistaken because it's actually exercising the muscle (Z) that causes both events.

In social psychology there was an interesting reinterpretation of a study that demonstrates this principle. An earlier study showed a strong correlation between overall level of happiness and degree of participation in a religious institution. The conclusion was that participation in a religious institution causes happiness.

However, a subsequent study showed that there was a 3rd element (sense of belonging to a close-knit community) that explained the apparent relationship between happiness and religion. Religious organizations are often close-knit communities, so it only appeared as though it was the religious element that caused a higher happiness appraisal. It turns out that there is a more general explanation, of which participation in a religious organization is an instance.
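
Here's a toy simulation of the mutual-cause situation in Python. The numbers are invented; Z, X, and Y just stand in for sense of community, religious participation, and happiness:

    import random
    from statistics import mean

    random.seed(1)

    def correlation(xs, ys):
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    # Z (sense of close-knit community) drives BOTH of the other variables
    z = [random.uniform(0, 10) for _ in range(1000)]
    x = [zi + random.gauss(0, 1) for zi in z]  # "religious participation"
    y = [zi + random.gauss(0, 1) for zi in z]  # "happiness"

    print("corr(X, Y), ignoring Z:   ", round(correlation(x, y), 2))  # strong

    # "Control for" Z by subtracting its contribution out
    # (we can do this exactly here because we built X and Y as Z + noise)
    x_res = [xi - zi for xi, zi in zip(x, z)]
    y_res = [yi - zi for yi, zi in zip(y, z)]
    print("corr(X, Y), controlling Z:", round(correlation(x_res, y_res), 2))  # ~0

X and Y march in lockstep even though neither causes the other; once Z is accounted for, the relationship evaporates.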

Premise 4:  Y is not the cause of X. This issue is often very difficult to disentangle. It is known as trying to figure out the direction of the arrow of causation--and sometimes it can point both ways. For instance, some people say that drug use causes criminal behavior. But in a recent discussion I had with a retired parole officer, he insisted that it's the other way around. He says that youths with a predisposition toward criminal behavior end up taking drugs only after they've entered a life of crime. I think you could plausibly argue the arrow can point in both directions depending on the person, or maybe even within the same person (i.e., a feedback loop). There's probably some legitimate research on this matter beyond my musings and the anecdotes of one officer, but this should suffice to illustrate the principle.

Conclusion:  X causes Y.

Premise 2, 3, and 4 are all about ruling out alternative explanations.  As critical thinkers evaluating or producing a causal argument, we need to seriously consider the plausibility of these alternative explanations.  Recall earlier in the semester we looked briefly at Popperian falsificationism.   We can extend this idea to causation:  i.e., we can never completely confirm a causal relationship, we can only eliminate competing explanations.

With that in mind, the implied premises in a general causal claim provide us a systematic way to evaluate the claim in pieces so we don't overlook anything important.  In other words, when you evaluate a general causal claim, you should do so by laying out the implied structure of the argument for the claim and evaluating them in turn. 

Monday, November 18, 2013

Things to Think About

http://opinionator.blogs.nytimes.com/2013/11/16/the-insanity-of-our-food-policy/?hp&rref=opinion&_r=1&

Homework 13A

Exercise 9B p. 240
(a), (b), (c)

Lecture Notes for Polling

Polling
Biggest Polling Fails in (US) History

Polling is a subset of generalization, so many of the rules for evaluation and analysis will be the same as in the previous section on generalizations. Polling is a generalization about a specified population's beliefs or attitudes. For example, during election campaigns, the populations in important "battleground" states are usually polled to find out what issues are important to them. Upon hearing the results, the candidate will then remove what's left of his own spine and say whatever that population wants to hear. (Meh! Call me a cynic...)

Suppose I were to conduct a poll of UNLV students to determine their primary motivation for attending university.  To begin the evaluation of the poll we'd need to know 3 things:

(a)  The sample: who is in the sample (representativeness) and how big the sample was.
(b)  The population: what group I'm trying to make the generalization about.
(c)  The property in question: what belief, attitude, or value I'm trying to attribute to the population.

Recall from the previous section that generalizations can be interpreted as having an (implicit or explicit) argument form. Let's instantiate this argument structure with a hypothetical poll. Suppose I want to poll UNLV students with the question, "should critical thinking 102 be a graduation requirement?" Because I have finite time and energy, I can't ask each student at the university. Instead I'll take a sample and extrapolate from that. My sample will be the students in my class.

P1.  A sample of 36 students from my class is a representative sample of the general student population.
P2.  65% of the students in my class (i.e., the sample) said they agree that critical thinking 102 should be a graduation requirement.
C.  Therefore, we can conclude that around 65% of UNLV students think that critical thinking 102 should be a graduation requirement.

There are 2 broad categories of analysis we can apply to the poll results:

Sampling Errors
Questions about sampling errors apply to P1. They are basically: (a) is the sample size large enough to be representative of the group, and (b) does the sample avoid any biases (i.e., does it avoid under- or over-representing one group in a way that isn't reflective of the general population)?

Regarding sample size, national polls generally require a (representative) sample size of about 1,000, so we should expect that a poll about the UNLV population could get by with quite a bit less than that. Aside from that, (a) is self-explanatory and I've discussed it above, so let's look a little more closely at (b).

The question here is whether the students in my class accurately represent all important subgroups in the student population. For example, is the sample representative of UNLV's general population's ratios of ethnic groups, socio-economic groups, and majors? You might find that there are other important subgroups that should be captured in a sample, depending on the content of the poll.

Someone might plausibly argue that the sample isn't representative because it disproportionately represents students in their 1st and 2nd years.

We can ask a further question about how the group was chosen. For example, if I make filling out the survey voluntary, then there's a possibility of bias. Why? Because it's possible that people who volunteer for such a survey have a strong opinion one way or another. This means that the poll will capture only those with strong opinions (or those who just generally like to give their opinion) but leave out the Joe-Schmo population who might not have strong feelings or might be too busy facebooking on their got-tam phone to bother to do the survey.

In order to protect against such sampling errors, polls should engage in random sampling. That means no matter what sub-group someone is in, they have an equal probability of being selected to do the survey. We can also take things to a whole. nuva. level. when we use stratified sampling. With stratified sampling we make sure a representative proportion of each subgroup is contained in the general sample. For example, if I know that about 30% of students are 1st-year students, then I'll make sure that 30% of my sample randomly samples 1st-year students. See the sketch below.
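
Here's a quick sketch of the difference between simple random sampling and stratified sampling in Python. The student body is made up (30% 1st-years, 70% upper-years):

    import random

    random.seed(3)

    # Hypothetical student body: 30% 1st-years, 70% upper-years
    population = [("1st-year", i) for i in range(300)] + \
                 [("upper-year", i) for i in range(300, 1000)]

    sample_size = 100

    # Simple random sampling: every student equally likely to be picked
    srs = random.sample(population, sample_size)

    # Stratified sampling: lock each subgroup in at its population share
    shares = {"1st-year": 0.30, "upper-year": 0.70}
    stratified = []
    for year, share in shares.items():
        members = [s for s in population if s[0] == year]
        stratified += random.sample(members, int(share * sample_size))

    def first_years(sample):
        return sum(1 for year, _ in sample if year == "1st-year")

    print("1st-years in simple random sample:", first_years(srs))        # varies around 30
    print("1st-years in stratified sample:   ", first_years(stratified)) # exactly 30

Both methods are unbiased, but stratification removes the luck of the draw for the subgroups you know about in advance.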

Another thing to consider under sampling error is the margin of error. The margin of error (e.g., +/-5%) tells us the range around the reported number within which the true population value is likely to fall. Margin of error is important to consider when there is a small difference between competing results. For example, suppose a survey says 46% of students think Ami should be burned at the stake while 50% say Ami should be hailed as the next messiah. One might think this clearly shows Ami's well on his way to establishing a new religion, but we'd be jumping the gun until we looked at the poll's margin of error.

Suppose the margin of error is +/- 5%. This means that those who want to burn Ami at the stake could actually be as many as 51% (46+5), and those who want to make him the head of a new religion could be as few as 45% (50-5). Since the two ranges overlap, the poll can't tell us which opinion actually has the majority. Ami might have to wait a few more years for world domination.
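
For the curious, here's the standard back-of-the-envelope formula for the margin of error of a polled proportion, sketched in Python. The sample size n = 400 is invented, and z = 1.96 assumes the usual 95% confidence level:

    from math import sqrt

    def margin_of_error(p, n, z=1.96):  # z = 1.96 corresponds to 95% confidence
        return z * sqrt(p * (1 - p) / n)

    n = 400  # hypothetical number of students polled
    for p in (0.46, 0.50):
        moe = margin_of_error(p, n)
        print(f"{p:.0%} +/- {moe:.1%}: between {p - moe:.1%} and {p + moe:.1%}")

With 400 respondents the margin of error comes out to roughly +/- 5%, which is why the two results above overlap and the poll can't settle the question.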

As I mentioned at the beginning of this section, questions about sampling error are all directed at P1; i.e., is the sample representative of the general population about which the general claim will be made? Next we will look at measurement errors, which have to do with the second premise (i.e., that the people in the sample actually do have the beliefs/attitudes/properties attributed to them in the survey).

Measurement Errors
Measurement errors have to do with scrutinizing the claim that the sample population actually has the beliefs/attitudes/properties attributed to them in the survey. Evaluating polls for measurement errors generally has to do with how the information was asked for and collected, how the questions were worded, and the environmental conditions at the time of the poll.

As a starting point, when we are looking at polls that are about political issues, we should generally be skeptical of results--especially when polling agencies that are tied to a political party or ideologies produce competing poll results that conform with their respective positions.  In short, we should be alert to who is conducting the poll and consider whether there may be any biases.

One specific type of measurement error arises out of semantic ambiguity or vagueness. For example, suppose a survey asks if you drink "frequently". This is a subjective term and could be interpreted differently: for some people it might mean once a week, for others once a day. A measurement error will be introduced into the data unless this vagueness is cleared up. Because more people probably think of "frequent drinking" as "more than what I personally drink", the results will be artificially low. They also will not be very meaningful, because the responses don't mean the same thing.

Another type of measurement error arises when we consider the medium by which the questions are asked. Psychology tells us that people are more likely to tell the truth when asked questions face to face, less so when asked over the phone, and even less so when asked in groups (groupthink). These considerations will introduce measurement errors; that is, they will cast doubt on whether the members of the sample actually have the quality/view/belief being attributed to them.

When evaluating measurement accuracy we should also consider when and where the poll took place. For example, if, during the exam period, students are asked whether they think school is (generally) stressful, probably more will answer in the affirmative than if they were asked during the 1st week of the semester.

Also, going back to our poll of students concerning having critical thinking as a graduation requirement, we might argue that the timing is influencing the results. The sample is taken from students currently taking the class. Perhaps it's too early in their careers to appreciate the course's value; yet if we asked students who had already taken the course and have had a chance to enjoy the glorious fruits of the class, the results might be different.

Finally, we should be alert to how second-hand reporting of polls can present the results in a distorted way.  Newspapers and media outlets want eyeballs, so they might over-emphasize certain aspects of the poll or interpret the results in a way that sensationalizes them.  In short, we should approach with a grain of salt polls that are reported second-hand.

To summarize: for polling we want to evaluate (1) whether the individuals in the sample actually have the values/attitudes/beliefs being attributed to them (measurement errors), and (2) whether the sample is representative of the population and free of bias (sampling errors).

Links for Lecture on General Causal Reasoning


1.  GRE and Philosophy
LSAT and Philosophy


2.  The Need for a Control:  How can you know if an intervention is working if you don't compare it to something similar?
http://www.wired.com/wiredscience/2013/11/jpal-randomized-trials/

3.  Isolating the cause (more need for a control).

4.  Direction of causation
http://www.hsph.harvard.edu/hicrc/firearms-research/guns-and-death/

5.  Causation and correlation:
http://www.ncbi.nlm.nih.gov/pubmed/24083600/
http://www.cdc.gov/sids/

6.  Causation and correlation:

7.  How are the results being reported? What is being measured? Direction of Causation?
Sampling of headlines: https://www.google.com/search?q=study+shows+effects+of+reading+classical+literature+&oq=study+shows+effects+of+reading+classical+literature+&aqs=chrome..69i57.27192j0j4&sourceid=chrome&espv=210&es_sm=91&ie=UTF-8

The actual study: http://www.sciencemag.org/content/342/6156/377.abstract

8.  MMR vaccines
To know if vaccines cause autism, what would you need to know?
What about risk of anaphylaxis?

Post-Wakefield study measles case in UK: http://news.bbc.co.uk/2/hi/uk_news/england/5081286.stm
In the US: http://theness.com/neurologicablog/index.php/measles-outbreak-thanks-jenny/

9. Knee surgery NYT
Knee surgery WSJ

10. Causation/Falsificationism

11. Novella on Acupuncture

Sunday, November 17, 2013

Links for Lecture on Polling


1.  How you ask the question and how the audience interprets the question can affect the results of a poll.

2.  There can be a difference in how the results of a poll are reported and the quality or property that is actually being measured by the poll.  Also, how are the target groups defined?
http://online.wsj.com/article/PR-CO-20131023-913504.html