Monday, December 2, 2013

Arguments From Analogy

An argument from analogy draws a conclusion about one thing based on its comparison to another.  Arguments from analogy are often (but not always) used to argue about a complex or poorly understood subject by comparing it to one that is simpler or better understood by the audience.  They are also often used to argue for a conclusion in a controversial case based on what is accepted in an uncontroversial case, by claiming that the characteristics of the two cases are relevantly the same.

One of the most popular analogies is between minds/brains and computers.  (As an aside, it is only an argument from analogy if a conclusion is drawn.  Sometimes analogies are used merely as explanatory aids, not as vehicles for an argument.)

Perhaps the most famous argument from analogy is the Argument from Design for the existence of God/gods.  There are many versions of this argument, but to give us a template, here's one:

P1.  The mechanics and inner workings of a watch are so mechanically complicated that they must have been designed by an intelligent being.

P2.  Watches have a purpose, so they must have been designed by an intelligent being.
C1.  As with a watch, life (or a particular organ or organism) is complex and has a purpose; therefore, it must have had an intelligent designer.
HMC.  Therefore, intelligent God/gods exist.

Let's look at the underlying formal structure of an argument from analogy and see how to evaluate one within the context of this famous example.

The formal structure of an argument from analogy looks like this:

P1.  Object 1 (or Set of Objects 1) and Object 2 both have properties p, q, r...
P2.  Object 1 (or Set of Objects 1) has property z.
HP*:  Properties p, q, r... are relevant to an object having property z
C.   Since Object 1 and Object 2 share properties p, q, and r and Object 1 has property z, then Object 2 must also have property z. 

We can also do a simplified version of the formal structure (in your argument evaluations, feel free to use either):

P1.  Object/situation 1 and object/situation 2 are alike in that they both have properties p, q, r.
P2.  Since we agree that object/situation 1 has property z (e.g., it is good/bad, legal/illegal, should do it/shouldn't do it).
C.  Therefore, since object/situation 2 has the same relevant properties as object/situation 1, we should apply the same judgment to object/situation 2 (i.e., it has property z).

We can more explicitly formalize the argument from design to see how the argument from analogy works.

P1:  Known complicated things such as a watch have the property of having been designed by something intelligent.
P2:  Known complicated things (e.g., a watch, a computer) have the property of "purposefulness" and therefore have the additional property of having been designed by an intelligent designer.
P3:  Life or individual organs or organisms have the properties of complexity and have purposes.
HP*:  The properties of complexity and purposefulness are relevant to an object having the property of having a designer. 
C1:  Therefore, life also has the property of being designed by something intelligent.
MC:  That intelligent thing must be God/gods, therefore God/gods exist. 

Evaluating Arguments from Analogy
When evaluating arguments from analogy, most of our attention will be on the hidden premise: that having properties p, q, r is relevant to having property z.  To see how this works, let's turn to the argument from design.  The most famous refutation of this argument comes from Hume in the 18th century.  Hume gave six main criticisms of the argument, most of which relate in some way to evaluating the hidden premise.  We'll look at a few of them.

We should now ask whether the property of complexity is necessarily relevant to also having the property of having an intelligent designer.  One way to approach this is to look for counter-examples; that is, examples of things that are complex but (to our best knowledge) don't have a creator.  Are snowflakes intelligently designed? (Is someone up in the sky furiously making them every time it snows?)  What about complex cloud patterns?  What about those swirls in your coffee?  It seems we can have complexity without a conscious intelligent designer.

The question of purposefulness is a separate one.  I will simply note that it will take substantial argument to show that each life was designed for some sort of cosmic purpose.  

So, in a nutshell, the counter to an argument from analogy hinges on showing that one object's having properties p, q, and r along with z doesn't mean that every object with properties p, q, and r will also necessarily have property z.

In the case of the argument from design, another major flaw is that there is a disanalogy between inanimate objects, which are unable to pass on complexity, and living organisms, which are able to reproduce and pass on complexity (and possibly become more complex over time).  Disanalogies arise when we show that the properties under consideration (complexity and purposefulness) aren't necessarily relevant to having some other property (intelligent designer).

Let's look at one more example of an argument from analogy (intentionally bad--do not try this at home!):
P1.  Water is clear, liquid, and quenches my thirst.
P2.  Gasoline is also clear and liquid.
HP.  The properties clearness and liquidity are relevant to an object having the property of "thirst quenchability."
C.  Therefore, drinking gasoline will quench my thirst.  

If you've been paying attention you might be able to figure out what's gone wrong... The problem is with the hidden premise.  There is a disanalogy between water and gasoline because clearness and liquidity are not necessarily relevant to something having the property of being able to quench thirst.  
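To make the hidden premise easy to see, here's a toy sketch (my own illustration in Python, not part of the original lecture) that represents the water/gasoline analogy as property sets.  The property names are invented for the example; the point is that shared surface properties don't by themselves transfer the target property.

```python
# Illustrative only: model the water/gasoline analogy as sets of
# properties. The property names are invented for this example.
water = {"clear", "liquid", "quenches_thirst"}
gasoline = {"clear", "liquid"}

# The analogy rests on the properties the two objects share...
shared = water & gasoline
print(sorted(shared))  # ['clear', 'liquid']

# ...and then infers the target property for gasoline. The inference
# "fires" mechanically, but its strength depends entirely on the hidden
# premise that the shared properties are relevant to thirst-quenching.
analogy_concludes = "quenches_thirst" in water and bool(shared)
print(analogy_concludes)  # True -- yet the hidden premise is false
```

Notice that the scheme itself can't tell us the hidden premise is bad; spotting the disanalogy is the critical thinker's job.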

In the political sphere we often see arguments from analogy concerning gun policy.  On the anti-gun-control side we see analogies with policy in Switzerland (which has lax restrictions on what types of guns can be owned).  On the pro-gun-control side we often hear analogies with policy in Australia (which banned assault weapons).  The rebuttals to both arguments from analogy often involve claims to the effect that there are cultural elements that aren't relevantly similar between the US and the other country.



List of Common Fallacies

Here's a list of common logical fallacies, some of which we discussed in class.  We'll discuss more in the future.  Here's a link to a handy fallacy guide with explanations.  Wikipedia is also a good source for explanations of the various fallacies as well as the cognitive biases.

Naturalistic Fallacy (3 types): (a) natural is better, (b) that's the natural order of things (i.e., for moral arguments), (c) ascribing natural properties to non-natural things.
Argument from authority/antiquity/ancient wisdom/tradition
Ad populum
Appeal to emotions (usually pity, sympathy, fear, guilt, or disgust)
Genetic fallacy
Argument from personal incredulity
Ad hominem
Poisoning the well
Tu quoque
Ad hoc rescue
Moving the goal posts
False dilemma
Perfectionist fallacy/negative confirming instances: sub-category of false dilemma
Post hoc ergo propter hoc (confusing correlation with causation)
False premise
Non-falsifiable hypothesis
Subjectivist fallacy
Hasty generalization
Red herring
Strawman
Non-sequitur
Begging the question
Texas sharpshooter fallacy (also considered a cognitive bias)
Appeal to force
Two wrongs fallacy
Confirming instances fallacy (related to confirmation bias)
Argument from Ignorance/inappropriate burden of proof/demanding proof a negative


Biases and Cognitive Biases:
Appeal to anecdote (also could be categorized as a fallacy)
Confirmation Bias
Selection Bias
Motivated Reasoning
Negativity Bias
Halo effect
Bottom line effect

Sunday, December 1, 2013

Arguments from Analogy: Lecture Links







Science is just another religion.  [Implied conclusion?]

"America should declare war on Syria now.  We didn't stand and let Hitler do what he pleased in WWII, did we?"

Taxation is like slavery.

Argument from Design

1. Every time I have encountered a complex machine, it has been the result of an intelligent creator.
2. Similar effects prove similar causes.
3. The universe is similar to a complex machine.
4. Therefore, the universe is a result of an intelligent creator.
Religious liberty, employer health care plans, birth control

Orca and Danger/Stress:


Wednesday, November 27, 2013

Tuesday, November 26, 2013

Links for Slippery Slope Lecture

at 3:34



Bill O'Reilly Slippery Slope

Slippery Slope Arguments

Introduction
In the previous posts we looked at argument schemes that are typically employed in factual matters: generalizations, polls, general causal reasoning, particular causal reasoning, and the argument from ignorance.  In this next section we'll look at common argument schemes used in normative (i.e., having to do with values) arguments.  Check it out...

Slippery-Slope Argument
A slippery slope argument is one where it is proposed that an initial action will initiate a causal cascade ending with a state of affairs that is universally desirable or undesirable.  The implication is that we should (or should not) do the initial action/policy because the cascade of events will necessarily occur.  

A contemporary example of a negative version of the slippery slope argument comes from arguments against gay marriage equality.  Some opponents argue that if same sex couples are allowed to marry, then there will eventually be no good reasons against people marrying animals and so society will have to permit this too. 

(Yes, people actually make this argument... can I marry my horse?)
Here's a good clip where the slippery slope argument is mentioned explicitly: Video of O'Reilly Factor slippery slope argument.  Start at 2:00.

A positive version of the slippery slope argument might be something like a Libertarian argument (an over-simplified version): we should treat the principle of self-ownership as the primary governing principle; if we do, taxation and government will be removed; then the market will cease to be distorted and people will act in their own self-interest; people acting in their own self-interest will pull themselves up by their own bootstraps; and a society of self-pulled-up people will have relatively few social problems, thus eliminating many of the existing ones.  (Note: we could do an over-simplified version of just about any political philosophy and show it to be weak.)



So, why are these arguments not very strong?  To figure it out, let's look at the underlying structure of a slippery slope argument.  Recall that a slippery slope argument is one where it is supposed that one initial event or policy will set off a necessary, unbroken sequence of causal events.

If we formalize it, it will look like this:  

P1:  If A then B.  
P2:  If B, then C.  
P3:  If C then D.  
P4:  If D, then E.  
P5:  If E, then F. 
P6:  (So, if A then F)
C   F is a good thing, therefore we should do A. (Positive conclusion).
C* F is a bad thing, therefore we shouldn't do A.  (Negative conclusion).

(We can also condense P1-P5 as a single compound premise: If A then B, if B then C, If C then D,...)

If we think back to the lecture/post on principles of general causal reasoning, we will recall that it takes quite a bit of evidence to establish even a single causal claim (e.g., if A then B).  As you might imagine, the longer the causal chain gets, the more difficult it becomes to ascertain that the links along the way are necessarily true and not open to other possible outcomes.
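One way to make this point vivid is to treat each link as holding with some probability and (as a simplifying assumption) treat the links as independent; the strength of the whole chain is then the product of the link strengths.  A minimal sketch, with illustrative numbers of my own choosing:

```python
# Simplifying assumptions: each causal link "if X then Y" holds with
# some probability, and the links are independent. The numbers below
# are invented purely for illustration.

def chain_strength(link_probabilities):
    """Probability that every link in the causal chain holds."""
    strength = 1.0
    for p in link_probabilities:
        strength *= p
    return strength

# Five individually plausible links (80% each) still make a weak chain.
links = [0.8, 0.8, 0.8, 0.8, 0.8]
print(round(chain_strength(links), 3))  # 0.328
```

So even when every single link looks fairly plausible on its own, the conclusion at the end of a long chain can be supported less than half the time.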

Returning to our examples: in the first case, one of the causal elements has to do with the equivalence between reasons against gay marriage and reasons against animal marriage.  It doesn't take much imagination to come up with arguments for why the two types of prohibitive reasons aren't the same (capacity for mutual informed consent, for starters...).  Showing that there is a relevant distinction between the types of reasons breaks the causal chain, thereby rendering the argument weak.

Our over-simplified version of the Libertarian argument relies on a long causal chain that begins with the primacy of the self-ownership principle and reduced taxation and government, and ends with a decrease in prevailing social problems.  Along the way there are many suspect causal claims that individually might not stand up to much scrutiny--especially since many of the claims are hotly debated by experts in the respective fields.  Since a chain is only as strong as its weakest link, this has a detrimental effect on the overall strength (logical force) of the conclusion.

Upshot:  So, what's the overall status of slippery slope arguments?  Just like many argument schemes, they can be either strong or weak depending on their constituent parts.  In the case of slippery slope arguments, a strong one will have highly plausible causal claims all linked together, culminating in a glorious, well-supported conclusion about what we should or should not do.  Conversely, a weak slippery slope argument will have one or more weak causal claims in its implied premises.

Links from Lecture on Arguments from Ignorance




Link to Unexplained Escape video

Mumbai Weeping statue
Miracles graph



Pumapunku (Arg from personal incredulity, false dichotomy, arg from ignorance, arg from unqualified authority)
3:20-4:50, 5:10-5:35, 6:10-6:35, 7:30-8:00, 8:40, 12:20, 23:20-24:10
25:35 (gateway) 27:27-28:20

Puma punku AA debunk


Arguments from Ignorance

Introduction
The next argument scheme we will look at is what's known as the argument from ignorance.  An argument from ignorance (or argumentum ad ignorantiam, if you want to be fancy) is one that asserts that something is (most likely) true because there is no good evidence showing that it is false.  It can also be used the other way, to argue that a claim is (most likely) false because there's no good evidence to show that it's true.

Let's look at a couple of (good) examples:

There's no good evidence to show that the ancient Egyptians had digital computers (this evaluation comes from professional archeologists); therefore, they likely didn't have digital computers.

Or

There's no good evidence to suppose the earth will be destroyed by an asteroid tomorrow (this evaluation comes from professional astronomers), so we should assume it won't and plan a picnic for tomorrow.

Or

There's no good geological evidence that there was a world-wide flood event.  (This evaluation comes from professional geologists); therefore, we should assume that one never happened.

Formalizing the Argument Scheme
As you may have guessed, we can formalize the structure of the argument from ignorance:

P1:  There's no (good) evidence to disprove (or prove*) the claim.
P2:  There has been a reasonable search for the relevant evidence by whomever is qualified to do so.
C:    Therefore, we should accept the claim as more probable than not/true.
C*:  Therefore, we should reject the claim as improbable/false.

Good and Bad Use of Argument from Ignorance
The argument from ignorance is philosophically interesting because sometimes the same structure can be used to support opposing positions.  The classic example is the debate over the existence of God.  Let's look at how both sides can employ the argument from ignorance to try to support their respective positions.

Pro-God Arg 
P1:  You can't find any evidence that proves that God or gods don't exist.
P2:  We've made a good attempt to find disconfirming evidence, but can't find any!
C:   Therefore, it's reasonable to suppose that God or gods do exist.

Vs God Arg 
P1:  You can't show any evidence that God or gods do exist.
P1*:  Any evidence you present can also be explained through the natural laws.
P2:  We've made a good attempt at looking for evidence of God's/gods' existence but can't find any! (I even looked under my bed!)
C:  Therefore, it's reasonable to suppose that God/gods don't exist.

This particular case brings out some important issues we studied earlier in the course such as bias and burden of proof.   Not surprisingly, theists will find the first argument convincing while atheists will be convinced by the latter.  This of course brings up questions of burden of proof.  When we make a claim for something's existence, is it up to the person making the claim to provide proof?  Or does the burden of proof fall on the critic to give disconfirming evidence?  In certain questions, your biases will pre-determine your answer.

While in the above issue there is arguably reasonable disagreement on both sides, there are other domains where the argument from ignorance fails as a good argument.  As you might guess, this will have to do with the acceptability of P1 (i.e., there is/is no evidence) and P2 (i.e., a reasonable search has been made).  Most criticism of arguments from ignorance will focus on P2--that the search wasn't as extensive as the arguer thinks.  Generally, we let P1 stand because it is usually an author's opinion to the best of their own knowledge.  Recall from the chapter on determining what is reasonable that we typically let personal testimony stand.

We can illustrate a poor argument from ignorance with an example.  Claim: there's no evidence to show Obama is American; therefore, he isn't American.

Let's dress the argument to evaluate it:
P1:  I've encountered no good evidence to show that Obama is an American citizen.
P2:  Numerous agencies and individuals trained in the search and identification of state documents have been unable to locate any relevant documents.
C:   Obama isn't American (and is a Communist Muslim).

Regarding P1, maybe the arguer hasn't encountered any evidence, so we'll leave it alone.  P2, however, has problems.  There have been reasonable searches for evidence, and the relevant evidence was found.  Perhaps the arguer was unaware or didn't truly exert him/herself enough.  The argument fails because P2 is not acceptable (i.e., it is false).

We can also typically find the argument from ignorance used in arguments against new (or relatively new) technologies with regard to safety or efficacy.  For example:

We should ban GMOs because we don't know what the long-term health effects are.

Dressed:
P1:  I've found no evidence that shows that GMOs are safe for human consumption.
P2:  Those qualified to do studies and evaluate evidence have found no compelling evidence to show that GMOs are safe for human consumption.
C:  Therefore, we should assume GMOs are unsafe and ban them until we can determine they are safe.

If we were to criticize this argument, we'd consider P2.  In fact, there have been quite a few long-term studies done by those qualified to assess safety.  At this point we will have a debate over the quality of evidence.  Some on the anti-GMO side dispute the quality of the evidence (e.g., it was funded by company X, and therefore it is questionable).  In a full analysis we'd consider this question in depth, but for our purposes here, we might legitimately challenge the claim that there is no available evidence purporting to demonstrate safety.

As an aside, notice that we can also use the argument from ignorance for the opposite conclusion:  There's no compelling evidence to show that GMOs are unsafe for human consumption in the long-term, therefore, we should continue to make them available/ should not regulate them.

The "team" that wins this battle of arguments from ignorance will depend largely on our evaluation of P2: whether there legitimately is or isn't quality evidence one way or the other.

Final Notes on Arguments from Ignorance
We can look at arguments from ignorance as probabilistic arguments.  That is, given that there is little or no evidence for something, what is the likelihood that it still might exist?  This is especially true for claims that something does exist based on an absence of evidence for its non-existence.  However, as Carl Sagan famously said, "absence of evidence is not evidence of absence."  In other words, just because we can't find evidence for something doesn't mean that the thing or phenomenon doesn't exist.

On the flip side, this line of argument can also be used to support improbable claims.  Consider such an argument for the existence of unicorns or small teapots that circle the Sun: there's no positive evidence that unicorns don't exist or that small teapots don't circle the Sun; therefore, we should assume they exist.

At this point we should return to the notion of probability:  Given no positive evidence for these claims, what is the probability that they are true (versus the probability that they aren't)?  It seems that, given an absence of evidence, the probability of there being unicorns is lower than the probability that they do not exist.  Same goes for the teapot.

Typically, in such cases we say that the burden of proof falls on the person making the existential claim.  That is, if you want to claim that something exists, the burden is upon you to provide evidence for it; otherwise, the reasonable position is the "null hypothesis."  The null hypothesis just means that we assume no entity or phenomenon exists unless there is positive evidence for its existence.  In other words, if I want to assert that unicorns exist, using the argument from ignorance won't do.  It's not enough to make the claim based on an absence of evidence, because we'd expect some evidence to have turned up by now if there were unicorns (i.e., P2 of the implied argument would be weak).
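The burden-of-proof point can be given a rough Bayesian gloss.  The sketch below is my own illustration (not from the lecture), with made-up numbers: if a claim, were it true, would probably have left evidence by now, then a thorough search that finds nothing should lower our credence in the claim.

```python
# Bayes' theorem applied to "we searched and found nothing". The
# probabilities below are invented purely for illustration.

def update_on_no_evidence(prior, p_find_if_true, p_find_if_false=0.0):
    """Posterior P(claim | no evidence), assuming a search that would
    find evidence with probability p_find_if_true if the claim is true."""
    p_none_if_true = 1 - p_find_if_true
    p_none_if_false = 1 - p_find_if_false
    numerator = p_none_if_true * prior
    return numerator / (numerator + p_none_if_false * (1 - prior))

# Even a generous 10% prior in unicorns drops sharply after a search
# that would have found them 90% of the time turns up nothing.
print(round(update_on_no_evidence(prior=0.10, p_find_if_true=0.90), 3))  # 0.011
```

Note the role of p_find_if_true: if a search would rarely find evidence even when the claim is true, absence of evidence barely moves the posterior--which is one way of cashing out "absence of evidence is not (strong) evidence of absence."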

This brings us to another Carl Sagan quote (paraphrasing Hume): "Extraordinary claims require extraordinary evidence."  Or as Hume originally put it: "A wise man proportions his belief to the evidence."  Claiming that unicorns exist is an extraordinary claim, and so we should demand evidence in proportion to the "extraordinariness" of the claim.  This is why an ad ignorantiam argument fails here; it doesn't offer any positive evidence for an extraordinary claim, only absence of evidence.  We'll discuss this principle of proportionality more in the coming section.  For now, just keep it in mind when evaluating existential arguments from ignorance.

Monday, November 25, 2013

HW 14A

Ex 10C 1 and 3

The Scientific Method Lecture Notes


Introduction to the Scientific Method in the Context of Critical Thinking


(For an example of real science in action, watch the video)
In the last few lessons we've looked at five common argument schemes: generalizations, polling, general causal reasoning, particular causal reasoning, and arguments from ignorance.  As luck would have it, these are the most common argument schemes you will find in (good and bad) scientific arguments.  Arguments are important to the scientific enterprise because a core activity of science is to provide reasons and evidence (i.e., arguments) for why one hypothesis should be accepted over another.  This question of why we should choose one hypothesis over another (or any hypothesis at all) brings up many interesting philosophical issues which (time permitting) we will briefly explore.  However, before putting on our philosopher hats, let's put on our lab coats, turn on our Bunsen burners, and take a closer look at the scientific method.


We can break up the scientific method into 5 steps:

Step 1:  Understanding the Issue
In this first step, the goal is simply to determine what it is exactly that we want to know.  Usually, it will be a problem that we want solved.  Examples might include, what is the mass of an electron?  Can vaccines prevent measles?  Can Tibetan monks levitate?  Is the earth round?  Can wi-fi cause health problems?  Does the color red make people feel hungry? How do magnets work?  Does honey diminish the severity of coughs?

As you can see some of these issues will involve questions about causation while others might be about identifying something's properties.


Step 2:  Formulating a Hypothesis
In the next step, we want to formulate a hypothesis that will solve our problem and the hypothesis must be testable (recall that a non-testable hypothesis is non-falsifiable and thus considered pseudo-scientific).  

To illustrate how this works, lets consider the problem of whether honey diminishes the severity of coughs.  Our basic hypothesis will be "honey diminishes the severity of coughs."

However, often our hypothesis will extend beyond a simple "yes" or "no".  We will want to know why it does or doesn't have a particular effect on a cough.  This is known as the "causal mechanism"; i.e., the thing that causes the effect that our hypothesis anticipates.  So, if honey diminishes the severity of coughs, we will want to know why.  If we don't know why, then it may simply be correlation; we are trying to establish causation.  Maybe it's the tea we drink the honey with that causes the diminished severity.  Or maybe it isn't honey itself that causes the reduced severity; maybe it's the sugars in honey, and so any sweet substance will do.

Part of establishing causation is to rule out competing hypotheses.  So, if someone says that honey diminishes the severity of coughs because the sweetness in honey activates some particular receptor cells that in turn help diminish the severity of the cough, then we can test that.  Someone else might say it's because the honey reduces swelling in the throat.  We can test that too.  Or someone else might say honey has some anti-bacterial or anti-viral compounds which kill the bacterial/viral cause of the cough.

The point is, we need to pick a hypothesis that (preferably) is specific enough to include a causal mechanism.  Let's choose the first one.

Hypothesis (h):  Drinking honey can reduce the severity of coughs.
Causal Mechanism:  h because

"the close anatomic relationship between the sensory nerve fibers that initiate cough and the gustatory nerve fibers that taste sweetness, an interaction between these fibers may produce an antitussive effect of sweet substances via a central nervous system mechanism."

Fallacy Alert!  Aruga!  Aruga!  In scientific debates it's very important to hold your opponent to their hypothesis (and also to keep to yours when facing objections or contravening evidence).  Changing the hypothesis mid-debate is called moving the goal posts.  This is a very common practice among purveyors of pseudo-science or members of the anti-science ideologies.  


For example, for years anti-vax groups opposed vaccines because--they hypothesized--thimerosal causes autism.  Because this myth became so pervasive (despite overwhelming evidence to the contrary), and in order to keep compliance rates high enough for herd immunity, many national health departments changed to the more expensive thimerosal-free versions of the vaccines.  Contrary to the anti-vax hypothesis, the removal of thimerosal from vaccines was followed by autism rates actually going up rather than down! (There's some weak evidence to suggest that vaccines can actually inhibit some kinds of autism.)

Now that thimerosal is removed from vaccines and the anti-vax hypothesis has been proven empirically false, what do you think the response of the anti-vax crowd is?  If you guessed "oh, let's support vaccines now," you were sleeping for the last 2 weeks of this class!  The response was to "move the goal posts."  Now it's "too many too soon!" or "it's got aluminum in it!" or "it's got mercury in it!"


Step 3:  Identifying the Implications of the Hypothesis
In the next step we need to set out what we'd expect to see (i.e., observations) if our hypothesis is correct.  It's very important that this is done before the experiments are conducted.  In the case of the honey, we'd expect to see that (a statistically significant number of) people who have a cough will cough less frequently and violently than a comparable group of people with a cough who don't take honey (or any other "medicine").  In the case of thimerosal, we might say: if it's true that thimerosal causes autism, then when we remove thimerosal from vaccines we should expect to see autism rates decline.

We can formalize this structure:

If the hypothesis (h) is true, then x will occur.  (x is our expected observable outcome).  

So, in the case of honey, if our hypothesis is true, then those who drink honey will have reduced severity of coughing compared to a control group.

Step 4:  Testing the Hypothesis
As you might expect, once we've set up our hypothesis and established the anticipated observable effects that would confirm the hypothesis, we test!

Recall step 2: when we form the hypothesis, we should ensure that it is testable.  That is to say, we can say in advance what will constitute observable confirmation or disconfirmation of the test.  A couple of notes on why we must do this in advance: (1) it prevents retrofitting the data to fit the hypothesis; (2) it prevents the "moving of the goal posts".

Testing in Principle vs Testing in Practice
Finally, we should be aware that not all hypotheses will in practice be testable, but they must be so in principle.  For example, we can construct a hypothesis of what will happen if a large asteroid hits the earth but we don't need to actually destroy half the earth to confirm the hypothesis that such an impact will indeed destroy half the earth.  In some cases, running a computer simulation will do! 


Step 5:  Reevaluating the Hypothesis 
In step 4, I emphasized that the predicted confirmatory results of the hypothesis must be specified in advance to avoid retrofitting and moving the goal posts.  However, this does not mean that once we have conducted a test we can't modify the test or the hypothesis.  This is perfectly legitimate, but must be done in a way that recognizes the shortcomings of the original test and/or hypothesis.

Fallacy Alert!  Aruga!  Aruga!  Aruga! When the implications of our hypothesis are confirmed we must be careful not to immediately conclude that our hypothesis is confirmed.  From the fact that our anticipated effect occurred it doesn't necessarily follow that our hypothesis is true.  This is called the fallacy of affirming the consequent, which looks like this:  

P1  If h, then x.  (In fancy talk, h is called the antecedent and x is called the consequent.)

P2  x occurred.
C   Therefore, h is true. 

To see why h doesn't necessarily follow, given that P2 is true (i.e., "affirming" the consequent), consider the following case.

P1  If it's raining, it's cloudy.

P2  It's cloudy.
C   Therefore, it's raining.

Just because it's cloudy doesn't mean it's raining.  It can be cloudy without it being rainy.  It can also be partially cloudy with chances of sunshine in the evening, followed by overcast skies at night... you get the point.  
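Since the form involves only two propositions, we can even brute-force the point: enumerate every truth assignment and look for a row where both premises are true but the conclusion is false.  A small Python sketch (my own, purely illustrative):

```python
# Brute-force invalidity check for "If h then x; x; therefore h".
# A form is invalid if some assignment makes all premises true
# while the conclusion is false.
from itertools import product

def implies(a, b):
    # Material conditional: "if a then b" is false only when a and not b.
    return (not a) or b

counterexamples = [
    (h, x)
    for h, x in product([True, False], repeat=2)
    if implies(h, x) and x and not h  # premises true, conclusion false
]
print(counterexamples)  # [(False, True)] -- e.g., cloudy but not raining
```

The single counterexample row (h false, x true) is exactly the cloudy-but-not-raining case: the premises hold, yet the conclusion fails.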

In relation to scientific hypotheses, we can imagine the following scenario: someone suggests a hypothesis h and anticipates a certain observable consequence x.  But does it follow that just because x occurred, the hypothesis is true?  Nope.  There are many possible alternative reasons (or causes) besides h for which x might have occurred.

If we think back to the sections on general causal reasoning, we can see why.  If the hypothesis is a causal one, then there are several steps we need to go through before we can attribute causality.  Maybe there's only a statistical relationship between the two variables (correlation)?  Maybe there's some other, better explanation for why x is occurring?  Maybe the methodology was flawed (no double blinding = placebo effect, problems with representativeness and sample size, etc.).

Summary:  Steps of the Scientific Method
1.  Understand the Problem that requires a solution or explanation.
2.  Formulate a hypothesis to address the problem. 
3.  Deduce the (observable) consequences that will follow if the hypothesis is correct. 
4.  Test the hypothesis to see if the consequences do indeed follow.
5.  Reevaluate (and possibly reformulate) the hypothesis. 

Links for Lecture on the Scientific Method

The Test and Explanation
The Skeptic
Physics:
The Danish Study:
SIDS
SIDS Study
Novella's analysis of the Danish study
Incidence rates of various communicable diseases pre and post vaccine
Novel Prediction: self-tickling
Faith and Healing: as reported
The study abstract
Subway Bread: http://www.alternet.org/food/500-other-foods-besides-subway-sandwich-bread-containing-yoga-mat-chemical
Quantity Matters: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1770067/
Placebo
Nocebo

Friday, November 22, 2013

Take-Home Final Due by Dec. 11 at Midnight

General Instructions for the Group Work/Take-Home Final
(a)  Don't foolishly give away marks.  Please read and follow instructions carefully.  There will be no second chances to resubmit omitted questions.
(b)  The due date is December 11 at 11:59:59pm.
(c)  Please submit only one assignment per group.  Be sure to decide amongst yourselves who the "submitter" will be to avoid complications.  Submit it to my gmail address.
(d)  Read and follow the instructions carefully.
(e)  Barring extenuating circumstances, please submit the assignment via email.  I will confirm its receipt.  (If submitted before the deadline, I'll confirm as soon as I get the email).  If you do not get a confirmation by noon on Thursday Dec 12, please contact me immediately.
(f)  Read and follow the instructions carefully.
(g)  Don't forget to do section VI (Peer review).
(h)  Have fun.

Part I:
(a) Identify the implied argument (both premises and conclusion) and put it into standard form; (b) evaluate the argument by looking at (i) premise acceptability, (ii) premise relevance, (iii) sufficiency; (c) what logical fallacy is being committed? (d) what additional information would have to be provided in order to make the argument strong?




Part II:  Arguments from Analogy
(a) Evaluate the argument as we have done in class for arguments using this scheme; (b) what are the claimed similarities and how relevant are they? (c) what are the differences and how relevant are they? (d) based on your evaluation, how strong is the argument?

Argument 1 (This is a meme, not an actual study)
An economics professor at Texas Tech said he had never failed a single student before but had, once, failed an entire class. The class had insisted that socialism worked and that no one would be poor and no one would be rich, a great equalizer. The professor then said ok, we will have an experiment in this class on socialism. All grades would be averaged and everyone would receive the same grade so no one would fail and no one would receive an A.

After the first test the grades were averaged and everyone got a B. The students who studied hard were upset and the students who studied little were happy. But, as the second test rolled around, the students who studied little had studied even less and the ones who studied hard decided they wanted a free ride too; so they studied little ...

The second Test average was a D! No one was happy. When the 3rd test rolled around the average was an F. The scores never increased as bickering, blame, name calling all resulted in hard feelings and no one would study for anyone else. All failed to their great surprise and the professor told them that socialism would ultimately fail because the harder to succeed the greater the reward but when a government takes all the reward away; no one will try or succeed.

Argument 2
The federal budget is just like a family budget, and we in government must tighten our belts and live within our means just like families do.

Part III:  Critical Thinking Smackdown
Part A
Choose two of the following articles and (a) identify as many instances as you can of logical fallacies, poor reasoning, poor scientific reasoning, empirically dubious assertions, and issues of burden of proof/proportionality; (b) give a brief one or two sentence justification of your assessment; and (c) suggest for each case what additional information would have to be included to remedy the problem.

Cinnamon and Honey
chemtrails
Turmeric

Part B
From the comments sections of the articles, find at least one instance of (a) the post hoc ergo propter hoc fallacy, (b) the naturalistic fallacy, (c) an illegitimate argument from authority, and (d) one more fallacy or instance of poor reasoning of your choice.

Part IV: Identify that Argument Scheme!
(a) Identify the argument scheme and rewrite the argument in its standard form; (b) explain why these instances fail as good instances of the argument scheme; and (c) suggest what additional information could be included to strengthen the claim (where possible).

(1)  There's no good evidence to show that aspartame is safe for human consumption, therefore we shouldn't consume it.

(2)   82 percent of people with tattoos prefer hot weather over cold, compared with 63 percent of people in general.  Therefore, preferring hot weather causes people to get tattoos.  Based on a survey of 114 people with tattoos and 579 people in general.

(3)  I get headaches after drinking diet soda, therefore aspartame causes headaches.

(4)  About 1/2 the students I know receive government financial aid, therefore it's reasonable to conclude that 1/2 the students at UNLV receive government financial aid and are socialists.

(5)  There are several cases where multiple people have sighted an object in the sky which they couldn't identify, therefore those objects are alien spacecraft.

(6)  All my philosophy professors think Aristotle is better than Plato, therefore most philosophy professors must prefer Aristotle to Plato.

Part V:  Cognitive Biases
Watch this video starting at about 3:15 to 7:55.  Name at least 3 cognitive biases to explain how the Third Eagle of the Apocalypse might have gone astray in his reasoning.  Give a couple of specific instances of each of those cognitive biases from the video.




Part VI:  Bonus Question (worth up to an additional 5% of your score)
Do the following analysis for the meme below:  (a) rewrite the argument in its appropriate scheme, (b) fact check the statistics, (c) investigate approximately how much is spent on anti-terrorism, (d) taking the facts you have uncovered into consideration, (i) evaluate the argument as you would for any argument of this type and (ii) employing the principle of charity, assess how strongly the conclusion is supported.  (Note: I haven't formed my own opinion yet, so I'm genuinely curious what you guys think.)





Part VII:  Peer Evaluation
Every student, in a separate private email, must submit an evaluation of each of your group members' contributions to the group project.

According to the following criteria rank each group member on a scale of 1 to 4, where 1=strongly disagree, 2=disagree, 3=agree, 4=strongly agree.

(A) Attends group meetings regularly and arrives on time.
(B) Contributes meaningfully to group discussions.
(C) Completes group assignments on time.
(D) Prepares work in a quality manner.
(E) Demonstrates a cooperative and supportive attitude.
(F) Contributes significantly to the success of the project.

If your peers give you an average evaluation score of less than 3, your score for the assignment will be at minimum a full letter grade lower than the group's score, possibly lower still.



Wednesday, November 20, 2013

HW 13B

p. 246 Ex. 9C
(b)-(e)

Be sure to put the causal claims into standard form before analyzing them.
(P1)  There is a correlation between X and Y.
(P2)  The correlation isn't due to chance.
(P3)  The correlation between X and Y isn't due to some mutual cause or other cause.
(P4)  Y is not the cause of X.
(C)    X causes Y.

General Causal Reasoning

Introduction
Being able to separate correlation from causation is a cornerstone of good science.  Many errors in reasoning can be distilled to this mistake.  Let me preface this section by saying that making this distinction is by no means a simple matter, and much ink has been spilled over whether it's even possible in some cases.  However, just because there are some instances where the distinction is indiscernible or difficult to make doesn't mean we should make a (poor) generalization and conclude that it is indiscernible or difficult in all instances.  

We can think of general causal reasoning as a sub-species of generalizations.  For instance, we might say that low-carb diets cause weight loss.  That is to say, diets that are lower in the proportion of carbohydrate calories than other diets will have the effect of weight loss on any individual on that diet.  Of course, we probably can't test every single possible low-carb diet, but given a reasonable sample size we might make this causal generalization. 

A poor causal argument commits the fallacy of confusing correlation with causation, or just the causation-correlation fallacy.  Basically, this is when we observe that two events occur together, either statistically or temporally, and so attribute to them a causal relationship.  But just because two events occur together doesn't necessarily imply that there is a causal relationship.

To illustrate:  the rise and fall of milk prices in Uzbekistan closely mirrors the rise and fall of the NYSE (it's a fact!).  But we wouldn't say that the rise and fall of Uzbek milk prices causes the NYSE to rise and fall, nor would we make the claim the other way around.  We might plausibly argue that there is a weak correlation between the NYSE index and the price of milk in Uzbekistan, but it would take quite a bit of work to demonstrate a causal relationship.

Here are a couple of interesting examples:
Strange but true statistical correlations


A more interesting example can be found in the anti-vaccine movement.  This example is an instance of the logical fallacy called "post hoc ergo propter hoc" ("after, therefore because of"), which is a subspecies of the correlation/causation fallacy.  Just because an event regularly occurs after another doesn't mean that the first event is causing the second.  When I eat, I eat my salad first, then my protein, but my salad doesn't cause me to eat my protein. 

Symptoms of autism become apparent about 6 months after the time a child gets their MMR vaccine.  Because one event occurs after the other, many naturally reason that the prior event is causing the later event.  But as I've explained, just because an event occurs prior to another event doesn't mean it causes it.  

And why pick out one prior event out of the 6 months worth of other prior events?  And why ignore possible genetic and environmental causes?  Or why not say "well, my son got new shoes 6 months ago (prior event) therefore, new shoes cause autism"?  Until you can tease out all the variables, it's a huge stretch to attribute causation just because of temporal order.  

Constant Condition, Variable Condition, and Composite Cause
Ok, we're going to have to introduce a little bit of technical terminology to be able to distinguish between some important concepts.  I don't want to get too caught up in making the distinctions, I'm more concerned about you understanding what they are and (hopefully) the role they play in evaluating causal claims.

A constant condition is a causal factor that must be present if an event is to occur.  Consider combustion.  In order for there to be combustion, there must be oxygen present.  But oxygen on its own doesn't cause combustion.  There's oxygen all around us, but people aren't continuously bursting into flames.  However, without oxygen there can be no combustion.  In the case of combustion, we would say that oxygen is a constant condition.  That is, it is necessary for the causal event to occur, but it isn't the thing that initiates the causal chain.

When we look at the element or variable that actually initiates a causal chain of events, we call it the variable condition.  In the case of combustion it might be a lit match, a spark from electrical wires, or exploding gunpowder from a gun.  There can be many variable conditions.  

The point is you can't start a fire without a spark.  This gun's for hire.  You can't start a fire without a spark.  Even if we're just dancing in the dark.   Of course, you could also start a fire with several other things.  That's why we call it the variable condition.  But despite all the possible variable conditions, there must be oxygen present...even if we're just dancing in the dark.

As you might expect, when we consider the constant and the variable condition together, we call it the composite cause.  Basically, we are recognizing that for causal events there are some conditions that must be in place across all variable conditions, and there are some other conditions that have a direct causal effect but that could be "switched out" with other conditions (like the different sources of a spark).
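
The logic of a composite cause can be sketched as a simple boolean function.  This is just an illustration of the combustion example (the function and variable names are mine, not standard terminology): the constant condition must hold, AND at least one of the interchangeable variable conditions must obtain.

```python
def combustion(oxygen_present, spark_sources):
    """Combustion occurs iff the constant condition (oxygen) holds
    AND at least one variable condition (some spark source) obtains."""
    return oxygen_present and any(spark_sources.values())

# The variable condition can be swapped out freely...
print(combustion(True, {"match": True, "wiring": False}))   # True
print(combustion(True, {"match": False, "wiring": True}))   # True
# ...but no variable condition suffices without the constant condition.
print(combustion(False, {"match": True, "wiring": True}))   # False
```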

Separating constant conditions from variable conditions can be useful in establishing policy.  For example, with nutrition, if we know that eating a certain type of diet can cause weight loss (and we want to lose weight), we can vary our diet's composition or quantity of calories (variable conditions) in order to lose weight.  The constant condition (that we will eat) we can't do much about.  

Conversely, we can't control the variable conditions that cause the rain, but by buying umbrellas we can control the constant condition that rain causes us to get wet.  (Water is wet.  That's science!)

The Argument Structure of a General Causal Claim
Someone claims X causes Y.  But how do we evaluate it?  To begin we can use some of the tools we already acquired when we learned how to evaluate generalizations.  To do this we can think of general causal claims as a special case of a generalization (i.e., one about a causal relationship).

I'm sure you all recall that to evaluate a generalization we ask

(1) Is the sample representative?  That is, (a) is it large enough to be statistically significant? (b) is it free of bias (i.e., does it incorporate all the relevant sub-groups included in the group you are generalizing about)?
(2) Does X in the sample group really have the property Y (i.e., the property of causing event Y to occur)?

Once we've moved beyond these general evaluations, we can look at specific elements in a general causal claim.  To evaluate the claim, we have to look at the implied (but in good science, explicit) argument structure that supports the main claim, which is actually an expansion of (2) into further aspects of evaluation.  

A general causal claim has 4 implied premises.  Each one serves as an element to scrutinize.

Premise 1:  X is correlated with Y.  This means that there is some sort of relationship between event/object X and event/object Y, but it's too early to say it's causal.   Maybe it's temporal, maybe it's statistical, or maybe it's some other kind of relationship.  

For example, early germ theorist Koch suggested that we can determine if a disease is caused by micro-organisms if those micro-organisms are found on sick bodies and not on healthy bodies.  There was a strong correlation but not a necessary causal relation because for some diseases people can be carriers but immune to the disease.  

In other words, micro-organisms might be a constant condition in a disease causing sickness, but there may be other important variable causes (like environment or genetics) we must consider before we can say that a particular disease's micro-organisms cause sickness.

Premise 2:  The correlation between X and Y is not due to chance.  As we saw with the Uzbek milk prices and the NYSE, sometimes events can occur together but not have a causal relation--the world is full of wacky statistical relations.  Also, we are hard-wired to infer causation when one event happens prior to another.  But as you now know, this would be committing the post hoc ergo propter hoc fallacy.

Premise 3:   The correlation between X and Y is not due to some mutual cause Z.  Suppose someone thinks that "muscle soreness (X) causes muscle growth (Y)."  But this would be mistaken because it's actually exercising the muscle (Z) that causes both events.

In social psychology there was an interesting reinterpretation of a study that demonstrates this principle.  An earlier study showed a strong correlation between overall level of happiness and degree of participation in a religious institution.  The conclusion was that participation in a religious institution causes happiness.  

However, a subsequent study showed that there was a 3rd element (sense of belonging to a close-knit community) that explained the apparent relationship between happiness and religion.  Religious organizations are often close-knit communities, so it only appeared as though it was the religious element that caused a higher happiness appraisal.  It turns out that there is a more general explanation, of which participation in a religious organization is an instance. 
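
We can see how a mutual cause manufactures a correlation with a small simulation.  This is only a toy sketch using made-up numbers, not real data: a hidden variable Z (think: exercise intensity) drives both X (soreness) and Y (muscle growth), and yet X and Y end up strongly correlated even though neither causes the other.

```python
import random
import statistics

random.seed(0)

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from the definition.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Z: the mutual cause (e.g., exercise intensity), unobserved by our naive reasoner.
z = [random.random() for _ in range(1000)]

# X (soreness) and Y (growth) are each driven by Z plus independent noise.
# Crucially, X never appears in the formula for Y, and vice versa.
x = [zi + 0.1 * random.gauss(0, 1) for zi in z]
y = [zi + 0.1 * random.gauss(0, 1) for zi in z]

# The correlation comes out strong despite there being no X->Y causal link.
print(pearson(x, y))
```

Observing a strong X-Y correlation in data like this and concluding "soreness causes growth" is exactly the mistake Premise 3 is designed to catch.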

Premise 4:  Y is not the cause of X.  This issue is often very difficult to disentangle.  This is known as trying to figure out the direction of the arrow of causation--and sometimes it can point both ways.  For instance, some people say that drug use causes criminal behaviour.  But in a recent discussion I had with a retired parole officer, he insisted that it's the other way around.  He says that youths with a predisposition toward criminal behavior end up taking drugs only after they've entered a life of crime.  I think you could plausibly argue the arrow can point in both directions depending on the person, or maybe even within the same person (i.e., a feedback loop).  There's probably some legitimate research on this matter beyond my musings and the anecdotes of one officer, but this should suffice to illustrate the principle. 

Conclusion:  X causes Y.

Premises 2, 3, and 4 are all about ruling out alternative explanations.  As critical thinkers evaluating or producing a causal argument, we need to seriously consider the plausibility of these alternative explanations.  Recall that earlier in the semester we looked briefly at Popperian falsificationism.  We can extend this idea to causation:  i.e., we can never completely confirm a causal relationship, we can only eliminate competing explanations.

With that in mind, the implied premises in a general causal claim provide us a systematic way to evaluate the claim in pieces so we don't overlook anything important.  In other words, when you evaluate a general causal claim, you should do so by laying out the implied structure of the argument for the claim and evaluating the premises in turn.