Tuesday, March 25, 2014

Lecture 10B: Polling


Generalizations Cont'd
Review the general structure of generalizations: representativeness, sample size, sample bias, selection bias, and measurement problems.

Criticizing Premise 2:  Measurement problems: Are we measuring the property that we think we're measuring?

E.g.  Religion and Happiness
E.g.  Trolley Dilemma: two interpretations, (a) the deontologists' and (b) the utilitarians'
E.g. The voting machines are rigged!!!1!!1!!!





Polling
Definition:  Polling is a generalization about a specific population's beliefs or attitudes.



Polling requires that we know 3 things:

(a) The sample: Who is in the sample (representativeness), and how big is it?
(b) The (target) population: What group am I trying to generalize about?
(c) The property in question: What belief, attitude, or value am I trying to attribute to the population?

The underlying formal structure of a polling argument is the same as that of a generalization.
(P1)  S is a representative sample of Xs.
(P2)  Proportion 1 of Xs in S have property Y (have attitude/belief Y).
(C)   Proportion 2 of Xs have the property Y (have attitude/belief Y).
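The schema above can be illustrated with a small simulation. All the numbers here are invented for illustration: a hypothetical population in which 60% hold attitude Y, and a random sample standing in for premise (P1).

```python
import random

random.seed(42)

# Hypothetical population: 100,000 people, 60% of whom hold attitude Y.
population = [True] * 60_000 + [False] * 40_000

# (P1) Draw a simple random sample S of 1,000 Xs, aiming for representativeness.
sample = random.sample(population, 1_000)

# (P2) Proportion of Xs in S that have attitude Y.
sample_prop = sum(sample) / len(sample)

# (C) Infer that roughly the same proportion of all Xs have attitude Y.
print(f"Sample proportion: {sample_prop:.1%}")  # close to the true 60%
```

Because the sample is drawn at random from the whole population, the sample proportion lands near the true 60%; the measurement, sample-bias, and selection-bias problems below are ways this inference fails in practice.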



Measurement Problems
1.  How you ask the question and how the audience interprets the question can affect the results of a poll.

Younger Democrats overwhelmingly supported ObamaCare, with 68 percent approval, while only 26 percent of independents and 5 percent of Republicans supported it. Language also affected the numbers, particularly among Democrats and independents. When asked if they supported the "Affordable Care Act," as opposed to "ObamaCare," 81 percent of Democrats approved, while 34 percent of independents and 7 percent of Republicans said they were in favor.
Harvard Poll
 
2.  There can be a difference between how the results of a poll are reported and the quality or property that the poll actually measures.  Also, how are the target groups defined?
http://online.wsj.com/article/PR-CO-20131023-913504.html
(a) P1 L4
(b) About this Research L2.


3.  Alt-med usage poll
The conclusion, the list, Sec: Interview Para 4, 5, 6.

4.  Poll on evolution

5.  Another problem with poll data: people will give answers even though they have no clue what they're talking about.

[Chart: Clinton Voters vs. Trump Voters]


6.  Loaded Questions and Setting the Tone

E.g., Would you agree that Obama is doing a poor job?
E.g.,  Wouldn't you agree that Critical Thinking 102 is an important course?  (Newspaper Headline: Students Agree:  Ami's Critical Thinking Course Is the Most Important Course in the World)
E.g.,  Over the last 2 years the economy has crashed and our national security is more vulnerable than ever.  How would you rate G. W. Bush's performance as president?

7.  Measurement problems because of language problems (e.g., vagueness, ambiguity)
E.g., Do you drink alcohol frequently, moderately, hardly at all, or never?

8.  Mathematical problems/measurement problems: Teacher Evaluation
A. What grade do you expect in this class?
B. What grade do you deserve in this class?
50% of the variance in student overall evaluations could be explained by the difference between those two quantities. If they were nearly equal, the evaluations tended to be good. If A was much less than B, they tended to be bad. And, of course, as we increasingly succumb to grade inflation, our teacher evaluations are improving.
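The "variance explained" claim can be sketched with simulated data. Everything below is invented (grade scale, effect size, noise level); only the direction of the effect — evaluations rise with expected grade minus deserved grade — comes from the passage.

```python
import random

random.seed(1)

# Toy data: each student reports an expected grade A and a "deserved" grade B
# on a 0-4 scale; their evaluation of the teacher rises with the gap A - B.
n = 500
gap = [random.uniform(-2, 2) for _ in range(n)]            # A - B
evals = [3 + 0.5 * g + random.gauss(0, 0.5) for g in gap]  # score + noise

# Fraction of variance in evaluations explained by the gap: the squared
# correlation (R^2) of a simple linear fit.
mean_g = sum(gap) / n
mean_e = sum(evals) / n
cov = sum((g - mean_g) * (e - mean_e) for g, e in zip(gap, evals)) / n
var_g = sum((g - mean_g) ** 2 for g in gap) / n
var_e = sum((e - mean_e) ** 2 for e in evals) / n
r2 = cov ** 2 / (var_g * var_e)
print(f"variance explained: {r2:.0%}")
```

With these made-up parameters the gap alone explains a large share of the variance, which is the shape of the claim in the passage: the evaluations partly measure grade expectations, not teaching quality.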

Tip for Measurement Problems:  Track down the original poll being quoted and read the "Interview" Section to find out what the actual questions were.

People can be freakin' liars: Jimmy Kimmel's "Lie Witness News - Super Tuesday Edition" (people in Hollywood were asked if they voted on Super Tuesday; they lied): https://www.facebook.com/JimmyKimmelLive/videos/10153939666333374/


Effect of Timing: (Availability Bias, Affect Bias)
Polls on civil liberties and security, 2001-2002
A Harris Poll found that 68% of Americans supported a national ID system. A study conducted in November 2001 for the Washington Post found that only 44% of Americans supported a national ID. A poll released in March 2002 by the Gartner Group found that 26% of Americans favored a national ID and that 41% opposed the idea. Popular support for other surveillance technologies has declined as well.

Consider polls on gun control taken just after a major public shooting, polls taken after a major environmental catastrophe or natural disaster, asking students about the stressfulness of school during exam periods, etc.

Tip:  Look at the date the poll was conducted vs the date it was published.

Margin of Error
Poll says the We Love America More than You Party has 49% of the vote while the American Freedom Party has 44% of the vote, with a margin of error of +/- 3%.  Supposing the data perfectly reflect how people will actually vote, is the We Love America More than You Party guaranteed to win?

Tip:  When results are close, find the margin of error.

Sample Bias (Criticizing Premise 1)
Sample bias occurs when the characteristics of the members of the sample aren't (proportionally) representative of the relevant characteristics of the group.
https://twitter.com/realDonaldTrump/status/788954581346779136/photo/1?ref_src=twsrc%5Etfw

Stratified random sampling helps to avoid sample bias: make sure all relevant subgroups are represented through random sampling plus weighting of the subgroups.

Question: How would you stratify a sample for a survey of American opinions?
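One minimal sketch of stratified random sampling, stratifying by age. The strata and their weights below are assumptions for illustration, not real census shares; a real poll would also stratify by region, gender, education, and so on.

```python
import random

random.seed(0)

# Hypothetical strata with made-up population shares.
strata_weights = {"18-29": 0.21, "30-44": 0.25, "45-64": 0.34, "65+": 0.20}

def stratified_sample(frames, weights, n):
    """Draw a stratified random sample of (approximately) size n.

    frames: dict mapping stratum name -> list of members in that stratum.
    weights: dict mapping stratum name -> assumed population share.
    """
    sample = []
    for stratum, members in frames.items():
        k = round(n * weights[stratum])           # seats proportional to share
        sample.extend(random.sample(members, k))  # random *within* the stratum
    return sample

# Toy sampling frames: ID strings standing in for real people.
frames = {name: [f"{name}-{i}" for i in range(10_000)] for name in strata_weights}
s = stratified_sample(frames, strata_weights, 1_000)
print(len(s))  # 1000
```

Each subgroup is guaranteed its proportional share of the sample, while members within each subgroup are still chosen at random; in practice pollsters also reweight responses afterward when the achieved sample misses the targets.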

Selection Bias  (Criticizing Premise 1)
When the way a sample is chosen causes sample bias (often inadvertently), this is called selection bias. If possible, track down the original poll being quoted and read the methods and interview sections.

E.g., Alt-med study with non-English speakers
E.g., Doctors on aiding executions (response rates, state policy)
E.g., Internet surveys, landline surveys
E.g., The Literary Digest predicts a 370 to 161 electoral-vote win for Landon over Roosevelt (1936).
E.g., Right-wing polls re: first debate vs Prediction markets

Common causes of selection bias: an ideological polling group, self-selection, pre-screening of trial participants, discounting trial subjects/tests that did not run to completion, and migration bias (excluding subjects who have recently moved into or out of the study area).

How to avoid sample bias:  Make sure all relevant subgroups are represented through random sampling + weighting.

Tip:  If possible, track down the original poll being quoted and read the methods and interview sections.

How Were the Results Reported in the Media/Headline vs What Does the Actual Poll/Article Say?

E.g.  "More than 40% of US Physicians would Aid Executions"  Study




Homework 10B
P. 240 Ex 9B All
