# Introduction to Experiments # ## February 4 ## --- # Outline for today # 1. **Introductions** 2. Overview of course 3. Introduction to experiments 4. Preview of next week 5. In-class exercise --- # Introductions # - Name tags - Go-around - Who are you? - What do you want to do after your education? --- # Outline for today # 1. Introductions 2. **Overview of course** 3. Introduction to experiments 4. Preview of next week 5. In-class exercise --- .left-column[ ### Overview ] .right-column[ - Meet for 10 weeks - Small assignments on some weeks (presentations, etc.) - Synopsis presentations on:
Mar 25, Apr 8, Apr 15 - Individual meetings with me after April 15 - Light reading load ] --- .left-column[ ### Overview ### Exam ] .right-column[ - Propose an experimental study on a relevant topic from any area of political science - Topic is completely up to you - May be useful preparation for a master's thesis - Assume 400 pages of individual reading for the exam ] --- .left-column[ ### Overview ### Exam ] .right-column[ Contents: - Question, theory, and hypotheses - Design - Stimulus/treatment materials - All measures - Complete "protocol" - Planned statistical analysis - Account for possible data challenges - Discuss feasibility and ethics - Discuss external validity and contribution ] ??? ## Protocol: We'll talk about it later today Reading for next week --- .left-column[ ### Overview ### Exam ### Schedule ] .right-column[ Part 1 - 4.1 Introduction to Political Science Experiments (Feb 4) - 4.2 Concepts, Research Questions, and Hypotheses (Feb 11) - 4.3 Internal Validity and Experimental Design (Feb 18) - 4.4 Analysis of Experiments (Feb 25) - 4.5 Practical Issues and Challenges (Mar 4) ] --- .left-column[ ### Overview ### Exam ### Schedule ] .right-column[ Part 2 - 4.6 Examples: Laboratory Experiments (Mar 11) - 4.7 Examples: Field Experiments (Mar 18) - 4.8 Examples: Survey Experiments (Mar 25) **Presentations start on Mar 25** ] --- .left-column[ ### Overview ### Exam ### Schedule ] .right-column[ Part 3 - *No class (Apr 1)* - 4.9 External Validity (Apr 8) - 4.10 Effect Sizes, Meta-Analysis, Decision Making (Apr 15) **Presentations on Apr 8 and Apr 15** ] --- # Outline for today # 1. Introductions 2. Overview of course 3. **Introduction to experiments** 4. Preview of next week 5. In-class exercise --- name: History # History of experiments # - American Political Science Association president A. Lawrence Lowell: > "We are limited by the impossibility of experiment. Politics is an observational, not an experimental science."
- Experiments prominent in psychology, natural sciences - King, Keohane, and Verba (1994) only mentions experiments once - Since ~2000, "credibility revolution" ??? Holland (1986) is the foundational article for causal inference He was a collaborator of Donald Rubin at Harvard, to whom contemporary statistical thinking about causality is typically attributed Rubin attributes everything to a Polish statistician named Jerzy Neyman who primarily worked at UC-Berkeley You can read Neyman's original paper later in the course (Week 4); optional --- # Uses of Experiments # Alvin Roth, Stanford, 2012 Nobel Prize winner - Searching for facts - Speaking to theorists - Whispering in the ears of princes ??? ## Search for facts: Iyengar, Peters, and Kinder (1982) Standing wisdom was that media had no effect on the public Used experiments to show direct and substantial effects in a laboratory setting Read in week 6 Clarke et al. (1999) vs. Inglehart (various) Inglehart's findings are an artifact of measurement, not a decline in postmaterial values ## Speaking to theorists: Fiorina and Plott (1978) Seminal article that tests a formal model of majority rule decision making The formal theory model better predicted outcomes than rival explanations Experiments by Kahneman, Tversky, and collaborators Read one for today ## Whispering to princes: Gerber and Green (2000) Should political campaigns expend effort to mobilize citizens to vote? Some techniques work and others do not --- # Types of Experiments # - Lab: treatment occurs in a controlled research environment - Field: treatment occurs in the course of everyday life - Survey: treatment occurs outside the control of the researcher --- # Causality # - Correlation ??? 
## Correlation We often think about causality in terms of correlation This isn't quite right --- class: middle, center ![xkcd: Correlation](http://imgs.xkcd.com/comics/correlation.png) --- # Causality # - Correlation - Physical causality - Philosophical perspectives ??? ## We do better with physical causality If I hold my book in the air and then let go, the book drops I caused the book to drop There was a causal effect Causality in other contexts is much harder to see Lots of theorists have tried to define causality --- # Hume # Three tenets 1. Spatial/temporal contiguity 2. Temporal succession 3. Constant conjunction ??? Holland (p.950) Deterministic view of causality 1. A cause must be contiguous with its effect in space and time 2. Effect follows from cause 3. Causal determinism --- # Four (or five) principles of causality # A more modern take involves 4-5 principles: - Relationship - Direction (temporality) - Nonconfounding - Mechanism - Appropriate level of analysis --- # Mill's Methods # .left-column[ ### Agreement ] .right-column[ > If two or more instances of the phenomenon under investigation have only one circumstance in common, the circumstance in which alone all the instances agree, is the cause (or effect) of the given phenomenon. ] --- # Mill's Methods # .left-column[ ### Agreement ### Difference ] .right-column[ > If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance save one in common, that one occurring only in the former; the circumstance in which alone the two instances differ, is the effect, or cause, or a necessary part of the cause, of the phenomenon. ] ??? 
For the purposes of our course, Mill's method of difference is our definition of a causal effect --- # Mill's Methods # .left-column[ ### Agreement ### Difference ### Agree & Diff ] .right-column[ > If two or more instances in which the phenomenon occurs have only one circumstance in common, while two or more instances in which it does not occur have nothing in common save the absence of that circumstance; the circumstance in which alone the two sets of instances differ, is the effect, or cause, or a necessary part of the cause, of the phenomenon. ] --- # Mill's Methods # .left-column[ ### Agreement ### Difference ### Agree & Diff ### Residue ] .right-column[ > Subduct from any phenomenon such part as is known by previous inductions to be the effect of certain antecedents, and the residue of the phenomenon is the effect of the remaining antecedents. ] --- # Mill's Methods # .left-column[ ### Agreement ### Difference ### Agree & Diff ### Residue ### Concomitant variations ] .right-column[ > Whatever phenomenon varies in any manner whenever another phenomenon varies in some particular manner, is either a cause or an effect of that phenomenon, or is connected with it through some fact of causation. ] ??? This is just correlation!!! --- # Causal Terminology # - Unit: A physical object at a particular point in time - Treatment: An intervention whose effects we wish to assess relative to some other (non-)intervention - Potential outcomes: The outcome for each unit that we would observe if that unit received each treatment - Multiple potential outcomes for each unit, but we only observe one of them - Causal effect: The comparison between the unit-level potential outcomes under each intervention - Average causal effect ??? Both Holland and Imai et al. 
use the same terminology, but slightly different notation --- # Potential Outcomes # Causal inference is about estimating **what would have happened** in a counterfactual reality --- # Potential Outcomes # Causal inference is about estimating **what would have happened** in a counterfactual reality Has anyone read or seen *A Christmas Carol*? ??? 1843 novel by Charles Dickens In Danish: *Et juleeventyr* Ebenezer Scrooge is shown his own future by the "Ghost of Christmas Yet to Come" He has the choice to stay on his current path (in the control condition) or change his ways (take a different treatment) The causal effects of his actions are seen in the differences between the counterfactuals Experiments try to do the same with real phenomena --- # Fundamental problem of causal inference # But we can only observe any given unit in one reality! ??? Think of having a headache You can either do nothing or you can take some paracetamol You want to do the one that reduces your headache, but you can only do one or the other We only see the outcome for whichever action you take, not the counterfactual outcome If you take the paracetamol and your headache goes away, can we know if it "worked"? --- # Scientific solution # - Used in physical sciences (e.g., agriculture) - Two strategies: - Take the same unit and expose it to both treatments at different points in time - Take two similar units and expose them to the two treatments at the same time - Requires constant effect assumption: - The past does not matter - Also requires homogeneity of units assumption - Units are identical (or differences are irrelevant) --- # Statistical solution # - Random assignment - Observation of average causal effects --- # Causal inference in political science # Traditional observational research approach: - The observation of one or more units. ??? 
Doesn't really satisfy the philosophical requirements of causation No random assignment --- # Causal inference in political science # Traditional observational research approach: - The observation of one or more units. Experimental approach: - Observation plus intervention --- # "Perfect Doctor" # True potential outcomes (unobservable in reality) | Unit | Y(0) | Y(1) | | ---- | ---- | ---- | | 1 | 13 | 14 | | 2 | 6 | 0 | | 3 | 4 | 1 | | 4 | 5 | 2 | | 5 | 6 | 3 | | 6 | 6 | 1 | | 7 | 8 | 10 | | 8 | 8 | 9 | | *Mean* | *7* | *5* | ??? Pretend we have life expectancy data for 8 patients We get "God's data", which shows the outcome each patient would have under treatment and under control Clearly, the control is better. On average, patients live longer in the control condition. --- # "Perfect Doctor" # How observational data can mislead | Unit | Y(0) | Y(1) | | ---- | ---- | ---- | | 1 | ? | 14 | | 2 | 6 | ? | | 3 | 4 | ? | | 4 | 5 | ? | | 5 | 6 | ? | | 6 | 6 | ? | | 7 | ? | 10 | | 8 | ? | 9 | | *Mean* | *5.4* | *11* | ??? Now we only get to see one potential outcome per patient But our data are not from an experiment A "perfect doctor" has assigned each patient to the best treatment for them Now treatment looks much better than control, even though on average it is worse (This is basic selection bias) Look ahead to Shadish, Cook, and Campbell (Ch.8, 246--257) for more on randomization --- # Definition of an experiment # - Minimum definition > The observation of one or more units after an intervention in a controlled setting. - More complete definition > The observation of units after, and possibly before, a randomly assigned intervention in a controlled setting, which tests one or more precise causal expectations. --- # Elements of an experiment # 1. Physical intervention 2. Control 3. Treatment assignment independent of potential outcomes 4. Treatment assignment independent of all confounding variables ??? 1. 
Observational research looks out at the world; experimental work intervenes 2. Experimental research requires control - Mill's method of difference - Everything must be the same for every unit in an experiment, except for the treatment 3. No selection bias - That being in one condition or the other is uncorrelated with the values of the dependent variable 4. No confounding; via randomization - Being in one condition or another is also uncorrelated with any other independent variable that is related to the dependent variable - In expectation, treatment and control groups are identical Observational studies guarantee none of these 1. No intervention 2. Little control 3. Lots of selection bias 4. Lots of confounding Differences between this and quasi-experiments 1. Intervention enacted by the researcher 2. Some control 3. Some selection bias 4. Some confounding --- # Outline for today # 1. Introductions 2. Overview of course 3. Introduction to experiments 4. **Preview of next week** 5. In-class exercise --- # Next week: Readings # - Shadish, Cook, and Campbell on research design - Chapter from Gerring (I will send this to you via email) - A short article by me explaining what goes into an experimental protocol - Gives you a sense of details for the exam - An example experiment by Druckman and Nelson ??? **Was everyone able to find the textbook?** --- # Next week: Assignment # Complete a summary of the experiment by Druckman and Nelson --- # Outline for today # 1. Introductions 2. Overview of course 3. Introduction to experiments 4. Preview of next week 5. **In-class exercise** --- # In-class exercise # How do we read experimental literature? - Research question - Theory/hypotheses - Variables - Design - Data collection/protocol - Analysis - Results/findings ??? 
Pass out worksheet --- # Kahneman and Tversky # Try to summarize Kahneman and Tversky in this way - Research question - Theory/hypotheses - Variables - Design - Data collection/protocol - Analysis - Results/findings
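---

# Appendix: "Perfect Doctor" in code #

A minimal Python sketch, not part of the original course materials, of the "Perfect Doctor" table and the statistical solution. It computes the true average causal effect from the (normally unobservable) potential outcomes, shows how the perfect doctor's selective assignment reverses the sign of the estimate, and checks that random assignment recovers the truth on average. Variable names are mine.

```python
import random

# Potential outcomes for the 8 patients from the slides (life expectancy)
y0 = [13, 6, 4, 5, 6, 6, 8, 8]   # Y(0): outcome under control
y1 = [14, 0, 1, 2, 3, 1, 10, 9]  # Y(1): outcome under treatment

# True average causal effect: mean(Y1) - mean(Y0) = 5 - 7 = -2
true_ate = sum(y1) / 8 - sum(y0) / 8

# The "perfect doctor" sends each patient to whichever condition is
# better for them, so we observe only the selected potential outcome
treated = [b for a, b in zip(y0, y1) if b > a]   # observed Y(1): 14, 10, 9
control = [a for a, b in zip(y0, y1) if b <= a]  # observed Y(0): 6, 4, 5, 6, 6
naive = sum(treated) / len(treated) - sum(control) / len(control)

print(true_ate)  # -2.0: on average, control is better
print(naive)     # about 5.6 (11 vs. 5.4): selection bias flips the sign

# Statistical solution: randomly assign 4 of the 8 units to treatment;
# averaged over many assignments, the difference in means is unbiased
random.seed(1)
estimates = []
for _ in range(10_000):
    t = set(random.sample(range(8), 4))
    tr = [y1[i] for i in t]                       # observed treated outcomes
    co = [y0[i] for i in range(8) if i not in t]  # observed control outcomes
    estimates.append(sum(tr) / 4 - sum(co) / 4)
rand_est = sum(estimates) / len(estimates)
print(round(rand_est, 1))  # close to the true effect of -2
```

Note that any single random assignment still yields a noisy estimate; unbiasedness is a property of the procedure in expectation, which is why the slides speak of *average* causal effects.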