
“Survey Experiments in Practice” Short Course

This page holds materials for a short course called “Survey Experiments in Practice”, which I teach from time to time in various places. This course was first taught at the RECSM (Universitat Pompeu Fabra) Barcelona Summer School in Survey Methodology. Materials for the 2018 version of that course are below.

Overview

Survey experiments have emerged as one of the most powerful methodological tools in the social sciences. By combining the clear causal inference of experimental designs with the flexibility of the survey context as a site for behavioral research, survey experiments can be used in almost any field to study almost any question. Conducting survey experiments can appear fairly simple, but doing them well is hard.

This course will use published examples of experimental research to demonstrate a variety of ways to leverage survey experiments for testing social science theories. It will teach participants how to use different survey experimental designs and how to address challenges related to sampling, survey mode, ethics, effect heterogeneity, and more. Participants will leave the course with a thorough understanding of how survey experiments can provide useful causal inferences, knowledge of how to design and analyze simple and complex experiments, and the ability to evaluate experimental research and apply these methods in their own research.


UPF-RECSM Seminar, 2018

The next full iteration of this course will be taught at the Barcelona Summer School in Survey Methodology at Universitat Pompeu Fabra. The short course involves two 4-hour sessions, scheduled to run from 14:00 to 18:00 on the afternoons of June 28-29, 2018. Slides, readings, and materials for each session of the course are available here.

Syllabus and Schedule

An outline of the course is given below. All of the readings are available here.

Session 1: Survey Experiments in Context (June 28, 14:00-16:00)

The first session will provide an overview of the course, discuss the history of survey experiments and experiments in general, and introduce a conceptual and notational framework for designing, analyzing, and discussing experiments.
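To make the potential outcomes notation concrete ahead of the session, here is a minimal simulation sketch in Python (illustrative only, not course code; all quantities are invented) showing why random assignment lets a simple difference in means recover the average treatment effect:

```python
# A minimal sketch of the potential outcomes framework; all values
# here are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Each respondent has two potential outcomes: y0 if assigned to
# control and y1 if assigned to treatment. Only one is ever observed.
y0 = rng.normal(loc=50, scale=10, size=n)
y1 = y0 + 5  # a constant true treatment effect of 5

# Random assignment makes the observed difference in means an
# unbiased estimator of the average treatment effect (ATE).
treated = rng.random(n) < 0.5
observed = np.where(treated, y1, y0)

ate_hat = observed[treated].mean() - observed[~treated].mean()
print(f"True ATE: 5.00; estimated ATE: {ate_hat:.2f}")
```

Because each respondent reveals only one of the two potential outcomes, individual-level effects are never observed; randomization is what makes the group-level comparison causally interpretable.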

Class Schedule

  • 14:00-14:30 - Introductions and Course Overview
  • 14:30-15:00 - History of the Survey Experiment (and Experiments, generally)
  • 15:00-16:00 - Potential Outcomes Framework of Causality

Readings

  • Required: Holland, P. W. 1986. “Statistics and Causal Inference.” Journal of the American Statistical Association 81: 945-960.
  • Druckman, J. N., Green, D. P., Kuklinski, J. H., and Lupia, A. 2006. “The Growth and Development of Experimental Research in Political Science.” American Political Science Review 100: 627-635.
  • Kuklinski, J. H. and Hurley, N. L. 1994. “On Hearing and Interpreting Political Messages: A Cautionary Tale of Citizen Cue-Taking.” The Journal of Politics 56: 729-751.

Session 2: Examples and Paradigms (June 28, 16:00-18:00)

While the first session demonstrates the advantages of experimentation as a research design, designing experiments can be challenging without a solid grounding in a relevant theoretical literature. This session will introduce common paradigms for survey experimental research and discuss how to design experiments to test social science theories.

Class Schedule

  • 16:00-16:30 - Translating Theories into Experiments
  • 16:30-18:00 - Paradigms (Question Wording, Vignettes, Sensitive items, etc.)
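As a concrete taste of one paradigm above, the following sketch (fabricated counts, purely illustrative) shows the logic of the list experiment for sensitive items taken up in the Glynn reading below: control respondents count how many of J innocuous items apply to them, treatment respondents see the same list plus the sensitive item, and the difference in mean counts estimates the sensitive item’s prevalence.

```python
# An illustrative sketch of the list experiment's difference-in-means
# estimator; the data-generating process is fabricated.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000

# Control sees J = 4 innocuous items; treatment sees the same items
# plus one sensitive item. Respondents report only a count of items
# that apply, never which items.
true_prevalence = 0.15
baseline_counts = rng.binomial(4, 0.4, size=n)    # innocuous items
holds_sensitive = rng.random(n) < true_prevalence
treated = rng.random(n) < 0.5
reported = baseline_counts + (treated & holds_sensitive)

# Under no design effects and no liars, the mean difference between
# groups estimates the prevalence of the sensitive item.
prevalence_hat = reported[treated].mean() - reported[~treated].mean()
print(f"True prevalence: {true_prevalence:.2f}; estimated: {prevalence_hat:.2f}")
```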

Readings

  • Schuldt, J. P., Konrath, S. H., and Schwarz, N. 2011. “‘Global Warming’ or ‘Climate Change’?: Whether the Planet is Warming Depends on Question Wording.” Public Opinion Quarterly 75: 115-124.
  • Banerjee, A., Green, D. P., McManus, J., and Pande, R. 2014. “Are Poor Voters Indifferent to Whether Elected Leaders Are Criminal or Corrupt? A Vignette Experiment in Rural India.” Political Communication 31(3): 391-407.
  • Glynn, A. N. 2013. “What Can We Learn with Statistical Truth Serum?: Design and Analysis of the List Experiment.” Public Opinion Quarterly 77: 159-172.
  • Albertson, B. L. and Lawrence, A. 2009. “After the Credits Roll: The Long-Term Effects of Educational Television on Public Knowledge and Attitudes.” American Politics Research 37: 275-300.
  • Hainmueller, J. and Hopkins, D. J. 2015. “The Hidden American Immigration Consensus: A Conjoint Analysis of Attitudes toward Immigrants.” American Journal of Political Science 59(3): 529-548.
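The conjoint design in the Hainmueller and Hopkins piece above rests on independently randomizing every attribute of each profile. Here is a small sketch of that randomization step (the attributes and levels are invented for this example, not taken from the paper):

```python
# A hypothetical conjoint profile generator; attributes and levels
# are invented for illustration.
import random

ATTRIBUTES = {
    "education": ["no formal", "high school", "college degree"],
    "occupation": ["farmer", "nurse", "engineer"],
    "language skills": ["fluent English", "broken English", "none"],
}

def draw_profile():
    # Independent, uniform randomization over levels is what
    # identifies average marginal component effects (AMCEs).
    return {attr: random.choice(levels) for attr, levels in ATTRIBUTES.items()}

# A respondent typically evaluates two profiles side by side per task:
pair = (draw_profile(), draw_profile())
print(pair)
```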

Session 3: Hands-On Practice Session (June 29, 14:00-16:00)

This session will involve the application of course material to the development of survey experiments relevant to students’ own research.

Class Schedule

  • 14:00-14:45 - Students develop experimental designs in small groups
  • 14:45-15:45 - Presentations
  • 15:45-16:00 - Large group discussion of experimental design

Readings

The readings for this portion consist of applied examples from Time-Sharing Experiments for the Social Sciences (TESS).

Session 4: Practical Issues (June 29, 16:00-18:00)

This session will cover a number of remaining issues, especially related to the practical implementation of survey experiments.

Class Schedule

  • 16:00-16:30 - External Validity and the SUTO Framework
  • 16:30-17:00 - Lingering Issues (Attention, Satisficing, Self-Selection, Ethics)
  • 17:00-17:45 - Handling of “Broken Experiments”
  • 17:45-18:00 - Summary and Conclusion

Readings

  • Gaines, B. J., Kuklinski, J. H., and Quirk, P. J. 2007. “The Logic of the Survey Experiment Reexamined.” Political Analysis 15: 1-20.
  • Clifford, S. and Jerit, J. 2015. “Do Attempts to Improve Respondent Attention Increase Social Desirability Bias?” Public Opinion Quarterly 79: 790-802.
  • Miratrix, L. W., Sekhon, J. S., Theodoridis, A. G., and Campos, L. F. 2018. “Worth Weighting? How to Think About and Use Weights in Survey Experiments.” Political Analysis: In press.
  • Bolsen, T. 2013. “A Light Bulb Goes On: Norms, Rhetoric, and Actions for the Public Good.” Political Behavior 35: 1-20.
  • Hainmueller, J., Hangartner, D., and Yamamoto, T. 2015. “Validating Vignette and Conjoint Survey Experiments Against Real-World Behavior.” Proceedings of the National Academy of Sciences: In press.
  • Druckman, J. N. and Leeper, T. J. 2012. “Learning More from Political Communication Experiments: Pretreatment and Its Effects.” American Journal of Political Science 56: 875-896.
  • Hertwig, R. and Ortmann, A. 2008. “Deception in Experiments: Revisiting the Arguments in Its Defense.” Ethics & Behavior 18: 59-92.
  • Mullinix, K. J., Leeper, T. J., Druckman, J. N., and Freese, J. 2015. “The Generalizability of Survey Experiments.” Journal of Experimental Political Science.
  • Leeper, T. J. “The Role of Media Choice and Media Effects in Political Knowledge Gaps.” Working paper, London School of Economics and Political Science.


Further Reading

Though not assigned for the course, the following texts may serve as useful background reading or places for further inspiration in the design and analysis of survey experiments.

Books

  • Gerber, A.S. and Green, D.P. 2012. Field Experiments: Design, Analysis, and Interpretation. New York: W.W. Norton.
  • Groves, R.M., et al. 2009. Survey Methodology. Wiley-Interscience.
  • Morgan, S.L. and Winship, C. 2015. Counterfactuals and Causal Inference. 2nd Edition. New York: Cambridge University Press.
  • Mutz, D.C. 2011. Population-Based Survey Experiments. Princeton, NJ: Princeton University Press.
  • Schuman, H. and Presser, S. 1981. Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording, and Context. SAGE Publications.
  • Glennerster, R. and Takavarasha, K. 2013. Running Randomized Evaluations: A Practical Guide. Princeton, NJ: Princeton University Press.
  • Auspurg, K. and Hinz, T. 2015. Factorial Survey Experiments. SAGE Publications.

Survey, Experimental, and Survey-Experimental Methodology

Sensitive Items

  • Tourangeau, R. and Smith, T. W. 1996. “Asking Sensitive Questions: The Impact of Data Collection Mode, Question Format, and Question Context.” Public Opinion Quarterly 60: 275-304.
  • Blair, G. and Imai, K. 2012. “Statistical Analysis of List Experiments.” Political Analysis 20: 47-77.
  • Kreuter, F., Presser, S., and Tourangeau, R. 2009. “Social Desirability Bias in CATI, IVR, and Web Surveys: The Effects of Mode and Question Sensitivity.” Public Opinion Quarterly 72: 847-865.

Mediation

  • Jamieson, J. P. and Harkins, S. G. 2011. “The Intervening Task Method: Implications for Measuring Mediation.” Personality & Social Psychology Bulletin 37: 652-661.
  • Green, D. P., Ha, S. E., and Bullock, J. G. 2009. “Enough Already about ‘Black Box’ Experiments: Studying Mediation is More Difficult than Most Scholars Suppose.” The ANNALS of the American Academy of Political and Social Science 628: 200-208.
  • Imai, K., Keele, L. Tingley, D., and Yamamoto, T. 2011. “Unpacking the Black Box: Learning about Causal Mechanisms from Experimental and Observational Studies.” American Political Science Review 105(4): 765-789.

Sampling and Representativeness

  • Wang, W., Rothschild, D., Goel, S., and Gelman, A. 2015. “Forecasting Elections with Non-representative Polls.” International Journal of Forecasting: In press.
  • Chandler, J., Paolacci, G., Peer, E., Mueller, P., and Ratliff, K. A. 2015. “Using Nonnaive Participants Can Reduce Effect Sizes.” Psychological Science: In press.
  • Banducci, S. and Stevens, D. 2015. “Surveys in Context: How Timing in the Electoral Cycle Influences Response Propensity and Satisficing.” Public Opinion Quarterly 79: 214-243.

Factorial Experiments

  • Hainmueller, J., Hopkins, D. J., and Yamamoto, T. 2014. “Causal Inference in Conjoint Analysis: Understanding Multi-Dimensional Choices via Stated Preference Experiments.” Political Analysis 22: 1-30.

Treatment Preferences

  • Hovland, C. I. 1959. “Reconciling Conflicting Results Derived from Experimental and Survey Studies of Attitude Change.” American Psychologist 14: 8-17.
  • Leeper, T. J. 2017. “How Does Treatment Self-Selection Affect Inferences About Political Communication?” Journal of Experimental Political Science 4(1): 21-33.

Ethics

  • Sterling, T. D., Rosenbaum, W. L., and Weinkam, J. 1995. “Publication Decisions Revisited: The Effect of the Outcome of Statistical Tests on the Decision to Publish and Vice Versa.” The American Statistician 49: 108-112.
  • Franco, A., Malhotra, N., and Simonovits, G. 2015. “Underreporting in Political Science Survey Experiments: Comparing Questionnaires to Published Results.” Political Analysis 23: 306-312.

General Statistics

  • Gelman, A., and Stern, H. 2006. “The Difference Between ‘Significant’ and ‘Not Significant’ is Not Itself Statistically Significant.” The American Statistician 60(4): 328-331.

Instructor Bio

Thomas J. Leeper is an Assistant Professor in Political Behaviour in the Department of Government at the London School of Economics and Political Science. He studies public opinion dynamics using survey and experimental methods, with a focus on citizens’ information acquisition, elite issue framing, and party endorsements within the United States and Western Europe. His research has been published in leading journals, including American Political Science Review, American Journal of Political Science, Public Opinion Quarterly, and Political Psychology.


Host a workshop/course/seminar!

Feel free to contact me if you would like me to put on a workshop about survey experiments. I’m willing to travel and can teach this material in workshops ranging from 1-2 hours to several days!

Past Workshops

Information on previous workshops is available at the following links: