My First Project and Most Recent Publication

16 Sep 2017

Academia is a marathon. While it can often feel like a sprint, with deadlines fast approaching and crowds running past you, ultimately everything takes more time than you first think. Case in point: my first ever graduate research project, way back in 2010, which ultimately became a chapter in the dissertation I finished in 2012, was finally accepted for publication last year in the Journal of Experimental Political Science and is online as of September 14, 2017. I never thought it would take eight years to see this research come to light, nor did a much more naive and much younger version of me imagine that it would go through a series of (sometimes painful) rejections. Yet the experience of carrying this project from a nascent idea in the back of an early graduate student's mind through to publication in a peer-reviewed journal fundamentally transformed how I think about science, and specifically open science. Here's the story of this paper.

I started graduate school in 2008, fresh out of my undergraduate degree at the University of Minnesota, with the goal of studying something vaguely at the intersection of political theory, history, and political psychology. Actually, to give you a taste of my mindset, here’s an excerpt from my graduate admissions essay that somehow got me into Northwestern:

I have become interested in exploring the use of selective histories, political myths, and generational analogies as persuasive tools for priming, framing and agenda-setting in elite rhetoric and as frames in which voters construct attitudes toward current events. I hope to research how the relationship between elite-level and media interpretations of historical events both retell narratives of shared experience and create tension between different ideologies and frames.

To be clear: I never did anything on “selective histories, political myths, or generational analogies”. I did, however, remain interested in elite political communication and its psychological effects. But two years of coursework got me much more interested in the question of citizen competence and the measurement of political knowledge in particular. I spent much of 2009 and 2010 trying to think up a way to cleverly demolish the existing literature on political knowledge. To convey how long I thought about that, a visual of my version control system:

[Image: “prospectus-files” — a folder full of prospectus drafts]

Version control was not my strong suit. The first of those drafts proposed that I spend my dissertation trying to understand the following:

What is political knowledge and why does it matter? To answer this question I pose three specific research questions and experimental designs to address them: (1) on the dimensionality of political knowledge, (2) on the acquisition of political knowledge, and (3) on the implications of political knowledge for attitude formation and decision-making.

As it would turn out, only the second of those three questions ever became part of my dissertation - indeed, the question of how citizens acquire information is ultimately the only thing I actually studied in my dissertation research.

The first idea for the dissertation was simple: how does attitude strength change the way that people engage with political information? This seemed important because the literature on political communication at the time (and still to this day) adopts one of two paradigms: either researchers randomly expose participants to messages and see what happens, or they let people choose information and study the choices per se. My idea was to compare those two approaches and look at the downstream effects, and, given that attitude strength might change both what kind of information people acquire and how they respond to it, to throw that into the mix as well. What came out was a relatively simple survey experiment, which I was able to piggyback onto another survey that my advisor, Jamie Druckman, was already running in the spring of 2011. It took two years to come up with this first dissertation project idea and about six months to design it, field it, and get the data processed.

Eventually, it made its way into my dissertation and helped me to conceptualize and design the subsequent two empirical chapters. Oddly enough, the other two papers came out first: one in the American Political Science Review in 2012 and the other in Public Opinion Quarterly two years later. “Chapter 1”, as it lovingly remained known in my Dropbox folder, lingered on. It was rejected at several journals: American Journal of Political Science, The Journal of Politics, and Political Behavior. I abandoned it for a while.

Then the Journal of Experimental Political Science came around, with a first issue published in 2014. I submitted the paper there in late 2015, received an R&R, resubmitted in mid-2016, received a conditional acceptance, some stuff happened at the journal, and I received confirmation it would be published in August of 2017. It is now online. It’s been a long history.

What have I learned from this? First, everything takes time. Coming up with compelling research takes time. Data collection takes time. Data analysis takes time. Writing takes time. Peer review takes time. Rejection takes time. Recovery from rejection takes time. Responding to reviewers takes time. Typesetting takes time. Email takes time. Writing this blog post takes time. Everything takes time. It’s been eight years but that time has brought a publication I’m quite proud of.

But that’s a rather trivial thing to have learned. More importantly, I learned a few things about science and a few things about what I would like the process of science to look like:

  1. I learned that peer review processes are often harsh and heartbreaking. This was my first paper, my academic brainchild. People trashed it. In retrospect, that trashing often contained reasonable feedback, but it was couched in language that was sometimes dismissive and hurtful. On the other hand, sometimes the feedback was immensely helpful. Editors at both Political Behavior (shout out to Jeff Mondak) and JEPS (shout out to Eric Dickson) went out of their way to help me improve the paper and think about how to get it into publication. From those experiences, I learned that reviewing needs to be serious but it also needs to be kind. That’s why I started #BeReviewer1.

  2. I learned that I was disorganized. Once I’d written a few dozen drafts of this paper, anonymizing it and formatting it for one journal, then deanonymizing it and reformatting it for the next, it became clear that my project management skills were a mess. I got interested in version control software and the basic idea of “getting yourself organized” as an essential part of any scientific project. I learned to start projects differently, using a standardized file structure, and to never label files “version 1”, “version 2”, “final”, etc. I increasingly use git to track changes to projects, and I learned that it’s important to nudge students and collaborators to do the same.

  3. I learned that statistical methods - like the ones used in the paper - do not get used unless there’s software to make them happen. Around the time I was analyzing my data (in 2011, specifically), Brian Gaines and Jim Kuklinski published a paper in AJPS that proposed new techniques for analyzing the very kind of experiment I had designed. Eventually, my paper adopted the analytic methods they proposed; I hacked together some code to implement their estimators. But later, in a review process, an editor caught a computational error in my results. Hacking together an estimator led me to calculate some key statistics incorrectly; they were forgiving and I fixed the error before it went to print. But this taught me that we need reliable tools anytime we use new methods. For that reason, I wrote a package for R, called GK2011 (after the original publication), that implements their methods in a consistent interface, supported by a complete replication of their analyses, and an open-source repository on GitHub that facilitated version control, contributions from other users, automated testing, and general transparency. (A small sketch of that kind of defensive coding appears after this list.) I learned that doing applied research often requires this kind of software development work, even though it is rarely rewarded.

  4. I learned about the need for analytic and methodological transparency. This project involved an innovative method (that reviewers initially found confusing) and analysis that was not simply out-of-the-box regression. I put all of the code, data, materials, and the source code for the manuscript online in a persistent data repository long before the paper was ever published. During the years that this paper was not yet an article, I learned that authors are rarely so transparent. Now I hold myself to a higher standard.

  5. I also learned that paywalls prevent researchers - especially those outside of the richest countries, and, even within rich countries, those outside of the richest universities - from accessing new research. A preprint of this paper will always be freely available online, and it won’t be hosted at a place like the Elsevier-owned SSRN.
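
To make the lesson from point 3 concrete, here is a minimal, hypothetical sketch in R - not the actual GK2011 interface or the estimators from the paper - of the kind of defensive coding I wish I had practiced from the start: wrap a hand-rolled bootstrap estimator in a small function and sanity-check it against a benchmark you can compute by hand, so that computational slips are caught before an editor has to catch them for you.

```r
# Hypothetical illustration (not the GK2011 package API): a hand-rolled
# estimator wrapped in a function with a built-in sanity check, rather than
# loose script code where an error can slip through unnoticed.

# Bootstrapped difference in means between a treated and a control group.
boot_diff_means <- function(y_treat, y_control, reps = 2000, conf = 0.95) {
  est <- mean(y_treat) - mean(y_control)
  boot <- replicate(reps, {
    mean(sample(y_treat, replace = TRUE)) - mean(sample(y_control, replace = TRUE))
  })
  alpha <- 1 - conf
  list(estimate = est,
       se = sd(boot),
       ci = quantile(boot, probs = c(alpha / 2, 1 - alpha / 2)))
}

# Sanity check against a quantity we can verify directly.
set.seed(42)
treated <- rnorm(100, mean = 0.5)
control <- rnorm(100, mean = 0.0)
out <- boot_diff_means(treated, control)
stopifnot(abs(out$estimate - (mean(treated) - mean(control))) < 1e-10)
out
```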

These are some lessons learned from a research marathon that started when I was a second-year graduate student and ended when I was a tenured professor. Research takes time, it’s better when done openly and carefully, and ultimately it’s worth the wait.



Tags: open science, open source software, political science, R, CRAN, Dataverse

Except where noted, this website is licensed under a Creative Commons Attribution 4.0 International License. Views expressed are solely my own, not those of any current, past, or future employer.