“Most current published research findings are false,” we are told in John Ioannidis’s now-famous PLoS Medicine article. Questions about study replication, about the reproducibility and transparency of published findings, and about scientific fraud have never been more salient. I am starting to believe that little, if anything, we claim to know about the social world generalizes terribly well.
Yet, against this background of immense uncertainty about human knowledge, there are also ever-louder calls for public relevance and policy impact. The push for policy impact is particularly strong in the United Kingdom, where the Research Excellence Framework requires researchers not only to show efforts at public engagement but also to demonstrate actual changes in public policy. Funding applications to Research Councils UK require “pathways to impact” statements that document how research findings will be translated into action, often in direct collaboration with research end-users.
The call for research impact makes sense (at some level) if the mission of publicly funded universities is to be a source of knowledge for policymakers. At the same time, however, the demand for impact - particularly in the short term (i.e., at the level of individual studies and grants) - massively increases the risks associated with acting on research. Asking researchers to believe confidently enough in their findings to publish them is one thing; asking them to believe that those findings should become policy sets the bar too high, opening up new risks of poorly informed policymaking.
Tom Pepinsky, Brendan Nyhan, Ingo Rohlfing, and I had a little conversation about this on Twitter in the aftermath of the LaCour/Green scandal. I argued that we should never take any action on the basis of a single study. The LaCour and Green claims seem to be a perfect example of why: the “Yes” campaign in the Irish referendum on same-sex marriage adopted LaCour’s intervention, informed by the now-retracted study, as the best route to persuading Irish voters. Such high-profile use of a single study would be the makings of a perfect REF Impact Case Study, if only the study weren’t completely fraudulent. Despite the “Yes” camp eventually prevailing in Ireland, Kieran Healy points out that it is impossible to know whether the personal conversation method helped or was a pointless waste of resources.
But the high-risk nature of demands for research impact is not confined to cases of fraud. Whether we are discussing the efficacy of medical treatments, Get-Out-The-Vote techniques, development economics, persuasion, health communication, or any other social scientific domain, I stand by my argument that no single study alone should inform policy. Rather than reargue this point at length, I’ll let Macartan Humphreys say exactly what I want to say through a discussion of the claimed efficacy of deworming efforts:
“The critical point though is that risks of error make dependency on one paper even more risky. When inferences for policy are drawn from many independent studies, there is some hope that these sorts of errors will wash out. So, another reason to invest seriously in more field replication. Placing so much weight on one paper is also bad for scholarship.”
The reasons for relying on a diverse set of studies are several and familiar. Results might be localized, rather than general, with respect to units, treatments, outcomes, or context. Methods or protocols might be flawed. Results might be misreported or data misanalyzed. Uncertainty of estimates might be considerable. We all know this. It’s the reason that replication and transparency are so high on the academic agenda. It’s the reason the U.S. Department of Education maintains a “What Works” archive containing more than 10,000 studies, and the reason the Cochrane Collaboration reviews are so important.
As a result, it is simply not appropriate to demand or even wish for single studies to shape public policy. Even as someone who is relatively risk-acceptant (see: me writing this blog pre-tenure), I do not think the risks of short-term policy impact are worth it. When the demand for such impact is high, it will mean researchers encouraging ideas, policies, and interventions with a weak evidentiary basis. I would rather see uncertainty reduced within the academic literature first - where the consequences for real people, governments, companies, and NGOs are low. If researchers are to be credible policy advisors, they need strong empirical support for their claims, and there is almost no circumstance in which a single study provides that support.