Research “impact” is an idea familiar to anyone working in high-accountability academic environments. For example, the UK’s Research Excellence Framework (REF) requires academic departments to demonstrate, through case study methods, that research produced by some fraction of their researchers had an impact on real life outside of classrooms, academic conferences, and peer-reviewed journals. This typically calls for demonstrations of changes in policy, regulation, or professional practice by governments, corporations, or NGOs. These demands ask researchers to search for and claim impacts that may not exist or that perhaps ought not to exist. Impact may be the preservation of the status quo, or it may be further research that itself generates more direct changes in policy or practice. More importantly, calls for measuring impact leave no room for what is plainly the most important impact of research: the research-informed education of millions of university graduates.
The call for generating “pathways to impact” has some value: it invites researchers to consider who (aside from themselves) might care about the findings of their research. But expecting researchers’ work to have near-immediate impact makes three problematic assumptions: that research matters only when it changes something beyond research itself; that important research is research that recommends a change in policy or practice; and that the impact of research can itself be quickly and credibly inferred.
With respect to the first assumption, scientists know that some of our research matters only because it opens up pathways for new research or sets foundations for other investigations. This is a fundamental component of science. The impact of that research is that it enables other research, which in turn might generate the kinds of impacts of concern to funders and governments. The demand for impact on non-research practices focuses scientific attention on short-term changes rather than on work that might set the foundation for long-term research trajectories.
The social sciences seem particularly concerned with, or particularly subjected to, the impact agenda. Yet other fields rarely hold themselves to this standard in the same way. Geneticists may be motivated by overarching goals (e.g., identifying causes of and cures for cancer) but recognize that their work may only be distantly related to any actual diagnostic procedure or treatment generated in the short term. Physicists searching for gravitational waves or exoplanets are interested in fundamental questions about the workings of the universe, but they know that their work does not necessarily change how anyone lives their lives. Social scientists similarly engage in basic research of this kind - in service to overarching questions about societal and political well-being - but we seem uniquely concerned that even our basic research demonstrate immediate impact. I think it is valuable to remember that, as scientists, our work need not be held to a different standard than science in general.
The second assumption, I think, is more troubling because it presumes that important work is research that changes policy or practice. Yet if research is objectively executed, it must accept any outcome - that a possible relationship is positive, negative, or nonexistent. If research is only impactful when it suggests a change in practice, this is an implicit demand for confirmation bias (i.e., seeking evidence to support predetermined research conclusions) and for a form of publication bias that conflates clarity of inference with potential for impact. As we know, research practice is subject - at nearly every level - to publication biases, where researchers, editors, reviewers, and journalists favor claims of relationships or effects over claims of no relationship or effect. Meta-analytic reviews thus often show that published findings deviate considerably from what the true relationship, effect, or pattern seems to be. Bad research - research that overstates findings, claims results that do not exist, and so forth - thus precedes a longer-term and more cautious realization that relationships are often smaller and less interesting than early research suggests. Demands for impact feel, at times, eerily similar to this process of publication bias: research is valuable if it is impactful, impactful research changes policy, so valuable research is that which finds evidence to support a change in policy, and we only later learn that the research we acted upon was an inaccurate portrayal of true relationships. Eagerness for impact may make policy and practice worse.
The third and final assumption is particularly troubling, in part because it is so obvious. As a social scientist who has focused almost exclusively on the question of how to infer the effect(s) of some intervention, I have realized how challenging that kind of inference is even in the heavily controlled settings of a social science laboratory, let alone in the field. The claim that we can quickly infer the impact of research throws all of this methodological knowledge away in order to engage - again - in a confirmatory search for impact. Why we allow evidence of research impact to be held to a lower methodological standard than the research itself should trouble us more than it does.
Finally, I want to conclude by arguing that calls for demonstrated impact ask us to ignore the most direct, most obvious, and most wide-reaching impact of our research: the impact of research on the students that we and other educators train every day. If we believe - as we ought - that a higher education is a research-informed education, then research has direct, tangible impacts each and every day in thousands of classrooms all over the world. Students who read or learn secondhand about research develop knowledge and skills that will shape their lives and their work indefinitely. For example, if we want to change bureaucratic practice, what better way than training future bureaucrats in the latest research? If we want to improve polling, what better way than training future pollsters in the latest research? If we want to improve journalism, what better way than training future journalists in the latest research? This notion of education as fulfilling a public mission lies at the core of, for example, the American land-grant university system.
Higher education, as research-informed teaching and learning, generates enormous positive externalities through the impact that research has on students and that those students have on their friends, families, workplaces, and so forth. These impacts will almost necessarily exceed the impact of any social media, blogging, government advice, or other form of knowledge exchange. If we lose sight of the core mission of universities as places that generate impact through teaching, then I fear we will distract researchers both from doing important, objective research and from using that research to change the lives of students and, thereby, the world.