Science, Social Media, and the Boundaries of Ethical Experimentation

29 Jun 2014

Facebook has been taking a lot of heat this week for a study its researchers published in Proceedings of the National Academy of Sciences. The study is summarized by its authors as follows:

We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.

The response from the media world has not been great. The lede at Slate read: “It intentionally manipulated users’ emotions without their knowledge.” The Chicago Tribune’s Scott Kleinberg was apparently shocked to learn that internet companies experiment: “This might be the reason Facebook doesn’t have a dislike button. The social network has been conducting secret experiments on us. Really.”

AV Club offered a softer view: “It’s a charming reminder that Facebook isn’t just the place you go to see pictures of your friends’ kids or your racist uncle’s latest rant against the government, it’s also an exciting research lab, with all of us as potential test subjects.” In short, Facebook conducted scientific research that was deemed sufficiently interesting to get it published in one of the world’s top academic journals.

I think it’s fantastic that the study has prompted a wide discussion of research ethics among academics and journalists working in both technology and science. And, I think it’s understandable that many people are experiencing a visceral negative reaction to the study. The idea of a company that doesn’t have a great track record on privacy secretly experimenting with its users’ emotions sounds bad.

But, if we step back from our gut reactions to this study, it’s important to realize that these kinds of studies are exceptionally common (we just don’t hear about them), they’re not illegal, and I’m not completely convinced that this study was particularly unethical. This post walks through some of the issues; The Atlantic also has a good overview of what’s at stake.

Publication

The only reason we know about this study is that it was published and, probably more importantly, published in one of the world’s most widely read and cited outlets, which has an impact factor of about 10 and, according to Google, an h-index of 217. I’ve gotten the sense that much of people’s ethical concern with this study ultimately pivots on the fact that the research was published. Publishing it raises questions about informed consent, randomization, privacy, deception, and debriefing of research subjects, which I discuss below. Without publication - that is, if this research had simply been used internally - it’s unclear whether it would have caused us much concern, even if we had somehow heard about it.

Indeed, we know that companies study their clients all the time. Companies change their advertising campaigns to try to get customers to buy new products. Fast food restaurants pilot new menu items in localized markets to see if they increase or otherwise change purchasing behavior. Credit card issuers raise credit limits and lower rates to try to change the amount of debt we accrue. The vitamin industry invents new, untested supplements to boost sales when previously issued products fall flat. Presumably all are researching, in some way, the effects of these changes in their behavior on the behavior of their customers. And they should; otherwise such efforts would be a waste of their time.

Thus, there is nothing out of the ordinary here in terms of business practice. Facebook’s study was routine, presumably meant in part to sell a different and hopefully better product (i.e., a modified version of the Facebook feed) to its users. Publishing the results means we have to consider other principally ethical (as opposed to legal) issues that might make us question whether this research should have been published, and indeed whether it should have been conducted at all. But, as the final point below will show, by using Facebook, we agree to allow the company to do almost anything with our data without our explicit permission. Publishing a research report easily falls within the boundaries of that agreement.

The Dependent Variable

I suspect a big part of our visceral reactions to this relates to the dependent variable. Facebook studied our emotions. Emotions are powerful things. They shape our cognition and behavior. They are the labels we apply to powerful sensations, like pain, anger, sadness, excitement, and glee. It might seem wrong for a company to play with our emotions. But isn’t this what marketing firms do all the time? Advertisements are meant to make us feel sad about our appearance, to make us excited about new products that can solve our problems, or to make us envision the happiness we’ll feel sipping margaritas on a Caribbean cruise. Companies play with our emotions all the time.

But isn’t this different, since Facebook used our friends to play with our emotions? How is that any different from real life, though? Facebook didn’t - as advertisers do - make up content to influence our emotions. They didn’t - as political campaigns do - use emotions to get us to vote differently from how we otherwise would. Instead, they subtly manipulated which subset of the content our friends produce we would see. This is something they do all the time as they try to figure out the “best” algorithm to generate the Facebook feed. And our friends influence our emotions already. When your friend gets angry at you, you might feel sad. When your friend gets married or has a baby, you might feel happy. These are events we already experience, and Facebook simply highlighted or de-emphasized some of them, from people we already know. Linda Skitka summarized this aptly: “FB manipulates what it feeds us all the time, so this study fits the ‘not different from people’s everyday experience’ criterion.”

There is a question about whether conducting a study that might make people sad presents “risks”. Risks are a formal element of behavioral research ethics, and something I return to below in my discussion of oversight and informed consent.

Manipulation

The other big issue here has to do with “manipulation”. The Slate article quoted above emphasized that Facebook “manipulated” its users. But, just like exposure to advertising, Facebook manipulates our lives all the time. And we should know that. By signing up for Facebook, we sign up for a mediated interaction with our connections. If we wanted to see our friends directly, we’d call, email, text, or chat with them, or see them in person. Facebook filters our experience of our social life, so it’s manipulated by definition. And the way that manipulation works changes. We know that too. When the Facebook feed was originally introduced, many were outraged that we would no longer have to (as in the glorious days of old, circa 2005) visit our friends’ individual walls. Facebook has kept changing the algorithm that creates the feed ever since.

This is no different from how Google changes its algorithm to put “the best” results first. Google mines our email, browser, and search history to create those customized results. It’s rather creepy, but that creepiness is part of the service we sign up for. If we sign up to use Google or Facebook, manipulation is part of the product we buy (without paying any money for it).

There is also a subtle but important feature of the manipulation. The manipulation did not remove content from the feed; it simply made particular posts probabilistically less likely to show up. Content was still shown, but at different probabilities than before. And no one was exposed to more negative or more positive content than they otherwise would have been; instead, they were shown less of one or the other. Thus, the treatments are actually “Negativity Reduced” and “Positivity Reduced” rather than “more happy” and “more sad,” for example.
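To make the “probabilistically less likely” point concrete, here is a minimal sketch of what down-weighting, rather than deleting, one emotional valence in a feed might look like. This is not the authors’ actual code; the function name, valence labels, and omission probability are illustrative assumptions.

```python
import random

def filter_feed(posts, reduce="negative", omit_prob=0.5, rng=None):
    """Probabilistically withhold posts of one emotional valence.

    posts: list of (text, valence) pairs; valence ("positive", "negative",
           or "neutral") is assumed to come from an upstream sentiment
           classifier (e.g., simple word counts).
    reduce: which valence to down-weight for this user on this feed load.
    omit_prob: chance that any single matching post is skipped.
    """
    rng = rng or random.Random()
    kept = []
    for text, valence in posts:
        # Matching posts are skipped with probability omit_prob; nothing
        # new is injected into the feed, content is only withheld.
        if valence == reduce and rng.random() < omit_prob:
            continue
        kept.append((text, valence))
    return kept

# Example: a "negativity reduced" feed load for one user.
feed = [("Great day!", "positive"), ("Ugh, Mondays.", "negative"),
        ("Posted a photo.", "neutral")]
print(filter_feed(feed, omit_prob=0.5, rng=random.Random(42)))
```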

Randomization

Even if we accept the manipulation, Facebook still randomized. That is, some people got the red pill and some people got the blue pill, and nobody knew which one they got. Randomization is prized as the key to causal inference. Indeed, I’m skeptical of the claims made in any research that doesn’t involve randomization. But is randomization ethical? Yes. If we want to study something (that is, if we buy the premise of the research project), then randomization is the only ethical way to distribute a limited resource (and we need to limit that resource, the treatment, in order to understand causation statistically). There’s also an important point about the sample size used in the study (roughly 700,000 people), which is a tiny fraction of Facebook’s 1-billion-plus user base.
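As a toy illustration of that combination - randomly sampling a small fraction of users and then randomly assigning each sampled user to a condition - here is a sketch; the sampling fraction, seed, and condition names are my assumptions rather than details from the paper.

```python
import random

def assign_conditions(user_ids, sample_fraction=0.001, seed=2014):
    """Randomly sample a small fraction of users, then randomly assign
    each sampled user to a condition with equal probability. Neither
    the users nor the analysts choose who sees which feed."""
    rng = random.Random(seed)
    conditions = ["negativity_reduced", "positivity_reduced", "control"]
    sampled = [uid for uid in user_ids if rng.random() < sample_fraction]
    return {uid: rng.choice(conditions) for uid in sampled}

# Example with fake user IDs standing in for the full user base.
assignments = assign_conditions(range(1_000_000))
print(len(assignments), "users assigned;", list(assignments.items())[:3])
```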

This experiment was interesting and worth conducting (in the eyes of its authors and, apparently, all of us), but Facebook did not expose everyone on the site to it. Would that have made things better or worse? I don’t think it would change anything ethically, but if everyone on the site had been in the experiment, it would fit much more closely with simply changing the algorithm for everyone. And what if the study had involved within-subjects comparisons (that is, giving the sad-feed people the happy feed later, and vice versa)? This probably would have been even more interesting (e.g., to see if the later happy feed made up for the effect of the earlier sad feed), and it would also have eliminated any issues with perceived lack of fairness in assigning some people to be made sad. It would have just been a nice evaluation of standard business practice. That said, I don’t think an A/B experiment, a design that has been the hot new thing in tech for years, is fundamentally unethical.

Oversight

The Facebook study was apparently approved by an institutional review board (IRB). IRBs govern the conduct of human subjects research in the United States under regulations laid out in 45 CFR 46. Generally, IRB approval (or the more lax oversight of IRB exemption) means that human subjects research has wide latitude to proceed. An IRB review process is a stamp of approval or, more accurately, a stamp of legal protection indicating that the institution overseeing the research will stand behind it, including any negative consequences thereof. IRBs are interesting in that they vary widely in what they allow and do not allow and in the amount of pressure they place on researchers to modify study procedures.

And they’re not a universal concept, nor are they necessarily seen as an essential part of social science research. I currently work in a country that does not have IRB-like institutions for social science research. The National Academy of Sciences, which happens to publish PNAS, has also published a report - by a committee coincidentally chaired by a PNAS editor, Susan Fiske - about the need for significant modifications to human subjects protections and IRB oversight in the social and behavioral sciences.

So, in this case, the IRB stamp of approval means that this study was ethically approved by the authors’ institution(s) and has legal protections. But, interestingly enough, this was an unnecessary step from a legal point of view. 45 CFR 46 only applies to government agencies and to organizations that receive funding from them. Private corporations have no legal obligation to comply with human subjects protections. So, it’s nice that this study’s authors decided to take the extra step, because they ultimately didn’t have to.

Some people on Twitter have raised the idea that Facebook should have its own IRB, or something similar. This would be similar to biomedical research companies that often have internal IRBs for monitoring medical trials. To me, that doesn’t really seem necessary, but is something they could certainly do if they decide to continue to conduct research without academic partners.

Privacy

Another, more gut-level reaction to this study relates to Facebook’s persistent problem of telling its users that it cares about privacy while doing little to actually protect it. Frankly, Facebook’s founder, Mark Zuckerberg, has repeatedly said that he doesn’t really believe in privacy and wants everyone to share more. We basically agree to that when we use Facebook; we know the ideology guiding the company’s behavior.

But is it right to publish research based on people’s behavior on Facebook? A big part of that question relates to privacy. Do users have a reasonable expectation that their content is private? If we don’t have a reasonable expectation of privacy, then our behavior is public, which means that not only is studying it ethically allowed, but it is explicitly exempted from any IRB oversight under the exemptions for research on existing, publicly available data (45 CFR 46 Sec. 46.101(b)(4)) and for the “observation of public behavior” (Sec. 46.101(b)(2)). Furthermore, even if we consider our behavior private, Facebook can still study it because it can be anonymized. Under the same part of the federal regulations, exemption from IRB oversight comes when data “are publicly available or if the information is recorded by the investigator in such a manner that subjects cannot be identified, directly or through identifiers linked to the subjects.”

So, if we expect our behavior on Facebook to be public, there’s nothing wrong with studying it, and, even if it’s private, Facebook can use it if identifying information is removed. But what about the intervention itself (i.e., the manipulation of the feed algorithm)? I would say this is a legal gray area and also an ethical one. If the study had been observational - that is, if the researchers had found out about the algorithm A/B test and then mined user data in quasi-experimental treatment and control groups - it would clearly be simple analysis of public, or anonymized private, data. There would be no legal or ethical concerns. Whether experiments fall under the heading of “Research involving the use of educational tests (cognitive, diagnostic, aptitude, achievement), survey procedures, interview procedures” (45 CFR 46 Sec. 46.101(b)(2)) is a good question, and one that I repeatedly faced when dealing with IRBs in the United States. But we have to remember that Facebook doesn’t have to comply with these specific regulations here; they simply need to follow general ethical principles in human subjects research.

Ethics

Ethical principles for human subjects research have a long history, rooted mostly in the medical tradition and in the abuses of prisoners during the Holocaust and in US government-sponsored medical studies (e.g., the Tuskegee syphilis experiment). These principles are codified in The Belmont Report, which I consider required reading for anyone working in the social sciences. (I spend an entire week on it, and on broader ethical questions, in my Experimental Methods Seminar.) Some people also point out that human subjects research should comply with the Declaration of Helsinki, a somewhat similar international standard. But the Helsinki Declaration is about medical research and, in my view, medical research and social/behavioral research merit rather fundamentally different ethical discussions. So, even if the Facebook study doesn’t have to follow 45 CFR 46, most scientists would argue that the study should comply in principle with The Belmont Report.

Belmont lays out three basic principles: “respect for persons,” “beneficence,” and “justice”. Belmont then applies these principles to three key study features: informed consent, assessment of risks and benefits, and selection of subjects. In the case of the Facebook study, I think these issues are relatively easy to tackle:

  1. Respect for persons primarily relates to risks and harm, as well as to informed consent. More significantly, this principle lays out explicit demands to protect vulnerable individuals and those incapable of consent (codified in 45 CFR 46 as prisoners, children, pregnant women, fetuses, and neonates). The other part of “respect for persons” relates to voluntary participation (an issue I take up in detail in the next section).
  2. Beneficence is straightforward. In short, we should not harm human subjects unless the benefits (individual or collective) outweigh our risk-weighted expectations of the amount of harm. In the Facebook study, some of the participants were possibly harmed (by feeling negative emotions) and further propagated those negative emotions onto their Facebook connections. Whether this harm outweighs the benefits is precisely the difficult question. Given that the study involved selectively withholding negative information and selectively withholding positive information (presumably in favor of neutral content), it is hard to see how harm was intended. But, of course, some harm did occur: some people felt sad. One could argue that no harm is ever allowed, but such a stringent rule would restrict much research that necessarily risks harm in order to provide a greater good. Here, I suspect the harm to these subjects is outweighed by our gain in knowledge. If Facebook uses this research to try to make its users feel better, there may be benefits to the whole of its user base (>1 billion people), and possibly to the users of other social networks, that outweigh the harm to a random subset of the much smaller pool of research subjects.
  3. Justice relates to, well, justice. This is probably the hardest of the three principles to wrestle with because of competing notions of justice. Arguably, randomization can be leaned on as the ultimate tool for justice. If the research participants were randomly selected and, among them, happy feed and sad feed people were randomly assigned, a certain amount of justice was enforced: depressed people weren’t disproportionately selected, no one was unfairly punished or unfairly benefited; everything was random. This could probably be debated more, but I think it is the least relevant of the three principles for this particular study.

Thus the most significant remaining question about the Facebook study relates to informed consent, which I address next.

Informed Consent

The final element of the Facebook study that seems to catch people’s attention is its lack of explicit, study-specific informed consent. Social scientists in the United States are used to the norm of providing research subjects with a very long boilerplate document and asking them to read and agree to it before participating in a study. My PhD institution, Northwestern University, supplies a bunch of template consent forms for every possible research scenario. These forms list, usually item by item, all of the 15 or so points covered by 45 CFR 46 Sec. 46.116.

Facebook did not supply research subjects with this kind of form (remember, it legally didn’t have to). Gizmodo characterized the visceral reaction to this study feature as: “there’s something a bit creepy about Facebook using nearly three quarters of a million regular users as psychological test subjects, without their ever knowing it.” The study authors defended that choice as follows:

it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research. (p.8789).

The relevant parts of the Data Use Policy state:

we may use the information we receive about you: … for internal operations, including troubleshooting, data analysis, testing, research and service improvement.

Granting us permission to use your information not only allows us to provide Facebook as it exists today, but it also allows us to provide you with innovative features and services we develop in the future that use the information we receive about you in new ways.

And here’s another key part of the Data Use Policy:

While you are allowing us to use the information we receive about you, you always own all of your information. Your trust is important to us, which is why we don’t share information we receive about you with others unless we have: received your permission; given you notice, such as by telling you about it in this policy; or removed your name and any other personally identifying information from it.

Note the “or” in that final clause. In essence, by agreeing to use Facebook, you’ve given Facebook the right to use your data however they want, assuming it is de-identified. Thus, we’ve already consented to Facebook using and publishing our data.

But does the Data Use Policy constitute “informed consent”? Of course, it does not look like the informed consent most social scientists are used to (but it doesn’t have to, because Facebook isn’t governed by 45 CFR 46). Thus, the only question is whether it is consistent, in principle, with The Belmont Report (or another comparable code of research ethics).

Informed consent is kind of an odd principle, in some ways. In academic research in the United States, the details of 45 CFR 46 have to be followed very closely. But these requirements are also flexible. For example, studies of voter mobilization (which send postcards, make phone calls, dispatch door-knockers, etc., to potential voters to try to influence them to vote) almost never involve informed consent. As an extreme example, Costas Panagopoulos paid citizens to vote without first obtaining their consent to participate in the study (which was itself nearly illegal to conduct). Most research using Twitter data does not involve informed consent, because use of Twitter is public behavior (and the research doesn’t involve experimental intervention). Thus, a study might be exempted from informed consent procedures if the outcome is public behavior (which, as we discussed above, Facebook activity might be).

Thus, the question is whether informed consent was necessary here or whether, as the authors argue, the broad Data Use Policy constitutes implicit informed consent. As I’ve alluded to, A/B testing is standard business practice in tech, and I think users should have a reasonable expectation that they will be experimented on even if that is not explicitly stated in Terms of Use or a Privacy Policy. Experimentation is how companies innovate and improve their services, so by using a service we implicitly consent to buy whatever version of the product the company is going to offer. And that product might change over time and be different for different people.

Thus I am not convinced Facebook needed to obtain explicit informed consent for this study. They manipulated something we know they already manipulate and then used the resulting data in accordance with their Data Use Policy. One suggestion is that it might be reasonable for Facebook, Google, and other tech firms to explicitly state that they will conduct research (including research that involves A/B testing or other interventions in the user experience). They probably haven’t considered that such work presents ethical questions (even if those ethical questions do not trespass into the legal matters typically covered by Terms of Use documents).

Some have argued that Facebook should have “debriefed” subjects, telling them that they were in a study and allowing them to opt out of participation post facto. This would be the norm in IRB-governed psychological research conducted by universities. Principles of informed consent dictate that consent can be revoked. For example, even if I initially agree to participate in a study, I can later decide I no longer want to participate. Most IRBs thus dictate that consent can be revoked and that continued participation cannot be coerced (e.g., a researcher cannot withhold subject compensation due to study non-completion). Facebook would likely argue that the Data Use Policy provides indefinite consent and that the only way to revoke that consent is by deleting one’s Facebook account.

This highlights the tension between research and business practice: in the tech world, if I want to withdraw my consent, I have to stop using the product entirely. So what we hit on here is the uncomfortable nexus of human subjects research principles (which were developed in response to medical malfeasance) and standard business practice in the tech world. Coverage of the Facebook study highlights the novel territory that both researchers and tech developers are entering. My view is that Facebook is on relatively safe ground in this case. But it’s good to see that we’re having a conversation about research principles here. I hope the conversation enriches everyone’s perspectives.



