what the Facebook mood experiment says about current research ethics

(Washington, DC) Our ethical rules and procedures now fit the actual practices of research badly: they burden some inquiries that should be treated as free expression while allowing other studies to do real damage without any oversight at all. The Facebook “mood experiment” exemplifies these problems.

The case is well known, but I will summarize. Advised by a small group of academic researchers, including Cornell professor Jeffrey Hancock, Facebook experimented by changing the algorithms that select posts for users’ newsfeeds so that some users saw more happy material, and others more sad material, than they would have seen otherwise. It turned out that seeing happy stories led people to post more happy content of their own (contrary to some previous findings suggesting that others’ happy news makes us feel resentful). The Cornell University Institutional Review Board (IRB), which is charged with pre-reviewing “research,” did not review this study because the professors were deemed insufficiently involved: for example, they would not see the users’ data. Hancock et al. published the results, prompting an international outcry. Both the scholars and Facebook were denounced (and the former even threatened) for manipulating emotions without consent or disclosure.

I believe that the scholars were involved in “research” and so should have been reviewed by Cornell’s IRB. Given current principles of research ethics (as I understand them), the IRB should have required more disclosure and consent than Facebook actually provided. (But see a contrary argument here.) The key point is that users were influenced by the experimental manipulation: to a very small degree, but the magnitude of the impact could not be known in advance, and it was not actually zero. People were affected without being asked to participate or even being told afterward what had been done to them. The scholars should have made sure that research subjects gave consent; failing that, they should have dropped their association with Facebook.

But I also believe that IRB rules and procedures poorly fit the realities of research today.

On one hand, I am concerned about some over-regulation by IRBs. I start with the presumption that when we ask adults questions or observe them and publish our thoughts, that is an exercise of free speech protected by the First Amendment. IRB review of a research study that involves asking questions seems akin to prior censorship of a newspaper. In both cases, the writer could violate rights or laws, but then the affected parties should seek legal remedies. The IRB should not pre-review research that merely involves talking to or watching adults and writing what one observes.*

I realize that academic research based on mere conversation or observation can be harmful. Consider the “super-predator” theory of violent crime, which led to terrible social policies. But the problem with that research was its conclusion, not its method. An IRB has no purview over conclusions (or premises, or ideologies). We must respond to bad ideas with counter-arguments, not with prior censorship.

By the way, I have no complaint about the actual oversight provided by our own very capable and efficient IRB, which approves about a dozen of my team’s studies each year. My point is rather an abstract, principled one about the right to ask questions and to write whatever one concludes.

On the other hand, manipulating people without their consent is problematic, and that is happening constantly and pervasively in the age of Big Social Science, microtargeting, and “nudges.” When academics experiment on people, they are generally subject to prior review and tough rules. But most social experiments are not done by academics nowadays. If Hancock et al. had chosen to steer clear of the Facebook study, Facebook might well have gone ahead anyway, with no review or scrutiny whatsoever.

One might argue that professors should be regulated more than companies are, because the former receive federal support and may have tenure, which protects them even if they act badly. But I am more worried about companies than about professors, because: 1) companies also frequently receive government support; 2) they may conduct highly invasive experiments without even disclosing the results, whereas professors like to publish what they find; and 3) some companies have enormous power over their customers. For example, quitting Facebook over an ethical issue would impose a steep cost in missed opportunities to communicate. A network’s value is said to grow with the square of its number of users (Metcalfe’s law), which implies that you cannot simply decline to use an incumbent network that has more than a billion users; the rough arithmetic below suggests why. Agreeing to its “terms and conditions” is not exactly voluntary.
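To make the arithmetic concrete, here is a back-of-the-envelope sketch. Treat Metcalfe’s law as a rough heuristic, so a network with n users has value V(n) proportional to n squared; the numbers are purely illustrative. Comparing an incumbent with a billion users to a hypothetical rival with a million users:

\[
\frac{V(10^9)}{V(10^6)} \;\approx\; \left(\frac{10^9}{10^6}\right)^{2} \;=\; 10^{6}
\]

On this heuristic, the incumbent is on the order of a million times more valuable to join than the rival, which is why “just use another network” is not a live option for most people.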

Philosophically, I would favor removing IRB review of research unless the research involves a tangible impact on subjects, while regulating corporate research that involves experimental manipulation so that disclosure and consent are always required. I am not sure whether the latter could be done effectively, fairly, and efficiently, and I am not holding my breath for anyone even to try.

*Notes: 1) I am not arguing that IRB review is literally unconstitutional. The IRB’s legally legitimate authority flows from contracts between the university and the government and between the university and its employees. My point is that First Amendment values ought to be honored. 2) When academics pay research subjects, that creates a financial relationship that the university should probably oversee on ethical grounds. 3) I am not sure about minors. The First Amendment argument still applies when subjects are minors, yet there seems to be a case for the university to protect human subjects who are under 18.