Today the Supreme Court will hear oral arguments in Sorrell v. IMS Health. The case pits medical data giant IMS Health (and some other plaintiffs) against the state of Vermont, which restricted the distribution of certain “physician-identified” medical data unless the doctors who generated the data affirmatively permitted its distribution.* I have contributed to an amicus brief submitted on behalf of the New England Journal of Medicine regarding the case, and I agree with the views expressed by brief co-author David Orentlicher in his excellent article Prescription Data Mining and the Protection of Patients’ Interests. I think he, Sean Flynn, and Kevin Outterson have, in various venues, made a compelling case for Vermont’s restrictions. But it is easy to “miss the forest for the trees” in this complex case, and I want to make some points below about its stakes.**
Privacy Promotes Freedom of Expression
Privacy has repeatedly been subordinated to other, competing values. Priscilla Regan chronicles how efficiency has trumped privacy in U.S. legislative contexts. In campaign finance and citizen petition cases, democracy has trumped the right of donors and signers to keep their identities secret. Numerous tech law commentators chronicle a tension between privacy and innovation. And now Sorrell is billed as a case pitting privacy against the First Amendment.
There is an old tension between privacy and the First Amendment, best crystallized in Eugene Volokh’s effort to characterize privacy protections as the troubling right to stop others from speaking about you. Neil Richards has dissected the flaws in Volokh’s Lochneresque effort to reduce the complex societal dynamics of fair data practices to Hohfeldian trump cards held by individuals and corporations. Societies reasonably conclude that certain types of data shouldn’t influence certain types of decisions all the time. And courts have acquiesced, allowing much “of the vast universe of speech [to] remain untouched (and thus unprotected) by the First Amendment.”
No algorithm can decide what information, or access to information, is protected by the First Amendment. That’s a matter of values, and there are many normative foundations for protecting free expression, including the promotion of personal autonomy, democracy, and truth. An emerging field of scholarship has demonstrated that all those values are promoted by well-crafted privacy laws.
For example, Katherine Strandburg has called for First Amendment regulation of “relational surveillance,” including “attempts to use [traffic data] about communications to ferret out suspect groups and investigate their membership and structure”:
Despite the rising importance of digitally mediated association, current Fourth Amendment and statutory schemes provide only weak checks on government. The potential to chill association through overreaching relational surveillance is great. . . . [T]he First Amendment’s freedom of association guarantees can and do provide a proper framework for regulating relational surveillance and suggests how these guarantees might apply to particular forms of analysis of traffic data.
As Danielle Citron and I have documented, this kind of surveillance has already had troubling chilling effects for political groups on both left and right. Our co-blogger Dan Solove has also argued convincingly that “there are doctrinal, historical, and normative justifications for developing” First Amendment-based limits on the “countless searches and seizures involving people’s private papers, the books they read, the websites they surf, and the pen names they use when writing anonymously.” Marc Jonathan Blitz has explored the intersection of free speech and privacy values in Stanley v. Georgia, a case that guaranteed First Amendment protection for obscene materials “when read or viewed by a person in her own home.” Paul Schwartz paved the way for much of this work.
The “Privacy as a First Amendment Value” scholarship has so far focused on deterring undue state surveillance, and the casual observer of Sorrell might believe that the same concerns are not raised by IMS Health’s data collection. However, the state of Vermont is the very entity requiring collection of the prescription data, as Judge Debra Ann Livingston’s eloquent dissent (in the Second Circuit decision from which cert was granted) highlights:
Vermont’s law regulates the dissemination of confidential information—specifically, PI data—and the process by which it is collected and sold. Because section 17 [the challenged law] targets that process rather than . . . [publishing and promotion] itself, understanding the sequence of events section 17 regulates—that is, the process by which PI data travels from the prescription pad to the hands of a pharmaceutical detailer—“is crucial to understanding the statute’s legal status.”
Pursuant to Vermont law, every time a pharmacy fills a prescription within the state, it is required to collect certain information about the doctor, the patient, and the medication being prescribed. . . . Troubled by this sequence of events whereby otherwise confidential information ends up in the hands of pharmaceutical detailers . . . Vermont enacted its prescription confidentiality law. [citations omitted]
In other words, Vermont is trying to control a process of information creation that the state itself began. In his work on the new “information sharing environment” in the anti-terror field, Jon Michaels has shown that private data collection can be almost effortlessly merged with public files to monitor (and ultimately deter) “suspect” advocacy. Just as civil liberties groups have called for more careful and calibrated information sharing between homeland security forces and private data miners, there are compelling reasons to manage the private sector uses of medical records forced into being by state action.
But even if the state did not force pharmacies to keep these records, there would still be an important free expression rationale for allowing a state to keep them private. Vermont has many rural areas, and it is easy to imagine scenarios where a doctor treats only one or a few patients for sensitive medical conditions. Will a person in a small village hesitate to join a mental illness support group on Facebook once she is aware that a data miner knows there is only one person on psychotropic drugs in her town? The technological tools for matching digital records are staggering. State restrictions on the use of that data (or other forms of tracking) can be an important step toward giving individuals a chance to form and express opinions and affiliations in peace, without fearing an endlessly ramifying series of classifications made, and opportunities denied, by faceless and secretive data miners.
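The small-town scenario above is, at bottom, a point about uniqueness in small populations. A minimal sketch (all town names and drug classes below are invented for illustration) shows how even a dataset stripped of names can single out one resident:

```python
from collections import Counter

# Invented, name-free records: only a town and a drug class survive
# "de-identification."
records = [
    ("Burlington", "psychotropic"),
    ("Burlington", "psychotropic"),
    ("Burlington", "statin"),
    ("Burlington", "statin"),
    ("Smallville", "psychotropic"),
    ("Smallville", "statin"),
    ("Smallville", "statin"),
]

# Count how many records share each (town, drug_class) combination.
# A group of size 1 means the "anonymous" record corresponds to exactly
# one identifiable resident of that town.
group_sizes = Counter(records)
unique_groups = [group for group, n in group_sizes.items() if n == 1]
print(unique_groups)  # the lone psychotropic prescription in Smallville
```

This is the intuition behind k-anonymity-style critiques of de-identification: the data miner need not know a name in advance, because in a sparse rural population the attribute itself is the name.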
Tit for Tat for IMS
The secrecy of the data mining business itself should weigh heavily in the minds of the justices as they consider Sorrell. If the data miners win, privacy advocates should follow up the case by lobbying states to force data miners to disclose exactly how they maintain their databases, all the terms of their contracts with clients, and their business strategies. And if the companies quickly squelch such legislation with trade secrecy claims, then by the same logic they should respect individuals who conceive of themselves as businesses and count their medical data among the most important of their “trade secrets.”
The Reply Brief from Vermont calls out those challenging its prescriber data law for an opportunistic embrace of the “transparency” mantle:
[W]hile respondents and their amici claim to advocate “transparency,” the commercial trade of prescription data is anything but open. Pharmacies do not tell doctors their information is sold for marketing. Data vendors do not allow dissemination of their “proprietary” data.
The secrecy of the data mining business directly motivates state efforts to limit how much data it can gather. If the state cannot understand exactly how credit scoring companies rank and evaluate customers, it has an interest in preventing them from even gathering certain suspect data in order to avoid that data’s misuse. Similarly, in the medical context, the state has no idea what treatment certain communities will receive once physician-identified data about them is released. One legal expert recently warned that employers “may develop complex scoring algorithms based on electronic health records to determine which individuals are likely to be high-risk and high-cost workers.” What if the same sort of stigmatizing characterizations are raised to the community level, with marketers selling (possibly inaccurate or otherwise unvetted) aggregate characterizations of prescription drug use on a community-by-community basis? The state has a strong interest in delaying the dawn of a brave new world of medical record-based characterization until far more robust infrastructures assuring data accuracy and accountability are developed.
Individuals often do not realize the multiple paths medical data can take on its way into critical databases. Recently, contributors to the medical website PatientsLikeMe.com found that “Nielsen Co., [a] media-research firm . . . was ‘scraping,’ or copying, every single message off PatientsLikeMe’s private online forums.” Had the virtual break-in not been detected, health attributes connected to usernames (which, in turn, can often be linked to real identities) could have spread into numerous databases. All those harboring health data ought to carry some certified, regularly audited indication of its legitimate provenance, and data lacking such certification should not be allowed to persist. Until those types of protection are in place, it is in the state’s interest to tightly regulate the transfer of health data, much of which the state itself required to be created.
Balancing Interests in Free Expression
While Sorrell v. IMS Health is often characterized as a direct clash between privacy and the First Amendment, it is better characterized as a more complex struggle over the ethical conduct of commerce, medicine, and marketing. There are First Amendment values that favor Vermont’s enterprise, and others that support the efforts of IMS Health to gather physician-identified data. But only one side in the case is serious about constructing a balanced and thoughtful reconciliation of the interests of patients, physicians, pharma, and marketing stakeholders. Vermont’s statute may not be perfect, but it at least tries to promote that balance. A victory for the plaintiffs would only accelerate our current trend toward an information environment in which powerful corporations create unaccountable databases about individuals and their communities, while cloaking their own practices in trade secrecy.
* “Physician-identified” means that the personally identifiable information about patients is (supposed to be) stripped out of the data, and the data is only associated with particular anonymous patients of particular doctors. One of the key issues in the case is a factual question: how effective is de-identification? Orentlicher observes that, “While the patient’s name is not retrieved [during data aggregation], the data miner does assign a unique number to the patient so that future prescriptions for the patient can be analyzed together.” The NEJM brief notes that advances in computer science have compromised extant security techniques, “casting serious doubt on the power of anonymization” and lesser de-identification technologies. Paul Ohm, Broken Promises of Privacy: Responding to the Surprising Failures of Anonymization, 57 UCLA L. Rev. 1701 (2010).
**For the record, the question presented is “Whether a law that restricts access to information in nonpublic prescription drug records and affords prescribers the right to consent before their identifying information in prescription drug records is sold or used in marketing runs afoul of the First Amendment.”
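The first footnote’s point about stable patient numbers can be made concrete with a toy sketch. All names, identifiers, and fields below are invented for illustration; the aim is only to show how a stable pseudonym enables longitudinal linkage, and how a join on coarse quasi-identifiers against an outside roster can then attach a name:

```python
# Invented "de-identified" prescription records: no name, but a stable
# pseudonymous patient number plus coarse quasi-identifiers.
deidentified = [
    {"patient_no": "P-1017", "zip": "05672", "birth_year": 1961, "drug": "sertraline"},
    {"patient_no": "P-1017", "zip": "05672", "birth_year": 1961, "drug": "zolpidem"},
    {"patient_no": "P-2203", "zip": "05401", "birth_year": 1975, "drug": "atorvastatin"},
]

# An invented outside roster (voter rolls, marketing lists) with names and
# the same quasi-identifiers.
roster = [
    {"name": "Jane Doe", "zip": "05672", "birth_year": 1961},
    {"name": "John Roe", "zip": "05401", "birth_year": 1975},
]

# The stable number lets every prescription for one patient be linked
# into a longitudinal history, exactly as Orentlicher describes...
history = [r["drug"] for r in deidentified if r["patient_no"] == "P-1017"]

# ...and a join on quasi-identifiers can attach a name whenever the
# combination is unique in the roster.
def reidentify(record, roster):
    matches = [p["name"] for p in roster
               if p["zip"] == record["zip"] and p["birth_year"] == record["birth_year"]]
    return matches[0] if len(matches) == 1 else None

name = reidentify(deidentified[0], roster)
print(history, name)
```

Real linkage attacks are of course more sophisticated, but the mechanism sketched here is the one the Ohm article and the NEJM brief warn about: removing the name does not remove the person.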