Computational innovation may improve health care by creating stores of data vastly superior to those used by traditional medical research. But before patients and providers “buy in,” they need to know that medical privacy will be respected. We’re a long way from assuring that, but new ideas about the proper distribution and control of data might help build confidence in the system.
William Pewen’s post “Breach Notice: The Struggle for Medical Records Security Continues” is an excellent rundown of recent controversies in the field of electronic medical records (EMR) and health information technology (HIT). As he notes,
Many in Washington have the view that the Health Insurance Portability and Accountability Act (HIPAA) functions as a protective regulatory mechanism in medicine, yet its implementation actually opened the door to compromising the principle of research consent, and in fact codified the use of personal medical data in a wide range of business practices under the guise of permitted “health care operations.” Many patients are not presented with a HIPAA notice but instead are asked to sign a combined notice and waiver that adds consents for a variety of business activities designed to benefit the provider, not the patient. In this climate, patients have been outraged to receive solicitations for purchases ranging from drugs to burial plots, while at the same time receiving care which is too often uncoordinated and unsafe. It is no wonder that many Americans take a circumspect view of health IT.
Privacy law’s consent paradigm means that, generally speaking, data dissemination is not deemed an invasion of privacy if it is consented to. The consent paradigm requires individuals to decide, at any given time, whether they wish to protect their privacy. Some of the brightest minds in cyberlaw have focused on innovation designed to enable such self-protection. For instance, interdisciplinary research groups have proposed “personal data vaults” to manage the emanations of sensor networks. Jonathan Zittrain’s article on “privication” proposed that the same technologies used by copyright holders to monitor or stop dissemination of works could be adopted by patients concerned about the unauthorized spread of health information.
If individuals had enough time to manage their personal data the way they manage their checkbooks and gardens, perhaps the consent paradigm would be a good foundation for addressing public concerns about privacy. If applicants could easily bargain with would-be employers over privacy, or patients with hospitals, perhaps we could rely on them to protect their interests. But actual occurrences of such acts of self-assertion and self-protection are rare. Given the frequently abstract benefits that privacy and reputational integrity afford, they are often traded away for competitive economic advantage. This process further erodes societal expectations of privacy.
A collective commitment to privacy is far more valuable than a private, transactional approach that all but guarantees a race to the bottom. If such a collective commitment does not materialize, record systems will only deserve trust if they become as transparent as the patients and research subjects they profile. Given corporate assertion of trade secrecy (and even privacy rights), reciprocal transparency will not be easy to achieve. Nevertheless, repeated breaches, fraud, and data meltdowns in the US should provoke an alliance of socially responsible researchers to lobby the US government to set minimal standards of reciprocal transparency and auditing. Consumers can only trust innovators if they can understand what is being done with data. As we become “transparent citizens” (as Joel Reidenberg puts it), we should demand that the corporate, university, and governmental authors of that trend reciprocate, and become more open about the data they gather.
Fortunately, as a recent presentation by Deborah Peel reminded me, there is significant audit authority built into the recent HITECH Act which may curb some abuses. Audits will become increasingly important as a “wild west” of health data is mined by scrapers, marketers, and other data brokers.
Consider, for instance, the following scenario: contributors to the medical website PatientsLikeMe.com found that “Nielsen Co., [a] media-research firm . . . was ‘scraping,’ or copying, every single message off PatientsLikeMe’s private online forums.” Had the virtual break-in not been detected, health attributes connected to usernames (which, in turn, can often be linked to real identities) could have spread into numerous databases. A reciprocal transparency paradigm would require all those harboring health data to hold some certified indication of its legitimate provenance; data lacking such certification would not be allowed to persist.
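To make the provenance idea concrete, here is a minimal sketch of what a certification check might look like. It is purely illustrative: the function names, the record format, and the use of a shared HMAC key held by a hypothetical certifying authority are all my assumptions, not any existing standard. The point is simply that a signed tag can bind a record to a named, legitimate source, so that a database holding uncertified (or tampered-with) data is detectable on audit.

```python
import hmac
import hashlib
import json

# Hypothetical: a signing key held by a certifying authority, not by data holders.
REGISTRY_KEY = b"key-held-by-certifying-authority"

def _record_hash(record: dict) -> str:
    # Canonicalize the record before hashing so the tag is reproducible.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def certify_provenance(record: dict, source: str) -> dict:
    """Attach a provenance tag naming the data's legitimate source,
    signed so downstream holders cannot forge it."""
    tag = {"source": source, "record_hash": _record_hash(record)}
    payload = json.dumps(tag, sort_keys=True).encode()
    tag["signature"] = hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()
    return tag

def verify_provenance(record: dict, tag: dict) -> bool:
    """An auditor checks that the record matches the tag and that
    the tag was really issued by the certifying authority."""
    claimed = {k: v for k, v in tag.items() if k != "signature"}
    if claimed.get("record_hash") != _record_hash(record):
        return False  # record altered or tag belongs to different data
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag.get("signature", ""))

record = {"username": "patient123", "condition": "ALS"}
tag = certify_provenance(record, source="PatientsLikeMe (consented research use)")
print(verify_provenance(record, tag))      # legitimate copy verifies
altered = {"username": "patient123", "condition": "MS"}
print(verify_provenance(altered, tag))     # scraped-and-altered copy fails
```

A real scheme would use public-key signatures rather than a shared secret, so that anyone could verify a tag without being able to mint one; the HMAC version above is just the shortest way to show the audit logic.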
Unforeseen spread of inaccurate or inappropriate health data is not just a problem for those who want to avoid getting solicitations for burial plots after a sensitive appointment. Given law enforcement exceptions to medical privacy laws and regulations, it should come as little surprise that the government claims that a 2005 law authorizes it to monitor and record all prescription drug use by all citizens via so-called “Prescription Drug Monitoring Programs.” Such programs may just be the tip of an iceberg of new domestic intelligence programs that rely on private companies to act as “big brother’s little helpers.”
Whenever health data is fed into an evaluative profile of an individual, there should be safeguards in place to assure that the data is accurate, and that the resulting profile is, if at all possible, not used to harm or disadvantage the individual. Without assurances like these, we can count on continued resistance to the development of health data infrastructures.