Filed under: Bioethics, Clinical Research, Drugs & Devices, Intellectual Property, Privacy, Transparency
We are very pleased to welcome Dana Darst, a Master of Science in Jurisprudence candidate in Health Law and Intellectual Property Law here at Seton Hall, to the blog today.
In recent years, biopharmaceutical research and development organizations have established partnerships with academic institutions and start-up biotechnology companies to drive external innovation, complementary to their own in-house advancements in life sciences. Companies such as Pfizer, Johnson & Johnson and Bayer have opened innovation centers of excellence globally, in start-up biotechnology- and academia-rich hubs such as Boston, San Francisco and Shanghai, as part of an effort to accelerate new product development and commercialization.
More recently, the industry has begun to drive innovation by sharing clinical study protocols and patient-level treatment information at the request of qualified external researchers. An objective of this undertaking is to enhance public health through data transparency. Sharing may increase efficiency by helping researchers avoid expending resources on new studies when relevant clinical outcomes data already exist from previous ones. In addition, it may reduce risks for future research subjects.
On January 30, 2014, Janssen Research and Development, LLC (a Johnson & Johnson subsidiary) and The Yale School of Medicine’s Open Data Access Project (YODA) announced a pioneering partnership model for sharing clinical trial data. Under their agreement, YODA will review all clinical trial data requests on Janssen’s behalf as an independent third party. In a press release, J&J’s Chief Medical Officer, Joanne Waldstreicher, MD, stated that their collaboration will “[e]nsure that each and every request for access to [their] pharmaceutical clinical data is reviewed objectively and independently.” She further stated that “[t]his represents a new standard for responsible, independent clinical data sharing.” Other biopharmaceutical companies sharing clinical trial data do so by reviewing data requests directly as they are received from qualified external researchers. Also, some have voluntarily adopted the Principles For Responsible Clinical Trial Data Sharing, jointly published by the Pharmaceutical Research and Manufacturers of America (PhRMA) and the European Federation of Pharmaceutical Industries and Associations (EFPIA), and implemented on January 1, 2014.
Under the PhRMA-EFPIA guidelines, “Biopharmaceutical companies are committed to enhancing public health through responsible sharing of clinical trial data in a manner that is consistent with the following Principles”: (1) Safeguarding the privacy of patients, (2) Respecting the integrity of national regulatory systems, and (3) Maintaining incentives for biomedical research. The guidelines provide a framework for life sciences companies to request patient-level data and study protocols. Additionally, the U.S. Food and Drug Administration (FDA) and European Medicines Agency (EMA) have proposed policies to increase transparency of clinical trial data. As Carl Coleman discussed here, both agencies have addressed the importance of patient and study de-identification.
So, what potential implications might clinical trial data sharing have for biopharmaceutical innovation? The Institute of Medicine (IOM) recently published a report, Discussion Framework for Clinical Trial Data Sharing: Guiding Principles, Elements and Activities, as a framework for further discussion and public comment on how data from clinical trials might best be shared. It offers four guiding principles for consideration: (1) respecting individual participants; (2) maximizing benefits to clinical trial participants and to society, while minimizing harm; (3) increasing public trust in clinical trials; and (4) sharing clinical trial data in a manner that enhances fairness.
The report does not provide conclusions or recommendations; those are expected to be developed and published approximately 17 months after the project’s start in August 2013. However, the IOM Committee, which includes a diverse group of representatives from government, charitable foundations, academia, healthcare institutions and private industry, has posed several potential implications of trial data sharing for consideration.
From a societal perspective, sharing clinical trial data could increase accuracy, reduce bias and provide a more comprehensive picture of a drug’s benefits and risks. In addition, data sharing could potentially improve efficiency and safety of the clinical research process. For example, it could reduce potential duplication of efforts and costs of future studies, and help to avoid unnecessary harm to patients. Furthermore, it may provide additional information to healthcare professionals and patients that can be utilized to make better informed decisions.
Alternatively, data sharing could lead to invasions of patient privacy or breaches of confidentiality, which may ultimately harm participants, either socially or economically. Moreover, it could reduce incentives for study sponsors to invest their limited resources (e.g. time, budget, FTEs) on additional trials, which could ultimately inhibit innovation. As the IOM Committee explains:
For example, data sharing might allow confidential commercial information (CCI) to be discerned from the data. Competitors might use shared data to seek regulatory approval of competing products in countries that do not recognize data exclusivity periods or that do not grant patents for certain types of research.
Sharing clinical study protocols and patient-level trial data could have benefits for society and healthcare that outweigh the risks. The IOM plans to include an analysis of risks and benefits in its final report. As academic, life sciences and start-up biotech entities increasingly share industry-driven trial data, a prudent approach should be taken to protect the confidentiality and intellectual property of all stakeholders involved. Specifically, data should be adequately redacted prior to disclosure to eliminate confidential information, as recommended by the EMA and the U.S. FDA. Biopharmaceutical innovation is driven by authors and inventors who rely on exclusive, protected rights granted for limited times. As the IOM Committee works toward establishing guidelines, adherence to its guiding principles in addressing these protected rights will be vital.
To obtain additional information or provide public comments on the IOM project, visit their website at: http://www8.nationalacademies.org/cp/projectview.aspx?key=49578.
What Will the New York Court of Appeals Protect: Your Medical Records from Disclosure, or Hospitals from Liability?
A physician’s duty of confidentiality is based on an individual’s right to privacy and on the general principle that people seeking medical help should not be hindered or inhibited by fear that their medical conditions will become known to others. Such assurance is necessary in order for the doctor to provide proper treatment.
The AMA’s Code of Medical Ethics states that information disclosed to a physician during the course of the patient-physician relationship is confidential. The Hippocratic Oath states: “I will respect the privacy of my patients, for their problems are not disclosed to me that the world may know.”
Neither the AMA’s ethical guidelines nor the Hippocratic Oath, however, is legally binding.
So to what extent can we really trust that our private information will not be shared with the rest of the world? Under the existing law, such assurance seems quite vague.
HIPAA, for example, prohibits healthcare providers from disclosing personal health information. Healthcare providers seem to strictly adhere to the Act (sometimes overzealously). Similarly, New York Public Health Law § 4410 imposes a duty upon healthcare providers to maintain confidentiality of patient treatment records.
These statutes, however, do not create a private cause of action, which means that if your health information was improperly disclosed to third parties, you cannot go to court and sue a healthcare provider for violating the statute. Both statutes mainly provide a standard under which doctors and hospitals should operate.
Therefore, if you are in New York and your medical information has been disclosed, your only remedy is a common-law claim for breach of the fiduciary duty of confidentiality, which springs from the implied covenant of trust and confidence inherent in the physician-patient relationship, and the breach of which is actionable as a tort.
But in an unexpected twist, which the Court of Appeals is scheduled to address, these common-law protections could soon be effectively eviscerated.
Under the common-law doctrine of respondeat superior, an employer is vicariously liable for the actions of its employees, but only when they commit a negligent act within the scope of their employment and in furtherance of the employer’s business. Thus, if a hospital employee accidentally sends your medical records to your neighbor, the hospital will be liable for that act. But what would happen if a nurse looked into your medical chart, learned that you have an STD, and called your girlfriend to inform her about it? In that scenario the nurse does not act within the scope of her employment; she commits a willful wrong motivated by personal interest. Will the hospital be liable, and should it be?
The Appellate Division, Third Department, recognized that reliance on the traditional doctrine of respondeat superior in such a case would render the protection of medical information a nullity, because in most cases the wrongful disclosure would be made outside the employee’s scope of employment. In Doe v. Cmty. Health Plan-Kaiser Corp., 709 N.Y.S.2d 215 (3d Dep’t 2000), the court explained that a corporation always acts through its agents, servants and employees, and should be directly responsible if a patient’s confidences are breached.
The Second Circuit recently declined to follow this lone precedent in Doe v. Guthrie Clinic, Ltd. and, on March 25, 2013, certified the issue to the New York Court of Appeals.
If the Court of Appeals rules that the corporation should be liable, the ruling will dramatically expand the doctrine of respondeat superior. Such an expansion may very well be justified in light of the high sensitivity of medical information, but when the law creates one exception there is always the risk of going down the slippery slope.
If the Court, on the other hand, adheres to the traditional doctrine, the protections afforded to patients’ healthcare information will continue to be limited, and if our medical information that the hospital is under a duty to protect appears on Facebook, the hospital may simply wash its hands of any responsibility.
Today the Supreme Court will hear oral arguments in IMS Health v. Sorrell. The case pits medical data giant IMS Health (and some other plaintiffs) against the state of Vermont, which restricted the distribution of certain “physician-identified” medical data if the doctors who generated the data failed to affirmatively permit its distribution.* I have contributed to an amicus brief submitted on behalf of the New England Journal of Medicine regarding the case, and I agree with the views expressed by brief co-author David Orentlicher in his excellent article Prescription Data Mining and the Protection of Patients’ Interests. I think he, Sean Flynn, and Kevin Outterson have, in various venues, made a compelling case for Vermont’s restrictions. But I think it is easy to “miss the forest for the trees” in this complex case, and want to make some points below about its stakes.**
Privacy Promotes Freedom of Expression
Privacy has repeatedly been subordinated to other, competing values. Priscilla Regan chronicles how efficiency has trumped privacy in U.S. legislative contexts. In campaign finance and citizen petition cases, democracy has trumped the right of donors and signers to keep their identities secret. Numerous tech law commentators chronicle a tension between privacy and innovation. And now Sorrell is billed as a case pitting privacy against the First Amendment.
In an article entitled “Monitoring America,” Dana Priest and William Arkin describe an extraordinary pattern of governmental surveillance. To be sure, in the wake of the attacks of 9/11, there are important reasons to increase the government’s ability to understand threats to order. However, the persistence, replicability, and searchability of the databases now being compiled for intelligence purposes raise very difficult questions about the use and abuse of profiles, particularly in cases where health data informs the classification of individuals as threats.
First, a little background. We traditionally think of law enforcement as needing some kind of probable cause to ground or justify the pursuit of an investigation. However, with the rise of the new Information Sharing Environment (often enacted by fusion centers, which provide one-stop shopping for access to data), a much broader set of law enforcement prerogatives is emerging. Fusion centers have promoted a domestic intelligence apparatus, which is designed not merely to solve crimes but also to generate a wide range of knowledge which could lead to the deterrence and detection of “all threats, all crimes, all hazards.”
The Department of Homeland Security has taken a number of innovative steps to deputize monitoring of individuals, asking personnel ranging from local law enforcement to cable repairmen to hotel cleaners to be on the alert for suspicious activity. Once such activity is detected, the detector can in some cases file a persistent Suspicious Activity Report. These SARs are entered into an FBI database, and quite possibly inform many other counterterror, intelligence, and even private sector initiatives. Arkin & Priest’s story gives a sample Suspicious Activity Report, and speculates about how its creation may affect the object of the profile:
The FBI is building a vast repository controlled by people who work in a top-secret vault on the fourth floor of the J. Edgar Hoover FBI Building in Washington. This one stores the profiles of tens of thousands of Americans and legal residents who are not accused of any crime. What they have done is appear to be acting suspiciously to a town sheriff, a traffic cop or even a neighbor.
[For an example of what might go in the database, consider] Suspicious Activity Report N03821 says a local law enforcement officer observed “a suspicious subject . . . taking photographs of the Orange County Sheriff Department Fire Boat and the Balboa Ferry with a cellular phone camera.” The confidential report, marked “For Official Use Only,” noted that the subject next made a phone call, walked to his car and returned five minutes later to take more pictures. He was then met by another person, both of whom stood and “observed the boat traffic in the harbor.” Next another adult with two small children joined them, and then they all boarded the ferry and crossed the channel.
All of this information was forwarded to the Los Angeles fusion center for further investigation after the local officer ran information about the vehicle and its owner through several crime databases and found nothing. Authorities would not say what happened to it from there, but there are several paths a suspicious activity report can take:
At the fusion center, an officer would decide to either dismiss the suspicious activity as harmless or forward the report to the nearest FBI terrorism unit for further investigation. At that unit, it would immediately be entered into the Guardian database, at which point one of three things could happen:
The FBI could collect more information, find no connection to terrorism and mark the file closed, though leaving it in the database. It could find a possible connection and turn it into a full-fledged case. Or, as most often happens, it could make no specific determination, which would mean that Suspicious Activity Report N03821 would sit in limbo for as long as five years, during which time many other pieces of information about the man photographing a boat on a Sunday morning could be added to his file[.]
[That data includes] employment, financial and residential histories; multiple phone numbers; audio files; video from the dashboard-mounted camera in the police cruiser at the harbor where he took pictures; and anything else in government or commercial databases “that adds value,” as the FBI agent in charge of the database described it. That could soon include biometric data, if it existed; the FBI is working on a way to attach such information to files. Meanwhile, the bureau will also soon have software that allows local agencies to map all suspicious incidents in their jurisdiction.
Given the expansive reservoirs of data already accessible to fusion centers, I would not be surprised if they took the position that health records “add value” to the data gathering. Civil libertarians can object to many types of data gathering, but for purposes of this post, I would like to focus on healthcare data. First, to what extent can a health condition itself give rise to a Suspicious Activity Report? Secondly, are there any concerted efforts to deputize medical personnel to report on suspicious activity? Finally, and I believe most importantly, how is the vast store of healthcare data presently associated with individuals utilized by the data mining programs of the surveillance state?
We daily learn of troubling data gathering practices online. For example, Arvind Narayanan has described rather indiscriminate data gathering by third parties:
The Facebook “like” button is a prominent . . . example of third-party tracking not directly related to behavioral advertising. . . . Facebook can keep track of all the pages you visit that incorporate the button, whether or not you click it. Did you know, for example, that the UK National Health Services website has the like button, among other trackers, on all their disease pages?
One need only visit the Wall Street Journal’s recent series on privacy to realize that all manner of health-related data can be generated about an individual with little to no restrictions imposed by HIPAA or effectively enforced by the FTC. To take one example, consider the scraping (copying) of data at a site called PatientsLikeMe:
At 1 a.m. on May 7, the website PatientsLikeMe.com noticed suspicious activity on its “Mood” discussion board. There, people exchange highly personal stories about their emotional disorders, ranging from bipolar disease to a desire to cut themselves. It was a break-in. A new member of the site, using sophisticated software, was “scraping,” or copying, every single message off PatientsLikeMe’s private online forums.
Who knows how many incidents like this go unreported each year? Finally, the government itself is keeping a record of prescription drug use, which apparently was used after the Virginia Tech shooting. Law enforcement exceptions to HIPAA (and, presumably, HITECH) may give an official imprimatur for similar activities even if they involve “covered entities.”
The clash of intelligence prerogatives and health privacy always raises difficult issues. For now, I would just like to make one claim about the need for the government to be forthright about whether it is collecting health care data while profiling citizens. Such data gathering should not be what David Pozen calls a “deep secret;” that is, citizens should not be “in the dark about the fact that they are being kept in the dark.” Rather, we need to understand whether this very personal and important data is being commandeered to fight an “enemy within.”
There are broader principles for fair disclosure of the workings of the surveillance state. First, people are all too eager to sign up for new health “apps” and affinity groups without having any sense of how these activities and affiliations can affect their future. There is still a lazy public/private distinction affecting far too much of consumer conduct; I hear so-called internet experts wondering why anyone would worry about data stored by a private company because “they’re not the government.” Arkin & Priest have consistently shown that the public/private distinction is evanescent at best, a confounding development in social affairs that leaves libertarians sounding like communists.
Julie Cohen’s recent article in Social Research observes that there is a much larger political economy of surveillance that has accelerated both data gathering and profiling:
Devaluation of privacy is bound up with our political economy and with our public discourse about information policy in important ways that have little or nothing to do with official conduct. . . . Flows of data are facilitated by corporate data brokers like ChoicePoint, Experian, and Acxiom. To help companies (and governments) make the most of the information they purchase, an industry devoted to “data mining” and “behavioral advertising” has arisen; firms in this industry compete with one another to develop more profitable methods of sorting and classifying individual consumers.
In the United States, a number of federal agencies have awarded multimillion dollar contracts to corporate data brokers to supply them with personal information about both citizens and foreign nationals. Privacy restrictions that limit the extent to which the government can itself collect personal information generally do not apply to such purchases at all. The government has deployed secrecy to great effect where these initiatives are concerned, with the result that we still understand too little about many of them. Legal regimes purporting to guarantee official transparency are in fact indeterminate on how much openness to require.
These processes let important decisionmakers in both the private and public sectors exist behind a “one way mirror.” Even if full transparency would compromise data gathering, citizens must know whether certain critical information (including health data) is being commandeered by the domestic intelligence apparatus.
On January 16, 2009, the Department of Health and Human Services (HHS) and CVS entered into a resolution agreement requiring CVS to pay a $2.25 million fine and implement a corrective action plan for “potential violations of the HIPAA [The Health Insurance Portability and Accountability Act of 1996] privacy rule.” Why? CVS had allegedly been placing prescription bottles and labels into dumpsters that were accessible to the public. The bottles/labels contained protected health information (PHI), which CVS was required to safeguard under federal law.
Although HHS appears to regard the settlement as a success, given its prominence on the HIPAA enforcement section of HHS’s website, it is nothing of the sort. The agreement provides that CVS “expressly den[ies] any violation of HIPAA or the Privacy Rule, and further den[ies] any wrongdoing,” while HHS does not concede that CVS is “in compliance with the Privacy Rule.” HHS did agree with itself, however, releasing an FAQ (accompanying the press release) stating that under its Privacy and Security Rules: “covered entities are not permitted to simply abandon PHI or dispose of it in dumpsters or other containers that are accessible by the public or other unauthorized persons.”
Why is this old news important? This week I had a prescription filled at my local CVS pharmacy in Livingston, New Jersey. While standing at the pharmacy counter, I noticed that all of the filled prescriptions were stored directly behind the counter in plain view of any customer. Each prescription was inside a small bag to which a customer receipt was attached. The receipts in the front row of the storage bins were readable from the counter. The receipts contain PHI that is subject to the Privacy and Security Rules of HIPAA, including:
1) Full name,
2) Telephone number,
3) Day and month of birth, and
4) Drug name and dosage.
HHS maintains the authority for civil enforcement of violations of the Privacy and Security Rules promulgated pursuant to HIPAA. So, why is it that CVS allows the public to view its customers’ PHI in violation of HIPAA even while still subject to the corrective action plan for its prior alleged violations? Well, I asked the pharmacist on duty. The pharmacist acknowledged that it was a problem that the PHI could be viewed from the counter. However, CVS was expecting to remodel and “hopefully” the shelf would be placed farther away to render the PHI unreadable. Upon requesting the contact information for CVS’s privacy officer, the pharmacist readily provided such information and stated that she would “appreciate” someone actually reporting the apparent violation.
HHS was recently given additional enforcement tools under the HITECH provisions of the American Recovery and Reinvestment Act of 2009. Unfortunately, it does not appear that HHS is serious about enforcing its own regulations or resolution agreements; nor, if the flagrant placement of prescriptions in public view is indicative of its mindset, is CVS serious about HIPAA compliance.