What Will the New York Court of Appeals Protect: Your Medical Records from Disclosure or Hospitals from Liability?
A physician’s duty of confidentiality is based on an individual’s right to privacy and on the general principle that people seeking medical help should not be hindered or inhibited by fear that their medical conditions will become known to others. That assurance is necessary for the doctor to provide proper treatment.
The AMA’s Code of Medical Ethics states that information disclosed to a physician during the course of the patient-physician relationship is confidential. The Hippocratic Oath states: “I will respect the privacy of my patients, for their problems are not disclosed to me that the world may know.”
Neither the AMA’s ethical guidelines nor the Hippocratic Oath, however, is legally binding.
So to what extent can we really trust that our private information will not be shared with the rest of the world? Under existing law, that assurance seems quite vague.
HIPAA, for example, prohibits healthcare providers from disclosing protected health information without patient authorization. Healthcare providers seem to strictly adhere to the Act (sometimes overzealously). Similarly, New York Public Health Law § 4410 imposes a duty upon healthcare providers to maintain the confidentiality of patient treatment records.
These statutes, however, do not create a private cause of action: if your health information is improperly disclosed to third parties, you cannot go to court and sue a healthcare provider for violating the statute. Both statutes mainly provide a standard under which doctors and hospitals should operate.
Therefore, if you are in New York and your medical information has been disclosed, your only remedy is a common-law claim for breach of the fiduciary duty of confidentiality, which springs from the implied covenant of trust and confidence inherent in the physician-patient relationship and whose breach is actionable as a tort.
But in an unexpected twist, which the Court of Appeals is scheduled to address, these common-law protections could soon be effectively eviscerated.
Under the common-law doctrine of respondeat superior, an employer is vicariously liable for the actions of its employees, but only when they commit a negligent act within the scope of their employment and in furtherance of the employer’s business. Thus, if a hospital employee accidentally sends your medical records to your neighbor, the hospital will be liable for that act. But what would happen if a nurse looked into your medical chart, learned that you have an STD, and called your girlfriend to inform her? In that scenario the nurse does not act within the scope of her employment; she commits a willful wrong motivated by personal interest. Will the hospital be liable, and should it be?
The Appellate Division, Third Department recognized that reliance on the traditional doctrine of respondeat superior in such a case would render the protection of medical information a nullity, because in most cases the wrongful disclosure would be made outside the employee’s scope of employment. In Doe v. Cmty Health Plan-Kaiser Corp. (709 N.Y.S.2d 215 (3d Dep’t 2000)), the court explained that a corporation always acts through its agents, servants, and employees, and should be directly responsible if a patient’s confidences are breached.
The Second Circuit recently declined to follow this precedent in Doe v. Guthrie Clinic, Ltd. and, on March 25, 2013, certified the question to the New York Court of Appeals.
If the Court of Appeals rules that the corporation should be liable, the ruling will dramatically expand the doctrine of respondeat superior. Such an expansion may very well be justified in light of the high sensitivity of medical information, but when the law creates one exception, there is always the risk of sliding down the slippery slope.
If, on the other hand, the Court adheres to the traditional doctrine, the protections afforded to patients’ healthcare information will remain limited, and if medical information that a hospital is duty-bound to protect appears on Facebook, the hospital may simply wash its hands of any responsibility.
Today the Supreme Court will hear oral arguments in IMS Health v. Sorrell. The case pits medical data giant IMS Health (and some other plaintiffs) against the state of Vermont, which restricted the distribution of certain “physician-identified” medical data if the doctors who generated the data failed to affirmatively permit its distribution.* I have contributed to an amicus brief submitted on behalf of the New England Journal of Medicine regarding the case, and I agree with the views expressed by brief co-author David Orentlicher in his excellent article Prescription Data Mining and the Protection of Patients’ Interests. I think he, Sean Flynn, and Kevin Outterson have, in various venues, made a compelling case for Vermont’s restrictions. But I think it is easy to “miss the forest for the trees” in this complex case, and want to make some points below about its stakes.**
Privacy Promotes Freedom of Expression
Privacy has repeatedly been subordinated to other, competing values. Priscilla Regan chronicles how efficiency has trumped privacy in U.S. legislative contexts. In campaign finance and citizen petition cases, democracy has trumped the right of donors and signers to keep their identities secret. Numerous tech law commentators chronicle a tension between privacy and innovation. And now Sorrell is billed as a case pitting privacy against the First Amendment.
In an article entitled “Monitoring America,” Dana Priest and William Arkin describe an extraordinary pattern of governmental surveillance. To be sure, in the wake of the attacks of 9/11, there are important reasons to increase the government’s ability to understand threats to order. However, the persistence, replicability, and searchability of the databases now being compiled for intelligence purposes raise very difficult questions about the use and abuse of profiles, particularly in cases where health data informs the classification of individuals as threats.
First, a little background. We traditionally think of law enforcement as needing some kind of probable cause to ground or justify the pursuit of an investigation. However, with the rise of the new Information Sharing Environment (often implemented through fusion centers, which provide one-stop shopping for access to data), a much broader set of law enforcement prerogatives is emerging. Fusion centers have promoted a domestic intelligence apparatus, which is designed not merely to solve crimes but also to generate a wide range of knowledge that could lead to the deterrence and detection of “all threats, all crimes, all hazards.”
The Department of Homeland Security has taken a number of innovative steps to deputize monitoring of individuals, asking personnel ranging from local law enforcement to cable repairmen to hotel cleaners to be on the alert for suspicious activity. Once such activity is detected, the detector can in some cases file a persistent Suspicious Activity Report. These SARs are entered into an FBI database, and quite possibly inform many other counterterror, intelligence, and even private sector initiatives. Arkin & Priest’s story gives a sample Suspicious Activity Report, and speculates about how its creation may affect the object of the profile:
The FBI is building a vast repository controlled by people who work in a top-secret vault on the fourth floor of the J. Edgar Hoover FBI Building in Washington. This one stores the profiles of tens of thousands of Americans and legal residents who are not accused of any crime. What they have done is appear to be acting suspiciously to a town sheriff, a traffic cop or even a neighbor.
[For an example of what might go in the database, consider] Suspicious Activity Report N03821 says a local law enforcement officer observed “a suspicious subject . . . taking photographs of the Orange County Sheriff Department Fire Boat and the Balboa Ferry with a cellular phone camera.” The confidential report, marked “For Official Use Only,” noted that the subject next made a phone call, walked to his car and returned five minutes later to take more pictures. He was then met by another person, both of whom stood and “observed the boat traffic in the harbor.” Next another adult with two small children joined them, and then they all boarded the ferry and crossed the channel.
All of this information was forwarded to the Los Angeles fusion center for further investigation after the local officer ran information about the vehicle and its owner through several crime databases and found nothing. Authorities would not say what happened to it from there, but there are several paths a suspicious activity report can take:
At the fusion center, an officer would decide to either dismiss the suspicious activity as harmless or forward the report to the nearest FBI terrorism unit for further investigation. At that unit, it would immediately be entered into the Guardian database, at which point one of three things could happen:
The FBI could collect more information, find no connection to terrorism and mark the file closed, though leaving it in the database. It could find a possible connection and turn it into a full-fledged case. Or, as most often happens, it could make no specific determination, which would mean that Suspicious Activity Report N03821 would sit in limbo for as long as five years, during which time many other pieces of information about the man photographing a boat on a Sunday morning could be added to his file[.]
[That data includes] employment, financial and residential histories; multiple phone numbers; audio files; video from the dashboard-mounted camera in the police cruiser at the harbor where he took pictures; and anything else in government or commercial databases “that adds value,” as the FBI agent in charge of the database described it. That could soon include biometric data, if it existed; the FBI is working on a way to attach such information to files. Meanwhile, the bureau will also soon have software that allows local agencies to map all suspicious incidents in their jurisdiction.
Given the expansive reservoirs of data already accessible to fusion centers, I would not be surprised if they took the position that health records “add value” to the data gathering. Civil libertarians can object to many types of data gathering, but for purposes of this post, I would like to focus on healthcare data. First, to what extent can a health condition itself give rise to a Suspicious Activity Report? Second, are there any concerted efforts to deputize medical personnel to report on suspicious activity? Finally, and I believe most importantly, how is the vast store of healthcare data presently associated with individuals utilized by the data mining programs of the surveillance state?
We daily learn of troubling data gathering practices online. For example, Arvind Narayanan has described rather indiscriminate data gathering by third parties:
The Facebook “like” button is a prominent . . . example of third-party tracking not directly related to behavioral advertising. . . . Facebook can keep track of all the pages you visit that incorporate the button, whether or not you click it. Did you know, for example, that the UK National Health Services website has the like button, among other trackers, on all their disease pages?
One need only visit the Wall Street Journal’s recent series on privacy to realize that all manner of health-related data can be generated about an individual with little to no restrictions imposed by HIPAA or effectively enforced by the FTC. To take one example, consider the scraping (copying) of data at a site called PatientsLikeMe:
At 1 a.m. on May 7, the website PatientsLikeMe.com noticed suspicious activity on its “Mood” discussion board. There, people exchange highly personal stories about their emotional disorders, ranging from bipolar disease to a desire to cut themselves. It was a break-in. A new member of the site, using sophisticated software, was “scraping,” or copying, every single message off PatientsLikeMe’s private online forums.
Who knows how many incidents like this go unreported each year? Finally, the government itself is keeping a record of prescription drug use, which apparently was used after the Virginia Tech shooting. Law enforcement exceptions to HIPAA (and, presumably, HITECH) may give an official imprimatur for similar activities even if they involve “covered entities.”
The clash of intelligence prerogatives and health privacy always raises difficult issues. For now, I would just like to make one claim about the need for the government to be forthright about whether it is collecting health care data while profiling citizens. Such data gathering should not be what David Pozen calls a “deep secret;” that is, citizens should not be “in the dark about the fact that they are being kept in the dark.” Rather, we need to understand whether this very personal and important data is being commandeered to fight an “enemy within.”
There are broader principles for fair disclosure of the workings of the surveillance state. First, people are all too eager to sign up for new health “apps” and affinity groups without having any sense of how these activities and affiliations can affect their future. There is still a lazy public/private distinction affecting far too much of consumer conduct; I hear so-called internet experts wondering why anyone would worry about data stored by a private company because “they’re not the government.” Arkin & Priest have consistently shown that the public/private distinction is evanescent at best, a confounding development in social affairs that leaves libertarians sounding like communists.
Julie Cohen’s recent article in Social Research observes that there is a much larger political economy of surveillance that has accelerated both data gathering and profiling:
Devaluation of privacy is bound up with our political economy and with our public discourse about information policy in important ways that have little or nothing to do with official conduct. . . . Flows of data are facilitated by corporate data brokers like ChoicePoint, Experian, and Axciom. To help companies (and governments) make the most of the information they purchase, an industry devoted to “data mining” and “behavioral advertising” has arisen; firms in this industry compete with one another to develop more profitable methods of sorting and classifying individual consumers.
In the United States, a number of federal agencies have awarded multimillion dollar contracts to corporate data brokers to supply them with personal information about both citizens and foreign nationals. Privacy restrictions that limit the extent to which the government can itself collect personal information generally do not apply to such purchases at all. The government has deployed secrecy to great effect where these initiatives are concerned, with the result that we still understand too little about many of them. Legal regimes purporting to guarantee official transparency are in fact indeterminate on how much openness to require.
These processes let important decisionmakers in both the private and public sectors exist behind a “one way mirror.” Even if full transparency would compromise data gathering, citizens must know whether certain critical information (including health data) is being commandeered by the domestic intelligence apparatus.
On January 16, 2009, the Department of Health and Human Services (HHS) and CVS entered into a resolution agreement requiring CVS to pay a $2.25 million fine and implement a corrective action plan for “potential violations of the HIPAA [The Health Insurance Portability and Accountability Act of 1996] privacy rule.” Why? CVS had allegedly been placing prescription bottles and labels into dumpsters that were accessible to the public. The bottles/labels contained protected health information (PHI), which CVS was required to safeguard under federal law.
Although HHS appears to regard the settlement as a success, given its prominence on the HIPAA enforcement section of HHS’s website, it is nothing of the sort. The agreement provides that CVS “expressly den[ies] any violation of HIPAA or the Privacy Rule, and further den[ies] any wrongdoing,” while HHS does not concede that CVS is “in compliance with the Privacy Rule.” HHS did agree with itself, however, releasing an FAQ (accompanying the press release) stating that under its Privacy and Security Rules: “covered entities are not permitted to simply abandon PHI or dispose of it in dumpsters or other containers that are accessible by the public or other unauthorized persons.”
Why is this old news important? This week I had a prescription filled at my local CVS pharmacy in Livingston, New Jersey. While standing at the pharmacy counter, I noticed that all of the filled prescriptions were stored directly behind the counter in plain view of any customer. Each prescription was inside a small bag to which a customer receipt was attached. The receipts in the front row of the storage bins were readable from the counter, and they contain PHI subject to HIPAA’s Privacy and Security Rules, including:
1) Full name,
2) Telephone number,
3) Day and month of birth, and
4) Drug name and dosage.
HHS maintains the authority for civil enforcement of violations of the Privacy and Security Rules promulgated pursuant to HIPAA. So why does CVS allow the public to view its customers’ PHI in violation of HIPAA, even while still subject to the corrective action plan for its prior alleged violations? Well, I asked the pharmacist on duty. The pharmacist acknowledged that it was a problem that the PHI could be viewed from the counter. However, CVS was expecting to remodel, and “hopefully” the shelf would be moved farther away, rendering the PHI unreadable. When I requested the contact information for CVS’s privacy officer, the pharmacist readily provided it and stated that she would “appreciate” someone actually reporting the apparent violation.
HHS was recently provided with additional enforcement tools under the HITECH provisions of the American Recovery and Reinvestment Act of 2009. Unfortunately, it does not appear that HHS is serious about enforcing its own regulations or resolution agreements; nor, if the flagrantly violative placement of prescriptions is indicative of mindset, is CVS serious about HIPAA compliance.
The recent City of Ontario v. Quon decision has had a mixed reception among privacy advocates. Though many are disappointed that employees’ privacy rights have once again been narrowed, some have discerned helpful dicta in the case. However, I worry that, whatever the drift of thought among swing justices, economic imperatives and cultural shifts will mean a lot less privacy in the workplace of the future. Health care in particular offers a few interesting bellwethers.
As an opinion piece by Theresa Brown explains, maintaining proper staffing levels in hospitals is becoming increasingly difficult. Surveillance systems are offering one way to address the problem; work can be performed more intensively and efficiently as it is recorded and studied. But such monitoring has many troubling implications, according to Torin Monahan (in his excellent book, Surveillance in a Time of Insecurity):
The tracking of people [via Radio Frequency Identification Tags] represents a . . . mechanism of surveillance and social control in hospital settings. This includes the tagging of patients and hospital staff. . . . When administrators demand the tagging of nurses themselves, the level of surveillance can become oppressive. . . . [because nurses face] labor intensification, job insecurity, undesired scrutiny, and privacy loss. . . . To date, such efforts at top-down micromanagement of staff by means of RFID have met with resistance. . . . One desired feature for nurses and others is an ‘off’ switch on each RFID badge so that they can take breaks without subjecting themselves to remote tracking. (122)
Like the “nannycam” employed by many a wary parent, the nurse-cam may be seen as a way to protect the vulnerable. It may also increase the accuracy of evidence in malpractice cases. On the other hand, inserting a tireless electronic eye to monitor what is already an extremely stressful job may create many unintended consequences, or deter people from going into nursing altogether. Even advocates of pervasive surveillance recognize these difficulties.
The increasing pressure to monitor what happens inside hospitals reminds me of a recent article by Thomas Goetz in Wired (no link yet) on Google co-founder Sergey Brin’s quest to find a cure for Parkinson’s disease. As Goetz describes it, a new form of “high-speed science” depends on rapid accumulation of as much data as possible:
In Brin’s way of thinking, each of our lives is a potential contribution to scientific insight. We all go about our days, making choices, eating things, taking medications, doing things—generating what is inelegantly called data exhaust. . . . With contemporary computing power, that data can be tracked and analyzed. “Any experience that we have or drug that we may take, all those things are individual pieces of information. Individually, they’re worthless, they’re anecdotal. But taken together they can be very powerful.” In computer science, the process of mining such large data sets for useful associations is known as a market-basket analysis.