In her article, “Institutional Competence to Balance Privacy and Competing Values: The Forgotten Third Prong of HIPAA Preemption Analysis,” Barbara J. Evans takes on the well-settled belief — or “rumor,” as she calls it — that the HIPAA “Privacy Rule merely sets a floor of privacy protection that leaves states free to set stricter privacy standards.” (A draft of this article is available on SSRN, and it will be published in the University of California-Davis Law Review in 2013.) Although this general rule of HIPAA preemption is largely accurate, the article argues that it is wrong with respect to an enumerated “class of public health activities that Congress deemed to have high social value,” including “reporting of disease or injury, child abuse, birth, or death, public health surveillance, or public health investigation or intervention.”
Professor Evans begins with a textual argument, pointing out that HIPAA’s statutory text specifically includes a third prong, while HIPAA’s Privacy Rule, one of HIPAA’s key implementing regulations, collapses the statutory language into two prongs. The article maintains that in doing so, the “Privacy Rule ignored a clear statutory instruction to preempt state privacy law in a specific circumstance where Congress determined that individual privacy interests should give way to competing public interests.” In this specific public health context, she continues, “the HIPAA statute creates what might be called a ‘canopy,’ to shelter specific socially important data uses from more stringent privacy laws.” The author buttresses her analysis with legislative and regulatory history as well as a comparison with the structure of ERISA preemption provisions.
Noting that the statute speaks directly to this issue, Professor Evans maintains that the public health portion of the Privacy Rule is not entitled to Chevron or Skidmore deference where its interpretation is contrary to the statute and the agency did not offer a persuasive account to justify its interpretation. Rather, “the HIPAA statute preempts state privacy laws — even ones that are more stringent than the HIPAA Privacy Rule — in situations where state laws would interfere with public health surveillance and investigations.”
Professor Evans attributes the inconsistency between the Privacy Rule and HIPAA to political savvy rather than incompetent agency drafting. She asserts that HHS knew states feared their privacy laws would be preempted, and thus the agency took a modest approach in the Privacy Rule, leaving unspoken the effect of the third prong on more stringent state laws in the limited context of enumerated public health activities. The statutory text, however, reflects Congress’s choice to “trust no institution other than itself” to “strike the balance between privacy and competing public interests.” There was a conscious choice not to permit a patchwork of varying state laws to frustrate the development of the multi-state, interoperable databases needed for the enumerated public health activities.
This article breathes new life into statutory language that has been largely overlooked in the sixteen years since HIPAA’s enactment and is critical reading for anyone interested in public health surveillance, investigation, and privacy law. Professor Evans argues that facilitating access to large-scale, multi-state, interoperable databases of health-related data for tens or even hundreds of millions of people could speed “the detection of drug safety risks, unmask ineffective or wasteful treatments, and understand disparities in health outcomes among various population subgroups,” while “unduly restrict[ing] access to data and biospecimens can very literally kill people.”
The article closes with an invitation to scholars for further “dialogue about [HIPAA]’s forgotten preemption provision,” an invitation the health law community would be wise to accept. While she readily acknowledges that her conclusions are unorthodox, they will undoubtedly generate substantial and serious academic discussion.
Another important article for interoperability policymaking is Leslie P. Francis’s “Skeletons in the Family Medical Closet: Access of Personal Representatives to Interoperable Medical Records,” which was recently posted to SSRN and published in volume 4, issue 2 (2011) of the Saint Louis University Journal of Health Law & Policy.
With HIPAA’s Privacy Rule and the HITECH Act, federal law now grants patients the right to access their own medical records, including EHRs, with some limitations for certain records, such as psychotherapy notes. Importantly, personal representatives now generally enjoy the same rights of access to medical records that patients themselves hold, consistent with state law.
In addition, although HIPAA preempts state laws that are inconsistent with federal law, HIPAA generally (see Professor Evans’s important caveat above) does not preempt state laws that protect privacy more stringently than federal law. A state law is deemed more stringent when, for example, it provides individuals with greater access to their health information. As a result, “states may expand the individual right of access to health information, but may not contract it.”
The article points out an unintended consequence of such an expansion, however, given federal law on access: in states that grant patients and their personal representatives equal rights of access, any increase in patients’ rights expands personal representatives’ access in step.
But given the breadth of interoperable EHRs, patients may not want or expect their personal representatives to have access equal in scope to their own. Interoperable EHRs may very well contain records of medical care that are not directly relevant to the patients’ current care and that patients may not want their personal representatives to see. Professor Francis offers the example of an older patient being treated for a stroke who may not want her child to learn about her prior, unrelated pregnancy termination or psychiatric history – what Professor Francis calls “the metaphorical skeletons in her closet.”
The article thus explores the extent to which states may protect patient privacy and confidentiality in this legal framework by regulating personal representatives’ access to patient records. Although states generally either grant or deny personal representatives access to patient records, Professor Francis details how some have been more nuanced. Some, for example, permit patients to use advance directives to define the scope of access by personal representatives, such as on a need-to-know basis, while others restrict personal representative access to mental health or substance abuse treatment records.
Given the importance of respect for private autonomy, Professor Francis then makes four recommendations:
(1) Advance directive statutes should permit competent patients to designate the scope of their personal representatives’ access to interoperable medical records, ideally with respect to specific types of information, such as mental health, substance abuse, and reproductive history, and options such as all information, information only as needed to make care decisions, or no information.
(2) When patients do not have advance directives, there should be a presumption that personal representatives have access only to records needed for decision making about the patient’s care.
(3) Interoperable medical records should be designed to permit special management of sensitive medical information, such as mental health or substance abuse treatment records, to which personal representatives would have access only when necessary for emergency care.
(4) These recommendations generally should apply regardless of whether patients have mental illness or cognitive disabilities.
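To make the design question in these recommendations concrete, here is a minimal sketch of how an EHR access-control layer might encode them. The category labels, the `Directive` structure, and the default rules below are entirely hypothetical, offered only as one way a system could implement per-category advance directives (recommendation 1), a need-to-know presumption (recommendation 2), and emergency-only access to sensitive records (recommendation 3); they are not drawn from any statute or real system.

```go
package main

import "fmt"

// Category is a hypothetical sensitivity label attached to each record.
type Category int

const (
	General Category = iota
	MentalHealth
	SubstanceAbuse
	ReproductiveHistory
)

// Directive holds per-category grants set by a competent patient in advance.
type Directive struct {
	Grants map[Category]bool
}

// repCanAccess decides whether a personal representative may view a record
// of the given category, applying the three recommendations in order.
func repCanAccess(cat Category, d *Directive, neededForCare, emergency bool) bool {
	// Recommendation 1: an advance directive, when on file, controls.
	if d != nil {
		if allowed, ok := d.Grants[cat]; ok {
			return allowed
		}
	}
	// Recommendation 3: sensitive categories default to emergency-only access.
	if cat == MentalHealth || cat == SubstanceAbuse || cat == ReproductiveHistory {
		return emergency
	}
	// Recommendation 2: otherwise, a need-to-know presumption applies.
	return neededForCare
}

func main() {
	// Without a directive, psychiatric history stays hidden outside an emergency.
	fmt.Println(repCanAccess(MentalHealth, nil, true, false)) // prints false
	// A directive granting access overrides the default.
	d := &Directive{Grants: map[Category]bool{MentalHealth: true}}
	fmt.Println(repCanAccess(MentalHealth, d, false, false)) // prints true
}
```

The point of the sketch is that the policy choices Professor Francis describes are straightforward to express in software once records carry sensitivity labels; the hard questions are legal and ethical, not technical.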
[Ed. Note: We are pleased to welcome Ana Liggio, Esq., to HRW. She is a health care and technology lawyer in practice for over 15 years. Prior to pursuing her LL.M. in Health Law here at Seton Hall Law, she was Director, Law Department, for Sony Electronics.]
The CMS website explains that meaningful use “means providers need to show they’re using certified EHR technology in ways that can be measured significantly in quality and in quantity.” As CMS moves toward finalizing the Meaningful Use Stage 2 requirements, I would like to introduce the concept of “meaningful experience” as an essential corollary to that of “meaningful use.”
Meaningful experience takes the idea a step further: it evaluates proposed and existing criteria by the value they bring to provider and healthcare consumer stakeholders. While “meaningful use” focuses on ensuring that the financial beneficiaries of the Medicare and Medicaid EHR Incentive Program (the “Program”), namely the Certified Electronic Health Record Technology (“CEHRT”) industry and eligible healthcare providers (insofar as meaningful use bonus payments are at stake), continue to operate their EHRs in a purposeful manner, there are additional, important stakeholders to consider. With billions of federal and state dollars earmarked for the Program and a strong interest in seeing EHRs enjoy long-term success, taking a broader view of stakeholders and making their experiences more transparent will help the Program thrive. Meaningful Use Stage 2 is the perfect time to look toward ensuring meaningful experience.
The Program is in full swing, with the Centers for Medicare and Medicaid Services (“CMS”) having released the notice of proposed rulemaking (“NPRM”) on Meaningful Use Stage 2 in the Federal Register on March 7, 2012.
The CMS blog explains: “Today’s proposed rules focus on using EHRs to improve health and health care while reducing the burden on physicians and hospitals where possible.” With early participation rates appearing strong, CMS continues to be cautious about keeping industry groups engaged and seeking out robust commentary through the NPRM. CMS clearly wants the healthcare industry to continue up the “EHR Escalator” without having anyone jump off for being frustrated or overwhelmed. To date, the strategy is working, as the CEHRT industry and healthcare providers appear to be embracing the Program. However, as Nicolas Terry points out in his article “Anticipating Stage Two: Assessing the Development of Meaningful Use and EMR Deployment,” ultimately, growth will have to be endogenous, fueled by innovation and consumer demand.
The comprehensive NPRM for Meaningful Use Stage 2 demonstrates CMS’s commitment to considering the experiences and opinions of the interested industries. The ONC also asks data holders and non-data holders to take a pledge “to empower individuals to be partners in their health through health IT.” There is no doubt that the Program is making huge strides, continuing to chip away at the difficult issues of interoperability, access, privacy, and security, and pushing the United States slowly but surely toward the much higher healthcare IT standard enjoyed by many other developed nations. Moving into Stage 2, CMS seeks to enhance interoperability among different entities and further patient involvement by requiring increased patient access to health information. That being said, the National Coordinator for Health Information Technology, Farzad Mostashari, explains that Stage 2 is meant to be more “evolutionary than revolutionary.” Importantly, Stage 2 also begins an initiative to align the Program’s requirements with other complementary, ongoing healthcare reform initiatives involving national quality measures and the development of ACOs.
Reading through the NPRM, I saw a few areas on which CMS could focus to help build a self-sustaining system. First, the initial iteration of the Program was clearly written with an eye toward maximizing meaningful use for family care and general practitioners, not toward other types of practices, such as pediatrics, various specialties, and physicians whose practices do not entail much face-to-face patient interaction (e.g., radiologists); these practices should be given further attention. Second, while CMS provides something of a return-on-investment analysis in the NPRM, it apologetically declares it too early in the Program to provide meaningful data; CMS could use the attestation process to collect the necessary data. Finally, healthcare consumers — those taxpayers who fund this program — should be actively considered and made aware of the enhancements and improvements that comprise the Program, which will offer them a more efficient, accessible, safe, and evidence-based healthcare experience; a “meaningful user” designation for CEHRT users who meet certain criteria could be developed to help providers publicize their investment in the Program and the attendant benefits it will bring to their patients. Meaningful Use Stage 2 is the perfect time to address these issues and move the Program forward in a way that makes it self-sustaining for the long term, not because of incentive funding, but because meaningful use is providing a meaningful experience to the various EHR stakeholders.
As with early versions of the Medicare Shared Savings Program and healthcare reform generally, the focus of the Program’s meaningful use objectives and criteria, initially at least, is on general practitioners and how they can use EHRs to advance the overall wellbeing of the population. This goal is laudable, of course, but the population of eligible providers extends well beyond PCPs. Certain objectives and measures allow providers to claim an exclusion if they do not apply to their practice, thereby not penalizing practitioners for whom compliance would be unnecessary and inefficient. However, attention to these different categories of practice could yield alternative objectives and measures. If one were to consider meaningful experience in addition to meaningful use, the attestation would ask eligible professionals (“EPs”) who claim exclusions to use, and possibly attest to, alternative meaningful use standards applicable to their practices. For instance, there is a proposed measure for recording the height, weight, and blood pressure of 80% of an EP’s patients as structured data. There is an available exclusion, however, for EPs who do not believe that recording such vital signs is “relevant to their scope of practice.” An EP who claims the exclusion simply gets a pass on this field during the attestation process. Alternatively, a required (or even optional) free-form response area could be provided in the attestation each time an EP claims an exclusion. As time goes on, the data collected would allow CMS to customize attestations, and CEHRT requirements as well, to different specialties, so that meaningful use translates into meaningful experience for those whose practices do not fit the general practitioner mold on which the first versions of Meaningful Use were based.
Certainly the technology will allow, rather easily, for modifications where appropriate, if the effort is made to ask those in the field what would be meaningful to their practices and to encourage them to use the EHR tools available to them in such ways.
Because the proposed rule is anticipated to have an annual effect of over $100 million on the economy, a Regulatory Impact Analysis (RIA) that measures costs and benefits must be performed. While CMS does a fair job of estimating costs to providers of implementing EHRs and costs to taxpayers of funding the Program, it has not done much to quantify the benefits gleaned. The NPRM qualifies its analysis by pointing to various unknowns and a lack of “new data regarding rates of adoption or costs of implementation.” Without specific data, it estimates various “high and low” scenarios for different practice settings and ultimately concludes that “there are many positive effects of adopting EHR,” as well as various benefits for society. While I tend to agree with this conclusion as a general matter, why not collect the actual data during the attestation process? Ask the EHR attesters how much their systems cost to install and maintain. Ask the EHR attesters where the systems are adding value to their practices and for their patients. Yes, it’s a leap of faith to ask these questions because the answers may not offer a perfect picture, but they will offer an honest representation of the current state that can be addressed going forward. It is only fair to give the stakeholders an honest assessment, and it would not be difficult to collect the data. EHR is all about collecting healthcare data and crunching numbers to spot trends and identify areas where improvements can be made; let’s use those same principles to perform the same analysis with regard to the EHR technology itself.
Finally, to assist providers who have made the investment and will continue to feed important data to the various government health databases, CMS could offer some type of certification that the providers could use in marketing their practices. For all the good that EHR is meant to do in terms of patient safety, efficiency of care and meaningful communication between patients and their providers, let’s devise a way to inform patients about which providers are running state-of-the-art practices. Providers who attest to meeting the meaningful use requirements could be offered the option of using a certified meaningful user designation and displaying a certain logo, all of which would indicate to the public that such providers are using the latest healthcare technology. For healthcare consumers who consider it important to have the ability to access their records or have their prescriptions transmitted electronically, for example, this designation would help lead them to the types of practices they desire. Assuming this is the future of healthcare and what the American public desires or will come to desire of its healthcare providers, such a tool would be useful to the providers and healthcare consumers alike.
At the end of the day, the success of the EHR program, and the value it will have brought to the US healthcare system, will be measured by the experience of the healthcare providers and consumers. In the best-case scenario, there will be data showing that the EHR Program has achieved the desired results with a minimum burden placed upon providers. But what will actually entice providers to continue to make “meaningful use” of the systems will be when meaningful use results in an experience they deem worthwhile for themselves and their healthcare consumers and when their patients agree. As such, CMS should use the attestation process and resultant data to continuously measure the actual costs and benefits and make adjustments as needed. During the attestation process, it could ask providers to suggest alternative meaningful uses for EHRs when the existing measures do not apply and to volunteer cost data and their impressions of meaningfulness. Finally, CMS could give providers a way to publicize their commitment to using technology to enhance patient care. Some time and effort devoted to meaningful experience will allow meaningful use to translate into a self-sustaining, successful program.

[Ed. note: this piece originally ran on April 17, 2012, but was lost in the vagaries of cyberspace to a blog mishap. It’s just too good to lose, and so it here enjoys a repeat performance.]
Since HHS’s data breach notification regulations went into effect in September 2009, 385 incidents affecting 500 or more individuals have been reported to HHS, according to its website. A total of 19 million individuals have been affected by a large data breach since 2009. The regulations require a covered entity that discovers a reportable breach affecting 500 individuals or more to report the incident to the HHS Office for Civil Rights immediately. After an investigation, HHS publicly posts information about the reported incident on its website on what has become known as the “Wall of Shame.” Of the 385 reported incidents, six separate incidents each affected a million individuals or more. In its 2011 annual report to Congress, HHS reported that covered entities notified approximately 2.4 million individuals affected by a breach in 2009 and 5.4 million individuals the following year. This number grew in 2011, and it will likely continue to grow in 2012. To date, the largest breach took place in October 2011 at Tricare, the health insurer of American military personnel, and affected 4,901,432 individuals after storage tapes containing protected health information (PHI) were stolen from a vehicle. These numbers are staggering, but fortunately more can be done, and should be done, to prevent data breaches.
Data breaches can cause great harm to the affected individuals, providers, and institutions. Individuals may experience embarrassment and harassment because sensitive health information was released. Individuals are vulnerable to identity theft and financial fraud if personal information such as Social Security numbers was accessed. Increasingly, institutions are offering credit monitoring services to affected individuals to watch for potential fraud. Similarly, data breaches carry a very high cost for institutions, which must spend great sums to investigate a breach and report it to HHS, the media, and the affected individuals. An institution’s or provider’s reputation can also be harmed through negative publicity and the loss of consumers. More institutions are hiring public relations teams after a breach to minimize the fallout and negative publicity. The threat of litigation and class action lawsuits following a breach is also present and very real. Stanford Hospital, Tricare, and Sutter Health are all facing million- and billion-dollar class action lawsuits over their 2011 data breaches.
The bad news is that data breaches are impossible to predict, and it is impossible to protect against every type of possible breach. Unfortunately, even the strongest policies, precautions, and security measures cannot protect an entity from a hacker, a thief, or an employee’s or business associate’s honest mistake. As more providers and institutions adopt electronic health record systems and digitize their records, data breaches will continue to occur, and large breaches will be spotlighted by the media. Pursuant to the regulations, a covered entity must alert a prominent media outlet serving a state if a reported breach affects more than 500 residents of that state. Based on the events of last year alone, it is clear that the media loves to report on data breaches and will continue to do so. Hopefully this public exposure will serve to increase accountability to the public rather than instill fear in the public and hurt consumer confidence in the EHR movement.
The good news is that more can be done by providers and institutions to prevent harmful and costly data breaches. Data security and patient privacy should be the focus of the industry in the upcoming years because they are just as important as meaningful use certification. The benefits flowing from the Medicare incentive payments that an institution may receive under the HITECH Act can be canceled out in the event of a large and debilitating data breach. It would be wise for covered entities to focus on preventing data breaches as much as on achieving meaningful use.
There is no easy solution to preventing breaches, but encryption is one surefire way an entity can better protect itself from a costly breach. As entities become more familiar with EHR systems and recognize the risks involved in storing and transferring PHI data, implementing encryption technology should become a top priority for each entity.
Encryption of PHI is a major step a provider or institution can take to secure its sensitive patient data. Encryption is the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key. According to HHS guidance, if an entity encrypts its data in accordance with the National Institute of Standards and Technology (NIST) standards for encryption, then any breach of the encrypted data falls within a safe harbor and does not have to be reported. This is an incredibly important safe harbor that could save an entity a great deal of money. It is shocking that more entities, especially those with the means and resources to install a qualifying encryption system, do not utilize encryption technology on their electronic devices, especially portable devices.
Of the 385 reported breach incidents, thirty-nine percent involved a lost or stolen laptop or other portable media device containing unencrypted PHI. A report recently released by Redspin, an IT security firm, states that data breaches stemming from employees losing unencrypted devices spiked 525 percent in the last year alone. This statistic confirms that devices, including laptops, tablets and smartphones, pose a very high risk for a data breach. Redspin reported that eighty-one percent of healthcare organizations now use smartphones, iPads, and other tablets, but forty-nine percent of respondents in a recent healthcare IT poll by the Ponemon Institute said that nothing was being done to protect the data on those devices. At the very least, these reports and the statistics on HHS’s “Wall of Shame” should encourage entities to encrypt their portable electronic devices that contain sensitive PHI.
There are of course costs associated with adopting encryption technology in an EHR system. There are costs to install the system and maintain it with the help of an IT expert. Encryption can also slow down the processes used in sharing information. After all, one of the main goals of an EHR system is to make it easier for providers to share health information about their patients. An entity should work with an IT expert to determine what information should be encrypted in order to maximize the efficiencies of an EHR system. Despite the costs, the money and resources spent implementing encryption technology can be well worth it and are a smart investment for any entity with an EHR system. In a study published in 2011, the Ponemon Institute found that the cost of a data breach was $214 per compromised record and that the average total cost of a breach was $7.2 million. In light of the large data breaches that have been reported, it is clear that the costs of a breach can be much higher than the costs to implement encryption technology.
Under the HITECH Act and HHS’s interim final rule, encryption of health information is not mandatory. It remains to be seen whether HHS will impose a mandatory encryption policy on all devices or, at the very least, all portable devices capable of storing or transferring PHI, when it releases the final version of the data breach notification regulations sometime this year. The health care industry’s lack of encryption for patient information has drawn attention on Capitol Hill. At a November 2011 hearing before the Senate Judiciary Committee’s panel on Privacy, Technology and Law, Deven McGraw of the Center for Democracy and Technology testified that “we know from the statistics on breaches that have occurred since the notification provisions went into effect in 2009 that the healthcare industry appears to be rarely encrypting data.” At the hearing, Senator Tom Coburn, a physician himself, and Senator Al Franken, the chair of the panel, both voiced their concern over patient privacy protection and the current regulatory scheme. Senator Franken has said that he is contemplating legislation to encourage encryption by providers, although no action has been taken.
In the interim, it is reasonably clear that most, if not all, entities can benefit from implementing encryption technology when considering the costs and headaches associated with a data breach. When encryption is done properly, it has the potential of saving an entity a large sum of money, perhaps millions of dollars, in costs and fines — and that should be reason enough for entities to start taking this step in EHR technology.
In an effort to keep up with advancing technology, the Food and Drug Administration (FDA) has proposed new regulations to monitor medical smartphone applications (apps). The draft proposal states that any mobile app intended for use in performing a medical device function meets the definition of a medical device under the Federal Food, Drug, and Cosmetic Act. Specifically, these mobile medical apps must either be used as an accessory to a regulated medical device or transform a mobile platform into a regulated medical device.
To further clarify what apps will be regulated, the document notes that a mobile app is a device “when the intended use of a mobile app is for the diagnosis of disease or other conditions, or the cure, mitigation, treatment, or prevention of disease, or is intended to affect the structure or any function of the body of man.” The guidance document explains how the intended use of mobile apps can be shown by labeling claims, advertising materials, or oral or written statements by manufacturers or their representatives.
The goal of the regulations is to protect patient safety, though to date there have been no adverse events reported to the FDA. The proposal seems to be forward-looking in creating a framework for mobile app manufacturers. According to the Associated Press, there are already more than 17,000 medical applications available.
Physicians can use mobile phones to calculate prescription dosages, review disease treatment guidelines, and explain diagnoses and procedures to patients. The FDA expects that by 2015, 500 million smartphone users will rely on health care apps.
Two medical apps have already received FDA approval for use by physicians. The first is a prenatal care app called AirStrip OB. Cleared in 2009, the app allows obstetricians to use their phones to remotely access real-time data for mothers and babies. The second app, approved earlier this year, is Mobile MIM. This app allows hospitals and doctors’ offices to send images to physicians’ mobile devices. The FDA noted that the software should not be used to replace radiology workstations as the primary way to view medical images, but is useful when a physician has only limited access to one.
Opinions from within the industry vary on the new guidelines. Some feel that the regulation is both necessary and welcome. By regulating the medical app industry, the FDA is offering market players clear guidelines for continued development. Others argue that the regulations may be too far reaching. For example, medical apps to calculate prescription dosages for patients are not new and are based on accepted formulas. Smartphone apps that achieve the same goal increase efficiency and do not put patients at risk, and therefore do not merit differential treatment.
The proposed regulations raise two main concerns for patients and physicians. The first is a privacy concern, similar to the drawbacks considered for other forms of Electronic Health Records. Transferring data between hospital systems and physician smartphones will increase confidentiality and security concerns. Once patient data is accessed on a smartphone, privacy may easily be breached should the phone be lost, stolen, or used by another person. The second concern is that these proposed rules could increase the purchase price of medical apps. App developers will likely face increased costs for filing applications and seeking legal counsel, and those costs will be passed on to end users.
The draft proposal is currently in an open comment period, and the FDA will amend the regulations after the comment period closes.
As ACA implementation lumbers ahead, and challenges to it slouch toward the Supremes, the U.S. health care system’s arbitrary old ways continue to mystify and frustrate. Consider this story on one person’s quest to obtain insurance:
Most employees assume that if they lose their job and the health coverage that comes along with it, they'll be able to purchase insurance somewhere. . . . My husband, teenage daughter and I were all active and healthy, and I naïvely thought getting health insurance would be simple. . . .
Then the first letter arrived — denied. . . . What were these pre-existing conditions that put us into high-risk categories? For me, it was a corn on my toe for which my podiatrist had recommended an in-office procedure. My daughter was denied because she takes regular medication for a common teenage issue. My husband was denied because his ophthalmologist had identified a slow-growing cataract. Basically, if there is any possible procedure in your future, insurers will deny you. . . .
As I filled out more applications, I discovered a critical error in my strategy. The first question was "Have you ever been denied health insurance?" Now my answer was yes, giving the new companies reason to be wary of my application. I learned too late that the best tactic is to apply simultaneously to as many companies as possible, so that you don't have to admit to a denial.
As was recently reported, “50 to 129 million (19 to 50 percent of) non-elderly Americans have some type of pre-existing health condition.” The “health care market” is sending a strong signal: don’t step out of the system if you have any continuing need for even minor care.
But more worrisome are the types of information circulating about you that you aren't even aware of. Consider this story from Businessweek about the profiling of insurance applicants by third-party intermediaries:
Most consumers and even many insurance agents are unaware that Humana, UnitedHealth Group, Aetna (AET), Blue Cross plans, and other insurance giants have ready access to applicants' prescription histories. These online reports, available in seconds from a pair of little-known intermediary companies at a cost of only about $15 per search, typically include voluminous information going back five years on dosage, refills, and possible medical conditions. The reports also provide a numerical score predicting what a person may cost an insurer in the future. . . .
[A] 57-year-old safety consultant in the oil and gas industry, says he tried to explain that the medications weren’t for serious ailments. The blood-pressure prescription related to a minor problem his wife, Paula, had with swelling of her ankles. The antidepressant was prescribed to help her sleep—a common “off-label” treatment doctors advise for some menopausal women. But drugs for depression and other mental health conditions are often red flags to insurers. Despite his efforts to reassure Humana, the phone interview with the company representative “just went south,” Walter recounts. He and his wife remain uninsured [as of 2008].
Health-related data from a wild west of unregulated intermediaries may spread to employers and other decisionmakers, just as credit scores have migrated from the banking context to influence insurance pricing, and credit histories now influence hiring decisions. Sharona Hoffman has observed that "It is not uncommon for employers to obtain applicants' and employees' medical records. According to one source, every year, over ten million authorizations for release of medical information are signed by workers prior to the commencement of employment." She has predicted disturbing possibilities arising out of that access to data:
Existing laws, including the ADA, GINA, HIPAA, and their state counterparts, provide important assurances to applicants and employees but are insufficient to guarantee that they will suffer no ill consequences as a result of EHR disclosure to employers. Employees may be especially concerned in times of recession, knowing that financial pressures make workers with health problems particularly unattractive to employers. Employers or their hired experts may develop complex scoring algorithms based on EHRs to determine which individuals are likely to be high-risk and high-cost workers. In addition, in times of financial difficulty, limited resources may be available to implement technology and policies that will secure EHR confidentiality.
Secondary uses of health data could be a very lucrative niche for profilers of the future.
Given these possibilities, individuals should at least have the right to access and correct the health data that intermediaries have compiled about them. The FTC recognized this right, and “forced the [insurance] industry to begin disclosing the use of prescription information under . . . the Fair Credit Reporting Act. . . . Copies of prescription reports are supposed to be available to consumers at no charge under federal law.” This is a small step forward. But if the “scores” assessing individual risk are compiled according to proprietary algorithms, the consumer may still feel “in the dark,” unable to adequately influence the presentation of herself to the insurer.
As Esther Dyson has stated in another context, mysterious data flows can jeopardize individual autonomy:
The comforting thing about the kind of data that Facebook primarily deals with is that it’s public. If your friends and other people can see it, so can you.
More troubling is the data you don’t even know about – the kind of data about your online activities collected by ad networks and shared with advertisers and other marketers, and sometimes correlated with offline data from other vendors. By and large, that’s information you can’t see – what you clicked on, what you searched for, which pages you came from and went to – and neither can your friends, for the most part. But that information is sold and traded, manipulated with algorithms to classify you and to determine what ads you see, what e-mails you receive, and often what offers are made to you. Of course, some of that information could go astray.
Online advertisers already slice and dice population segments, distributing opportunities and exposure to ads through marketing discrimination. Will the "e-health revolution" bring their methods out of cyberspace and into the deadly serious business of offering employment and insurance based on estimates of health status that applicants can't understand or challenge?