As health providers and patients use more technology, new ways of addressing health care disparities are emerging. In 2009, Congress passed important federal legislation addressing the digital infrastructure for medical care, the Health Information Technology for Economic and Clinical Health (HITECH) Act. In 2010, Congress passed the Patient Protection and Affordable Care Act (PPACA), which reduced barriers to health information technology (HIT). In line with the technological spirit of both laws, this blog post focuses on online social networking as a digital health care solution for elderly Hispanics who face disparities in the care that they receive.
Hispanics in the United States are twice as likely as non-Hispanics to lack a regular primary care physician (PCP). Hispanics without a PCP tend to experience disparities in health care when compared to other patient populations. Real-time, health care-focused social networking sites (SNSs), or applications within an established SNS, can provide beneficial health care solutions for vulnerable patient populations such as elderly Hispanics. One way in which an SNS can benefit elderly Hispanics and reduce their health care disparities is by supporting the Patient-Centered Medical Home (PCMH) with digital applications. In fact, if real-time social networking transpired 1) among patients, 2) between patients and their health care providers, and 3) among health care providers, elderly Hispanics could potentially receive better care.
As the role of HIT increases, so has interest in understanding its potential role in “addressing healthcare disparities among racial and ethnic minority populations.” To properly evaluate the potential of HIT to address health care disparities, “adoption and utilization barriers must be understood.” Because this blog post is concerned with social networking sites, the discussion here focuses on social media and its emergence as “a potent resource among healthcare consumers.” Some studies have shown that “social media utilization patterns by race suggest potential opportunities to help address healthcare disparities via” increased communication between patients and physicians.
Social media has begun to infiltrate the health care system in several ways. First, entrepreneurs who understand “health care trends and consumer demands are leading creative business startups that are developing health-oriented social networks, health content aggregators, medical and wellness applications, and tools to enable health-related vertical searches (searches focused on a specific content area).” There is also a growing number of condition-specific communities such as PatientsLikeMe, QuitNet, and CureTogether.
Although HIT offers many benefits, barriers still prevent physicians from adopting it. One major benefit is that HIT can foster positive communication in “which providers share thoughts, opinions, and information by speech, in writing, or through peer professional or social networks [which have] been shown to be associated with provider health IT adoption.” Major barriers include the inability of electronic health records (EHRs) and HIT systems to communicate with each other, the impact of HIT on clinical workflows, and the absence of technical assistance for office staff and physicians. Additional barriers exist from the patient perspective: if a patient does not perceive a benefit to be gained from using technology, he or she is highly unlikely to use it. There is also the perception that patients might be too busy to incorporate HIT into their everyday lives. “Poor computer knowledge, literacy, and skills” may also be prevalent among the target populations that could benefit most from HIT. Additionally, “lack of cultural relevance as well as privacy and trust concerns all have been reported as barriers to the use of [consumer health informatics] tools and applications.” In framing technological health care solutions for a minority population such as Hispanics, it is important to consider cultural issues in any implementation, because such issues could deter use by a given patient population.
There are several proposed ways in which HIT can reduce health care disparities. For example, presenting clear and accurate patient information to a physician electronically could promote high-quality personalized care and reduce certain health care disparities. Additionally, EHRs could give the physicians who serve elderly Hispanics more accurate information and help them make better treatment decisions. The largest benefit would be the ability to connect “physicians with other [physicians or patients]…[and also] tools such as e-mail, e-consultation, e-prescribing, [which could] enable providers to connect with other healthcare professionals” in a more fluid manner.
It is important that the benefits mentioned above be implemented in communities of underserved Hispanics and other vulnerable patient populations. It is urgent that those facing the greatest health care disparities benefit from such technologies, because historically their needs have not been met. Scholars have already noted that “telemedicine, remote monitors and sensors, patient e-mail, and increasingly the Internet and social media, connect providers and healthcare systems to patients and caregivers.” The idea is that greater communication can reduce health care disparities. When dealing with a historically vulnerable patient population such as elderly Hispanics, who face various types of social issues, I believe that easier access to health providers can make a big difference in improving health care outcomes.
An HIT tool that connects providers with patients could reduce health care disparities by “enabling increased monitoring of important clinical parameters” in a way that is not currently taken advantage of for minority patients. Increased connectivity would allow physicians to stay in contact with and monitor their sickest patients through enhanced doctor-patient communication. As technology and health care merge, it is vital that vulnerable patient populations, such as elderly Hispanics, are identified so that they can be included in the technological healthcare solutions being proposed.
Felipe De Los Santos is in his last year at Seton Hall University, School of Law. Felipe is set to graduate from Seton Hall in May 2013 with a Health Law Concentration. He graduated from Connecticut College in 2007 with a B.A. in English and Economics. From 2007-2009, he worked in finance as a Consultant for ALaS consulting between New York and Delaware. During his first year of law school Felipe interned with the New York State Majority Leader (2009-10).
Presently Felipe works as a Project Manager for a New York State health care company in its Community Based Programs division. Felipe manages and develops projects that focus on chronically ill elderly patients in New York City. As part of his responsibilities, Felipe develops marketing strategies and action plans to support targeted patient populations who can benefit from managed long-term care. Currently, Felipe is involved in launching a Medicare/Medicaid Advantage Plan. Felipe’s work with vulnerable patient populations and his interest in technology have made the intersection of technology and healthcare a subject he has written about in law school. Felipe’s health reform interests include improving health care access and outcomes for vulnerable patient populations.
Felipe may be reached at email@example.com
Health information law is a very exciting field. Lawyers, doctors, and start-ups are re-thinking health care as an information industry. I’ll be speaking on privacy and fair data practices at an upcoming conference. The relationships between privacy, “big data,” and trade secrecy will merit a great deal of attention in coming years.
Software-based automation has raised living standards dramatically. It makes factories more efficient, renders vast amounts of information accessible, and daily improves quality of life in barely noticed ways. To realize these types of advances in health care, government and NGOs have begun to catalyze better data collection, retention, and analysis. Life sciences companies need to report more data on drugs and devices. Hospitals and doctors are incentivized to use electronic health records via stimulus funding and rulemaking based on the HITECH Act’s meaningful use and certification requirements.
How will traditional intellectual property laws interact with these initiatives? Will the increasing need for cooperation and sharing of information alter the landscape of trade secrecy and other IP protections that have often siloed health data? Will providers find alternative funding sources for the collection, retention, and analysis of data, as some traditional IP protections appear increasingly outdated in a world of “big data” and market-driven transparency?
Medical privacy law has focused on assuring the privacy, security, and accuracy of medical data. The post-ACA landscape will include more concern about balancing privacy, innovation, access, and cost-control. Advanced information technology has raised a number of new questions. Beyond HIPAA and HITECH regulation, consumer protection law plays an important role in these fields. (For example, the FTC recently required firms that “score” the health status of individuals based on their pharmacy records to disclose these records to scored individuals.)
Patients are opting to personalize their health records with the help of cloud computing firms; what law governs this digital migration? There is increasing concern about the role of “incidental findings” in medical research and practice; how will regulators and professional groups address them? When employers demand access to employee health records, in what ways can they use them to profile the employee?
We also need to examine the legal aspects of data portability, integrity, and accuracy. When two health records conflict, which takes priority? What is “meaningful use” of an electronic health records system, and how will regulators and vendors assure interoperability between systems? Also worth examining are innovators’ efforts to protect their health data systems using contracts, technology, trade secrecy, patents, and copyright, and “improvers’” efforts to circumvent those legal and technological barriers to openness.
Finally, what are pharmaceutical companies’ past and present strategies regarding the disclosure of their research, including non-publication of adverse results and ghostwriting of positive outcomes? Will a “reproducible research” movement, popular in the hard sciences, reach pharmaceutical firms? Insurer data will also be a target of reformers (including trade-secret protection of prices paid to hospitals, conflicts over the interpretation of disclosure requirements in the ACA, and state regulation of insurer-run doctor-rating sites). Quality improvement and pilot programs will need good provider and insurer data; how will we ensure they have them?
[Ed. Note: We are pleased to welcome Ana Liggio, Esq., to HRW. She is a health care and technology lawyer, in practice over 15 years. Prior to pursuing her LL.M. in Health Law here at Seton Hall Law, she was Director, Law Department, for Sony Electronics.]
The CMS website explains that meaningful use “means providers need to show they’re using certified EHR technology in ways that can be measured significantly in quality and in quantity.” As CMS moves toward finalizing the Meaningful Use, Stage 2 requirements, I would like to introduce the concept of “meaningful experience” as an essential corollary to that of “meaningful use.”
Meaningful experience takes the idea a step further, offering a way to evaluate and encourage both proposed and existing criteria by the value they bring to provider and healthcare-consumer stakeholders. “Meaningful use” focuses on ensuring that the financial beneficiaries of the Medicare and Medicaid EHR Incentive Program (the “Program”), namely the Certified Electronic Health Record Technology (“CEHRT”) industry and eligible healthcare providers (insofar as meaningful use bonus payments are at stake), continue to operate their EHRs in a purposeful manner; but there are additional, important stakeholders to consider. With billions of federal and state dollars earmarked for the Program and a strong interest in seeing EHR enjoy long-term success, taking a broader view of stakeholders and inserting more transparency into their experiences will help the Program thrive. Meaningful Use, Stage 2, is the perfect time to look toward ensuring meaningful experience.
The Program is in full swing, with the Centers for Medicare and Medicaid Services (“CMS”) having released the NPRM on Meaningful Use, Stage 2, in the Federal Register on March 7, 2012.
The CMS blog explains: “Today’s proposed rules focus on using EHRs to improve health and health care while reducing the burden on physicians and hospitals where possible.” With early participation rates appearing strong, CMS has been careful to keep industry groups engaged and to seek out robust commentary through the NPRM. CMS clearly wants the healthcare industry to continue up the “EHR Escalator” without anyone jumping off because they are frustrated or overwhelmed. To date, the strategy is working, as the CEHRT industry and healthcare providers appear to be embracing the Program. However, as Nicolas Terry points out in his article “Anticipating Stage Two: Assessing the Development of Meaningful Use and EMR Deployment,” ultimately, growth will have to be endogenous, fueled by innovation and consumer demand.
The comprehensive NPRM for Meaningful Use, Stage 2 demonstrates CMS’s commitment to considering the experiences and opinions of the interested industries. The ONC also asks data holders and non-data holders to take a pledge “to empower individuals to be partners in their health through health IT.” There is no doubt that the Program is making huge strides, continuing to chip away at the difficult issues of interoperability, access, privacy, and security, and pushing the United States slowly but surely toward the much higher healthcare IT standard enjoyed by many other developed nations. Moving into Stage 2, CMS seeks to enhance interoperability among different entities and to further patient involvement by requiring that patients have increased access to their health information. That being said, the ONC’s National Coordinator for Health Information Technology, Farzad Mostashari, explains that Stage 2 is meant to be more “evolutionary than revolutionary.” Importantly, Stage 2 also begins an initiative to align the requirements of the Program with other complementary, ongoing healthcare reform initiatives involving national quality measures and the development of ACOs.
Reading through the NPRM, I saw a few areas CMS could focus on to help build a self-sustaining system. First, the initial iteration of the Program was clearly written with an eye toward maximizing meaningful use for family care and general practitioners, not for other types of practices, such as pediatrics, the various specialties, and physicians whose practices entail little face-to-face patient interaction (e.g., radiologists); these practices should be given further attention. Second, while CMS provides something of a return-on-investment analysis in the NPRM, it apologetically declares it too early in the Program to provide meaningful data; CMS could use the attestation process to collect the necessary data. Finally, healthcare consumers, the taxpayers who fund this program, should be actively considered and made aware of the enhancements and improvements that comprise the Program, which promises them a more efficient, accessible, safe, and evidence-based healthcare experience; a “meaningful user” designation for CEHRT users who meet certain criteria could help providers publicize their investment in the Program and the attendant benefits it will bring to their patients. Meaningful Use, Stage 2, is the perfect time to address these issues and move the Program forward in a way that makes it self-sustaining for the long term, not because of incentive funding, but because meaningful use is providing a meaningful experience to the various EHR stakeholders.
As with early versions of the Medicare Shared Savings Plan and healthcare reform generally, the focus of the Program’s meaningful use objectives and criteria, initially at least, is on general practitioners and how they can use EHR to advance the overall wellbeing of the population. This goal is laudable, of course, but the population of eligible providers extends well beyond PCPs. Certain objectives and measures allow providers to claim an exclusion if they do not apply to their practice, thereby not penalizing practitioners for whom compliance would be unnecessary and inefficient. However, focusing on these different categories of practice could surface alternative objectives and measures. If one were to consider meaningful experience in addition to meaningful use, the attestation would ask EPs who claim exemptions to use, and possibly attest to, alternative meaningful use standards applicable to their practices. For instance, there is a proposed measure for recording the height, weight, and blood pressure of 80% of an EP’s patients as structured data. There is an available exclusion, however, for EPs who do not believe that recording such vital signs is “relevant to their scope of practice.” An EP who claims the exclusion simply gets a pass on this field during the attestation process. Alternatively, a required (or even optional) free-form response area could be provided in the attestation each time an EP claims an exclusion. As time goes on, the data collected would allow CMS to customize attestations, and CEHRT requirements as well, to different specialties, so that meaningful use translates into meaningful experience for those whose practices do not fit the general practitioner mold on which the first versions of Meaningful Use were based. Certainly the technology will allow, rather easily, for modifications where appropriate, if the effort is made to ask those in the field what would be meaningful to their practices and to encourage them to use the EHR tools available to them in such ways.
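To make the idea concrete, here is a rough sketch, in simple Python, of what an attestation record carrying such a free-form exclusion field might look like. Every field name below is invented for illustration; CMS’s actual attestation format is different.

```python
# A purely illustrative sketch of an attestation record carrying the proposed
# free-form exclusion field. Every field name here is invented; CMS's actual
# attestation format is different.
from dataclasses import dataclass

@dataclass
class MeasureAttestation:
    measure_id: str                 # e.g., the vital-signs measure
    met: bool                       # did the EP meet the threshold?
    exclusion_claimed: bool = False
    exclusion_rationale: str = ""   # the proposed free-form response
    alternative_use: str = ""       # a specialty-appropriate substitute measure

# A radiologist claiming the vital-signs exclusion might attest:
entry = MeasureAttestation(
    measure_id="vital-signs-80pct",
    met=False,
    exclusion_claimed=True,
    exclusion_rationale="No direct patient contact; vital signs not recorded.",
    alternative_use="Structured capture of radiation dose per imaging study.",
)
```

Collected at scale, responses like these would give CMS exactly the specialty-level data it currently lacks.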
Because the proposed rule is anticipated to have an annual effect of over $100 million on the economy, a Regulatory Impact Analysis (RIA) that measures costs and benefits must be performed. While CMS does a fair job of estimating the costs to providers of implementing EHR and the costs to taxpayers of funding the Program, it has not done much to quantify the benefits gleaned. The NPRM qualifies its analysis by pointing to various unknowns and a lack of “new data regarding rates of adoption or costs of implementation.” Without specific data, it estimates various “high and low” scenarios for different practice settings and ultimately concludes that “there are many positive effects of adopting EHR” as well as various benefits for society. While I tend to agree with this conclusion as a general matter, why not collect the actual data during the attestation process? Ask the EHR attesters how much their systems cost to install and to maintain. Ask the EHR attesters where the systems are adding value to their practices and for their patients. Yes, it is a leap of faith to ask these questions, because the answers may not offer a perfect picture, but they will offer an honest representation of the current state that can be addressed going forward. It is only fair to give the stakeholders an honest assessment, and it would not be difficult to collect the data. EHR is all about collecting healthcare data and crunching numbers to see trends and identify areas for improvement; let’s use those same principles to perform the same analysis on the EHR technology itself.
Finally, to assist providers who have made the investment and will continue to feed important data to the various government health databases, CMS could offer some type of certification that the providers could use in marketing their practices. For all the good that EHR is meant to do in terms of patient safety, efficiency of care and meaningful communication between patients and their providers, let’s devise a way to inform patients about which providers are running state-of-the-art practices. Providers who attest to meeting the meaningful use requirements could be offered the option of using a certified meaningful user designation and displaying a certain logo, all of which would indicate to the public that such providers are using the latest healthcare technology. For healthcare consumers who consider it important to have the ability to access their records or have their prescriptions transmitted electronically, for example, this designation would help lead them to the types of practices they desire. Assuming this is the future of healthcare and what the American public desires or will come to desire of its healthcare providers, such a tool would be useful to the providers and healthcare consumers alike.
At the end of the day, the success of the EHR program, and the value it will have brought to the US healthcare system, will be measured by the experience of healthcare providers and consumers. In the best-case scenario, there will be data showing that the EHR Program has achieved the desired results with a minimum burden placed upon providers. But what will actually entice providers to continue making “meaningful use” of the systems is an experience that they, and their patients, deem worthwhile. As such, CMS should use the attestation process and the resultant data to continuously measure actual costs and benefits and make adjustments as needed. During the attestation process, it could ask providers to suggest alternative meaningful uses for EHR when the existing measures do not apply and to volunteer cost data and their impressions of meaningfulness. Finally, CMS could give providers a way to publicize their commitment to using technology to enhance patient care. Some time and effort devoted to meaningful experience will allow meaningful use to translate into a self-sustaining, successful program.
[Ed. note: this piece originally ran on April 17, 2012, but was lost in the vagaries of cyberspace to a blog mishap. It's just too good to lose and so here enjoys a repeat performance]
Hospital readmissions for chronic diseases such as asthma, congestive heart failure, and diabetes have been estimated to account for over 80% of hospital inpatient stays. In an effort to reduce these admissions and consequently lower healthcare costs, AT&T and Intuitive Health have collaborated to pilot a home-based remote patient monitoring solution that would allow patients to spend more time at home, engaged in their own care, rather than with healthcare providers at medical facilities. Through wireless connectivity provided by AT&T, the system sends data from a patient’s unobtrusive personal health device to a secure software platform integrated into the health ecosystem through Intuitive Health’s technology, with emphasis placed on the confidential transmission of the patient’s personal information.
“Innovation is desperately needed outside the four walls of the hospital,” said Eric Rock, CEO and Founder of Intuitive Health. “In order to increase our nation’s quality of care and gain control of our healthcare spending, patients of all ages and technical ability must be given intuitive tools to improve their own health, while remaining engaged and monitored by their caregivers remotely.”
In its April 2010 position paper, “Technologies for Remote Patient Monitoring in Older Adults,” the Center for Technology and Aging hypothesized that the U.S. health care system could reduce costs by nearly $200 billion over the next 25 years if remote monitoring tools were utilized for chronic diseases. To be sure, such figures are not easily verified; the number and types of people who would choose such treatment cannot be easily predicted.
The collaboration between AT&T and Intuitive Health is not the first of its kind, and with the increasing popularity of smartphones, it is reasonable to anticipate that mobile technology will play a role in the rise of remote patient monitoring services. It is, perhaps, worthwhile to reconsider Michael Ricciardelli’s related post, written three years ago, as a way to evaluate the role technology has played, and may continue to play, in health reform.
Since HHS’s data breach notification regulations went into effect in September 2009, 385 incidents affecting 500 or more individuals have been reported to HHS, according to its website. A total of 19 million individuals have been affected by a large data breach since 2009. The regulations require a covered entity that discovers a reportable breach affecting 500 or more individuals to report the incident to the HHS Office for Civil Rights immediately. After an investigation, HHS publicly posts information about the reported incident on its website, on what has become known as the “Wall of Shame.” Of the 385 reported incidents, six separate incidents each affected a million individuals or more. In its 2011 annual report to Congress, HHS reported that covered entities notified approximately 2.4 million individuals affected by a breach in 2009 and 5.4 million the following year. This number grew in 2011, and it will likely continue to grow in 2012. To date, the largest breach took place in October 2011 at Tricare, the health insurer of American military personnel; it affected 4,901,432 individuals after storage tapes containing protected health information (PHI) were stolen from a vehicle. These numbers are staggering, but fortunately more can be done, and should be done, to prevent data breaches.
Data breaches can cause great harm to affected individuals, providers, and institutions. Individuals may experience embarrassment and harassment when sensitive health information is released, and they are vulnerable to identity theft and financial fraud if personal information such as Social Security numbers was accessed. Increasingly, institutions offer credit monitoring services to affected individuals to watch for potential fraud. Data breaches also carry a very high cost for institutions, which must spend great sums to investigate a breach and report it to HHS, the media, and the affected individuals. An institution’s or provider’s reputation can also be harmed through negative publicity and the loss of consumers. More institutions are hiring public relations teams after a breach to minimize the fallout and negative publicity. The threat of litigation and class action lawsuits following a breach is also very real: Stanford Hospital, Tricare, and Sutter Health are all facing multimillion- and billion-dollar class action lawsuits over their 2011 data breaches.
The bad news is that data breaches are impossible to predict, and it is impossible to protect against every type of breach. Unfortunately, even the strongest policies, precautions, and security measures cannot protect an entity from a hacker, a thief, or an employee’s or business associate’s honest mistake. As more providers and institutions adopt electronic health record systems and digitize their records, data breaches will continue to occur, and large breaches will be spotlighted by the media. Pursuant to the regulations, a covered entity must alert a prominent media outlet if a reported breach affects more than 500 residents of a state. Based on the events of last year alone, it is clear that the media loves to report on data breaches and will continue to do so. Hopefully this public exposure will increase accountability rather than instill fear in the public and hurt consumer confidence in the EHR movement.
The good news is that more can be done by providers and institutions to prevent harmful and costly data breaches. Data security and patient privacy should be the focus of the industry in the upcoming years because it is just as important as meaningful use certification. The benefits flowing from the Medicare incentive payments that an institution may receive under the Affordable Care Act can be canceled out in the event of a large and debilitating data breach. It would be wise for covered entities to focus on preventing data breaches as much as achieving meaningful use.
There is no easy solution to preventing breaches, but encryption is one surefire way an entity can better protect itself from a costly breach. As entities become more familiar with EHR systems and recognize the risks involved in storing and transferring PHI data, implementing encryption technology should become a top priority for each entity.
Encryption of PHI is a major step a provider or institution can take to secure its sensitive patient data. Encryption is the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key. According to HHS guidance, if an entity encrypts its data in accordance with the National Institute of Standards and Technology’s encryption standards, then any breach of the encrypted data falls within a safe harbor and does not have to be reported. This is an incredibly important safe harbor that could save an entity a great deal of money. It is shocking that more entities, especially those with the means and resources to install a qualifying encryption system, do not utilize encryption technology on their electronic devices, particularly portable devices.
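To make the definition concrete, here is a minimal sketch of encrypting a record using the open-source Python cryptography library. The patient data is invented, and a production deployment would follow NIST key-management guidance rather than this simplified flow:

```python
# A minimal sketch of symmetric encryption of a PHI record, using the
# open-source Python "cryptography" library. Illustrative only: the patient
# data is invented, and a production system would follow NIST key-management
# guidance rather than this simplified flow.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # confidential key; must be stored separately from the data
cipher = Fernet(key)

phi = b"Patient: Jane Doe | DOB: 1948-03-12 | Dx: type 2 diabetes"
token = cipher.encrypt(phi)   # ciphertext: low probability of assigning meaning without the key

assert cipher.decrypt(token) == phi   # with the key, the record is recoverable exactly
```

A laptop thief who obtains only the token, and not the key, walks away with gibberish, which is precisely why the safe harbor exists.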
Of the 385 reported breach incidents, thirty-nine percent involved a lost or stolen laptop or other portable media device containing unencrypted PHI. A report recently released by Redspin, an IT security firm, states that data breaches stemming from employees losing unencrypted devices spiked 525 percent in the last year alone. This statistic confirms that devices, including laptops, tablets and smartphones, pose a very high risk for a data breach. Redspin reported that eighty-one percent of healthcare organizations now use smartphones, iPads, and other tablets, but forty-nine percent of respondents in a recent healthcare IT poll by the Ponemon Institute said that nothing was being done to protect the data on those devices. At the very least, these reports and the statistics on HHS’s “Wall of Shame” should encourage entities to encrypt their portable electronic devices that contain sensitive PHI.
There are, of course, costs associated with adopting encryption technology in an EHR system. There are costs to install the system and to maintain it with the help of an IT expert. Encryption can also slow down the processes used in sharing information; after all, one of the main goals of an EHR system is to make it easier for providers to share health information about their patients. An entity should work with an IT expert to determine what information should be encrypted in order to maximize the efficiencies of an EHR system. Despite the costs, the money and resources spent implementing encryption technology are a smart investment for any entity with an EHR system. In a study published in 2011, the Ponemon Institute found that the cost of a data breach was $214 per compromised record, with an average total cost of $7.2 million per incident. In light of the large data breaches that have been reported, it is clear that the costs of a breach can far exceed the costs of implementing encryption technology.
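A back-of-the-envelope comparison drives the point home. The per-record figure below is Ponemon’s; the breach size and encryption budget are hypothetical assumptions:

```python
# Back-of-the-envelope comparison using the Ponemon figures cited above.
# The breach size and encryption budget are hypothetical assumptions.
records_exposed = 100_000      # a mid-sized breach
cost_per_record = 214          # Ponemon Institute, 2011
breach_cost = records_exposed * cost_per_record   # $21,400,000

encryption_cost = 500_000      # assumed installation plus several years of upkeep

print(f"Estimated breach cost:   ${breach_cost:,}")
print(f"Assumed encryption cost: ${encryption_cost:,}")
# Even a far smaller breach would dwarf this outlay, and a breach of properly
# encrypted data may not be reportable at all under the safe harbor.
```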
Under the HITECH Act and HHS’s interim final rule, encryption of health information is not mandatory. It remains to be seen whether HHS will impose a mandatory encryption policy on all devices or, at the very least, all portable devices capable of storing or transferring PHI, when it releases the final version of the data breach notification regulations sometime this year. The health care industry’s lack of encryption for patient information has drawn attention on Capitol Hill. At a November 2011 hearing before the Senate Judiciary Committee’s panel on Privacy, Technology and Law, Deven McGraw of the Center for Democracy and Technology testified that “we know from the statistics on breaches that have occurred since the notification provisions went into effect in 2009 that the healthcare industry appears to be rarely encrypting data.” At the hearing, Senator Tom Coburn, a physician himself, and Senator Al Franken, the chair of the panel, both voiced their concern over patient privacy protection and the current regulatory scheme. Senator Franken has said that he is contemplating legislation to encourage encryption by providers, although no action has been taken.
In the interim, it is reasonably clear that most, if not all, entities can benefit from implementing encryption technology when considering the costs and headaches associated with a data breach. When encryption is done properly, it has the potential of saving an entity a large sum of money, perhaps millions of dollars, in costs and fines — and that should be reason enough for entities to start taking this step in EHR technology.
In an effort to keep up with advancing technology, the Food and Drug Administration (FDA) has proposed new regulations to monitor medical smartphone applications (apps). The draft proposal states that any mobile app intended for use in performing a medical device function meets the definition of a medical device under the Federal Food, Drug, and Cosmetic Act. Specifically, these mobile medical apps must either be used as an accessory to a regulated medical device or transform a mobile platform into a regulated medical device.
To further clarify what apps will be regulated, the document notes that a mobile app is a device “when the intended use of a mobile app is for the diagnosis of disease or other conditions, or the cure, mitigation, treatment, or prevention of disease, or is intended to affect the structure or any function of the body of man.” The guidance document explains how the intended use of mobile apps can be shown by labeling claims, advertising materials, or oral or written statements by manufacturers or their representatives.
The goal of the regulations is to protect patient safety, though to date there have been no adverse events reported to the FDA. The proposal seems to be forward looking in creating a framework for mobile app manufacturers. According to the Associated Press, there are already more than 17,000 medical applications currently available.
Physicians can use mobile phones to calculate prescription dosages, review disease treatment guidelines, and explain diagnoses and procedures to patients. The FDA expects that by 2015, 500 million smartphone users will rely on health care apps.
Two medical apps have already received FDA approval for use by physicians. The first is a prenatal care app called AirStrip OB. Cleared in 2009, the app allows obstetricians to use their phones to remotely access real-time data for mothers and babies. The second app, approved earlier this year, is Mobile MIM. This app allows hospitals and doctors’ offices to send images to physicians’ mobile devices. The FDA noted that the software should not replace radiology workstations as the primary way to view medical images, but is useful when a physician has limited access to a workstation.
Opinions from within the industry vary on the new guidelines. Some feel that the regulation is both necessary and welcome. By regulating the medical app industry, the FDA is offering market players clear guidelines for continued development. Others argue that the regulations may be too far reaching. For example, medical apps to calculate prescription dosages for patients are not new and are based on accepted formulas. Smartphone apps that achieve the same goal increase efficiency and do not put patients at risk, and therefore do not merit differential treatment.
The proposed regulations raise two main concerns for patients and physicians. The first is a privacy concern, similar to the drawbacks considered for other forms of electronic health records. Transferring data between hospital systems and physician smartphones will heighten confidentiality and security concerns: once patient data is accessed on a smartphone, privacy may easily be breached should the phone be used by another person, lost, or stolen. The second concern is that the proposed rules could increase the purchase price of medical apps. App developers will likely face increased costs for filing applications and seeking legal counsel, and those costs will be passed on to end users.
The draft proposal is currently in an open comment period, and the FDA will amend the regulations after the comment period closes.
Last year I published a piece called “Beyond Innovation and Competition,” questioning the dominance of those values. Economists celebrate innovation and competition as the main source of future growth. Innovation has become the central focus of Internet law and policy. While leading commentators sharply divide on the best way to promote innovation, they routinely elevate its importance. Business writers have celebrated search engines, social networks, and tech startups as model corporations, bringing creative destruction and “disruptive innovation” in their wake. Maximum innovation is the goal, and competition is billed as the best way of achieving it. Players in the vast and dynamic tech marketplace are supposed to constantly strive to innovate in order to attract consumers away from rivals.
In the piece, I explain how both competition and innovation can be as destructive as they are constructive. There are many social values (including privacy, transparency, predictability, and stability), and companies can compete for profits in ways that erode those values. In an era of inequality and hall-of-mirrors stock market valuations, innovations of marginal or negative impact on society at large can be vastly overvalued by a stampede of fickle investors.
The shortcomings of the innovation and competition story also play out in health information technology. Stimulus legislation in 2009 provided many carrots and sticks for doctors to digitize their recordkeeping systems, ranging from bonuses now to reimbursement haircuts later this decade if they fail to implement the technology. Congress structured the incentives to encourage a competitive and innovative marketplace in health information technology. But many doctors are shying away from implementation, in part because they fear that the fast and loose ethics of the market can’t mesh with a medical culture of constant commitment to quality care.
Susan Jaffe’s article for the Center for Public Integrity examines doctors’ fears about adopting any given software suite. According to Jaffe, “570 different electronic health systems certified by private organizations for non-hospital settings may be used to qualify for the” stimulus funds. The long-term consequences of the choice make the jam-shopping examples in Barry Schwartz’s book The Paradox of Choice seem quaint:
The systems can vary in appearance, content, organization and special features. Some can be customized by users in different ways, at no cost or some cost, or not at all. Some are compatible with other systems now, eventually or, some critics say, maybe never. . . . The costs of the systems remain daunting, despite the bonuses, particularly in areas that have been hit hard by an ailing economy.
The pricetag varies widely depending on the type and size of the medical practice, whether new computers are purchased and the extent of customization, among other things. Software alone can cost from $2,000 to $10,000 per doctor. All told, the cost jumps to roughly $20,000 per doctor, according to a regional extension center consultant who advises physicians in northeast Ohio. On top of that, manufacturers charge hefty annual fees for technical support and periodic upgrades that together can amount to about 35 percent of the upfront costs. The systems are priced in a way that does not make comparison shopping “easy or necessarily valid,” said Dottie Howe, a spokeswoman for the Ohio regional extension center. There is no basic price because each company offers different components, features, options, and level of technical support. . . .
Most manufacturers will also charge the doctors to move the information in their current system to the new one. There could be extra [ongoing, monthly] charges to connect to other systems too.
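To see why doctors hesitate, consider a rough five-year cost-of-ownership calculation built from the figures in Jaffe’s reporting (the exact mix of costs is illustrative):

```python
# Rough five-year cost of ownership per doctor, built from the figures in
# Jaffe's reporting. The exact mix of costs is illustrative; real quotes
# vary widely by practice.
upfront = 20_000                   # software, hardware, and customization
annual_support = 0.35 * upfront    # hefty annual fees, ~35% of upfront costs
five_year_cost = upfront + 5 * annual_support

print(f"Five-year cost per doctor: ${five_year_cost:,.0f}")   # $55,000
# Against a maximum Medicare bonus of $44,000 per doctor, the margin is thin,
# before counting data-migration charges or lost productivity.
```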
Doctors have also been burned by sharp operators that emphasize slick salesmanship over solid service:
[T]he Southwest Family Physicians group is worried . . . They bought an electronic health record system five years ago that is now nearly obsolete. The manufacturer was taken over by another company that provides minimal technical support . . . “The salesman said ‘you’re buying a Cadillac, this is going to be the greatest thing,’ ” [one doctor] recalled. But that system can’t display an X-Ray image or send a prescription electronically to a pharmacy. “We’ve got the Model T Ford,” he said.
It does appear that regional extension centers are doing some work to keep pricing reasonable. Jaffe’s article focuses on Ohio, where five “preferred vendors” “agreed to charge prices ‘as good as or better than’ prices offered to other regional extension centers, to provide onsite assistance when a practice turns on its electronic health record system for the first time, offer technical support for at least six years, and limit annual cost increases for continuing technical support, among other things.” But consider the bizarrely proprietary nature of pricing data:
Whether the five preferred vendors offer a better deal than their non-preferred competitors is not known, because the state regional extension center doesn’t have pricing information from non-preferred vendors, said Howe, the spokeswoman for the state’s regional extension center. Pricing from the preferred vendors is confidential, she said. And despite their preferred status, the five companies do not guarantee that eligible health care providers who purchase their systems will receive the government’s bonus payments.
I discussed the troubling degree of secrecy in health care before, and I’m very sad to see it persist here. The doctors in Jaffe’s story are making reasonable demands: to be able to understand the nature of the commitment they are making, to avoid big financial losses, and not to be burned by fly-by-night operators attracted only by the government subsidy money. They want to assure that the basic health care values of access, cost-control, and quality are reflected in the software they use.
We are seeing the opening stages of a battle between a medical sector committed to maintaining its own autonomy and traditions, and a tech sector that wants to commoditize health data in as standardized a form as futures markets homogenized corn grades, or credit scores tranched residential mortgage backed securities. Commenting on the demise of Google Health, an informatics expert said that “Google is unwilling, for perfectly good business reasons, to engage in block-by-block market solutions to health-care institutions one by one, and expecting patients to actually do data entry is not a scalable and workable solution.” To be sure, the company can’t expect to make the same profit margins in the health sector as it does in the online ad business. But the “instant millions” ethos of Silicon Valley doesn’t fit well with a sector where we are in principle committed to serving everyone, regardless of ability to pay.
Economist John Van Reenen has observed that the US has a particularly innovative economy in part because our markets are so good at crushing badly run firms. It’s probably good that garden equipment suppliers, toothpaste makers, and pie bakers know they can be out of business in a month or two if they’re “off their game” for a short time. But if I had just entrusted three years of medical records to a vendor that suddenly went out of business, I’d take little comfort in the idea that a marginally better competitor had knocked it out of the market. The transition to a new vendor can be slow and costly; doctors in Jaffe’s story speak of seeing one-third to one-half fewer patients over weeks or months as they learn a new system.
At a Yale SOM health care conference in 2009, the Chief Medical Officer of a major player in the field remarked to me that choosing an HIT vendor is “like a marriage—you don’t end the relationship lightly.” I first thought that remark was self-serving. But the more one examines the HIT field, the more important it appears to get standard recordkeeping, support capabilities, and interoperability right at the outset, rather than leaving doctors to negotiate the wreckage of several generations of battling systems. Think about how chaotic online music sales seemed before iTunes. Perhaps Apple (whose iPads are already beloved by many docs) will bring a swift and highly profitable order to this field, too. I hope the ONC and other decisionmakers will effectively regulate whatever behemoth eventually emerges, vindicating the public values that competition and innovation are unlikely to promote.
Photo credits to Aleksandar Šušnjar, Jakub Halun and loki11.
I look forward to reconnecting with everyone who is attending the health law professors conference in Chicago. My presentation will be applying some of the ideas of Scott Peppet (on self-quantification and unraveling) to personal health records. I found these ideas from Peppet’s post on biometric identification particularly interesting:
The biometric technologies firm Hoyos (previously Global Rainmakers Inc.) recently announced plans to test massive deployment of iris scanners in Leon, Mexico, a city of over a million people. . . . [T]he company’s roll-out strategy is explicitly premised on the unraveling of privacy created by the negative inferences & stigma that will attach to those who choose not to participate. Criminals will automatically be scanned and entered into the database upon conviction. Jeff Carter, Chief Development Officer at Hoyos, expects law abiding citizens to participate as well, however. Some will do so for convenience, he says, and then he expects everyone to follow: “When you get masses of people opting-in, opting out does not help. Opting out actually puts more of a flag on you than just being part of the system. We believe everyone will opt-in.” (For the full interview, see Fast Company’s post on the project.)
I’ve previously looked at the limits of individualist accounts of autonomy in work on pharmaceuticals (here and here), and scholars like Robert Ahdieh are questioning individualism in law & economics generally. As Nic Terry has argued, many of the critiques of CDHC apply to PHRs, and vice versa.
As of a few years ago, “it wasn’t illegal to hire and fire people based on their smoking habits” in 21 states. I think many difficult questions will be raised in coming years by the growth of medical records of all types, and by the question of which secondary uses of them are permitted. For example, some dating sites will now verify the income and assets of their users. How soon before they (and other certification and evaluation intermediaries) start vouching for health profiles? Does law have a role in these situations? I’ll try to explore these questions, and I’ll post more details about the presentation after getting some feedback.
The Washington Post recently featured Lena Sun’s reporting on why many physicians are wary of adopting an electronic medical records system. As noted in the piece,
Many are aware that beginning this year, health-care professionals who effectively use electronic records can each receive up to $44,000 over five years through Medicare or up to $63,750 over six years through Medicaid. But to qualify, doctors must meet a host of strict criteria, including regularly using computerized records to log diagnoses and visits, ordering prescriptions and monitoring for drug interactions. And starting in 2015, those who aren’t digital risk having their Medicare reimbursements cut.
Deven McGraw, director of the health privacy project at the Center for Democracy & Technology, complains that, despite all these requirements, patient confidentiality concerns are being neglected:
But no federal regulations clearly require that doctors turn the data encryption on or prevent those who don’t do so from getting paid. . . . “This is a point of frustration,” said McGraw, who sits on an advisory group that sought unsuccessfully to prevent those who violate privacy regulations of the federal Health Insurance Portability and Accountability Act, or HIPAA, from getting incentive money.
Some older doctors may find it easier to retire than to get on board with new EMR systems. We frequently hear complaints about Luddite doctors resisting technology that has long been adopted by other sectors. But, as one commentator recently insisted, a doctor is not a bank. To get a sense of how frustrated doctors can become because of the new health IT (and the legal contracts that accompany it), check out this parody website for the faux firm Extormity. It announces a memorable experience for doctor clients/conscripts:
At the confluence of extortion and conformity lies Extormity, the electronic health records mega-corporation dedicated to offering highly proprietary, difficult to customize and prohibitively expensive healthcare IT solutions. Our flagship product, the Extormity EMR Software Suite, was recently voted “Most Complex” by readers of a leading healthcare industry publication.
I loved this description of a firm committed to maximizing the value of its intellectual property:
The Extormity EMR Software Suite is built on a proprietary software model renowned for its complexity. This proprietary platform and all of its components must be procured and implemented as a complete package we call the Extormity Bundle™ (which describes both our comprehensive package and its associated cost).
Operating the Extormity Bundle requires a phalanx of servers, which of course need to be replicated for redundancy. Fortunately, Extormity acts as a value-added reseller of these servers, which we pre-load with operating software. This allows us to mark-up the cost of the servers and charge for server configuration. In addition, the server software carries with it steep annual license fees.
Let’s hope the ONC’s ongoing regulatory process can help reduce the risk of Extormity-style raw deals for doctors. Given the recent flap over the FDA’s effective imprimatur for an extreme drug price increase, no DC agency should set in motion a process that could lead to prohibitively expensive fees for an essential aspect of health care.
X-Posted: Health Law Prof Blog.
Filed under: Electronic Medical Records, EMR, Private Insurance
[Ed. note: This is the second part (perhaps evident from the title) of a two part post. Though each could well stand on its own, the first part can be found here.]
Insurance Reporting and Classification
Reporting requirements may not seem like a notable accomplishment. Nevertheless, the trend toward monitoring the products and services offered by insurance companies is an important step toward accountability. HHS needs to impose some order, some translatable logic, on fields that have threatened to become enormously parasitic and unproductive by obscuring or masking the true nature of their commitments.
Consider the practical illegibility of the average insurance plan. A vanishingly small number of subscribers actually read such plans. A plan may have complex cost-sharing requirements that vary among in-network and out-of-network primary care doctors, specialists, surgeons, hospitals, and procedures. While a “great risk shift” makes consumers all the more responsible for their choices in health care, it’s hard to imagine anyone accurately mapping the true fiscal consequences of given disease episodes in an aggressively complex plan.
The ACA responds by setting “a minimum level of health benefits, called the essential health benefits, that must be offered by certain health plans.” As Jessica Mantel explains, the term “‘essential health benefits package’ means coverage that not only provides for the essential health benefits defined by the secretary, but also limits cost-sharing for coverage of the essential health benefits in accordance with the parameters specified in the statute.” The Cancer Action Network has applauded the ACA for promoting “more standardization in the scope and value of private health insurance coverage available.”
Timothy Jost explains:
Medical loss ratios have long been of interest primarily to investors. An insurer that could achieve a low MLR by holding down expenditures on health care for its enrollees was a good investment. . . . On November 22, 2010, the Department of Health and Human Services released its interim final rule implementing the requirements of the new section 2718 of the Public Health Services Act (added by section 10101 of the Affordable Care Act), entitled, “Bringing Down the Cost of Health Care Coverage.” This provision is usually referred to as the “medical loss ratio” (or MLR) requirement . . .
Section 2718 requires health insurers (including grandfathered but not self-insured plans) to report to HHS each year the percentage of their premium revenue that the insurer spends on 1) clinical services for enrollees, 2) “activities that improve health care quality,” and 3) all other non-claims costs, excluding federal and state taxes and licensing or regulatory fees. . . .
Jost describes in detail how the classification works, and how it is designed to encourage more responsible insurer behavior.
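In simplified form, the ratio works out as follows. The dollar figures are invented, and the actual rule adds credibility adjustments and other refinements that this sketch omits:

```python
# A simplified medical loss ratio calculation following Section 2718's three
# reporting buckets. The dollar figures are invented, and the actual rule adds
# credibility adjustments and other refinements this sketch omits.
premium_revenue = 100_000_000
taxes_and_fees = 3_000_000
clinical_claims = 78_000_000     # bucket 1: clinical services for enrollees
quality_spending = 2_000_000     # bucket 2: activities that improve care quality
# bucket 3, all other non-claims costs, is whatever remains

mlr = (clinical_claims + quality_spending) / (premium_revenue - taxes_and_fees)
print(f"MLR: {mlr:.1%}")   # 82.5%, above the ACA's 80% individual/small-group floor
```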
Setting a Standard for Electronic Medical Records
Electronic health records systems will also need to develop shared data management standards. EMR vendors have long argued that they need flexibility to innovate in order to best reflect doctors’ practices and improve the capture of medical information. However, there is a tension between untrammeled vendor innovation at any given time and the later, predictable needs of patients, doctors, insurers, and hospitals to compare their records and to transport information from one filing system to another.
One system may be able to understand “C,” “cgh,” or “koff” as “cough,” and may well code it in any way it chooses. But to integrate and to port data, all systems need to be able to translate a symptom into a commonly recognized code. Health care providers can only avoid getting “locked into” a system if they can transport their records from one vendor to another. Patients want their providers to seamlessly integrate records.
HHS rulemaking has laid the groundwork for this type of common language of medical recordkeeping. As Sharona Hoffman and Andy Podgurski explain,
To address this problem, it is necessary for all vendors to support what we will call a “common exchange representation” (“CER”) for EHRs. A CER is an artificial language for representing the information in EHRs, which has well defined syntax and semantics and is capable of unambiguously representing the information in any EHR from a typical EHR system. EHRs using the CER should be readily transmittable between EHR systems of different vendors. The CER should make it easy for vendors of EHR systems to implement a mechanism for translating accurately and efficiently between the CER and the system’s internal EHR format.
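A toy sketch shows the translation problem a CER solves. The vendor shorthand comes from the cough example above; the SNOMED CT code is offered only as one real-world candidate for such a shared vocabulary:

```python
# A toy illustration of the translation problem a common exchange
# representation solves. The vendor shorthand comes from the example above;
# the SNOMED CT code is offered only as one real-world candidate for a
# shared vocabulary.
SHARED_CODE_FOR_COUGH = "49727002"   # SNOMED CT: Cough (finding)

vendor_a_map = {"C": SHARED_CODE_FOR_COUGH}
vendor_b_map = {"cgh": SHARED_CODE_FOR_COUGH}
vendor_c_map = {"koff": SHARED_CODE_FOR_COUGH}

def export_symptom(local_term: str, local_map: dict[str, str]) -> str:
    """Translate a vendor's internal term into the shared representation."""
    return local_map[local_term]

# Any receiving system can render "49727002" in its own internal format,
# so the record ports cleanly between vendors.
assert export_symptom("koff", vendor_c_map) == export_symptom("C", vendor_a_map)
```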
There are also important opportunities for standardization in the security field:
As is true for a common exchange format, standardized security policies and mechanisms are unlikely to be adopted by vendors and providers without a regulatory mandate. In order to facilitate compliance and provide vendors with clear guidance, the regulatory mandate might incorporate, by explicit reference, some established and emerging security standards, such as the Internet Engineering Task Force’s Transport Layer Security (“TLS”) standard or its Public-Key Infrastructure (X.509) standard.
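For a sense of what such a mandate means at the implementation level, here is a minimal sketch of a TLS-protected connection using Python’s standard ssl module. The hostname is a placeholder, not a real endpoint:

```python
# A minimal sketch of a TLS-protected connection using Python's standard
# library, the kind of transport-layer safeguard such a mandate would
# reference. The hostname is a placeholder, not a real endpoint.
import socket
import ssl

context = ssl.create_default_context()   # verifies the server's X.509 certificate chain

with socket.create_connection(("ehr.example.org", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="ehr.example.org") as tls:
        print(tls.version())   # e.g., "TLSv1.2" or "TLSv1.3"
        # PHI exchanged over this socket is encrypted in transit, and the
        # server's identity has been checked against trusted certificates.
```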
The discussion can quickly become technical, and it is difficult to explore all the ins and outs of the process. But the underlying purpose is clear: to develop some standard forms of interacting in a realm where “spontaneous order” is unlikely to arise and “network power” could lead to lock-in.
Of course, there are important differences between the EHR and health insurance landscapes. Symptoms refer to conditions that are, by and large, objective. (One can even imagine ubiquitous video cameras and sensors creating something like a complete patient record (or medical life log) for patients who consent to that type of monitoring.) Insurance contracts, by contrast, do not have the same “ontological firmness.” They must contemplate vague and open-ended spells of illness.
Nevertheless, a process similar to common exchange representation is now going on in the consumer affairs office of HHS. As the Office of Consumer Information and Insurance Oversight lays ground rules for ACA implementation, it must decide on some basic questions: what counts as insurance? What is a deductible? The ultimate goal is to require insurers to convey with far more precision what services they truly cover. The health insurance and health IT landscapes will only become governable when practices are nameable, classifiable, and comparable.
X-Posted: Concurring Opinions.
I was recently listening to Health Affairs’s “Newsmaker Breakfast with Karen Pollitz.” She gave a fascinating presentation on the challenges she faces as she develops HealthCare.Gov as a portal for information about health insurance. As I noted a few years ago, health insurers can easily mislead consumers about the nature of their coverage, and disclosure charts can be very helpful.
But even disclosure charts run up against the slipperiness of language. Pollitz noted that for some plans, a “deductible” was not really a deductible; you could easily spend much more out-of-pocket on health care than the stated “deductible level” before coverage kicked in.
How can an individual make an informed choice when words lose their meaning in a tangle of qualifications and conditions? At what point does a deductible cease being a deductible? While this might seem like a relatively technical question of insurance regulation, it reflects a more general information-gathering problem that will confront regulators in coming years. Scientists could predict and control aspects of the natural world only once those aspects could be named and classified. Any successful regime of healthcare reform will depend, at a bare minimum, on a flexible yet standardized classification system that can map what health insurers are doing. Like Linnaeus patiently organizing a welter of living forms, regulators will need to taxonomize pullulating permutations of insurer practices.
The Rise of Health Care’s Middlemen
The United States leads the world in payments to private insurance providers, and the industry has extraordinary power over access to health care. In 2010, long-standing dissatisfaction with the sector culminated in the Patient Protection and Affordable Care Act (ACA). Congress rejected structural changes like a public option in favor of a complex and reticulated statutory scheme to better regulate insurers. The way health insurance companies are run has not changed dramatically, and their stock prices tended to rise as reform became more certain.
The ACA has set in motion dozens of regulatory proceedings. The government also allocated $20 billion toward equipping all medical offices with electronic health records in the 2009 stimulus bill, the American Recovery and Reinvestment Act. Health regulators must now try to catch up with technologically advanced intermediaries in the insurance and IT fields.
Immediately after the ACA passed, naysayers on both left and right complained that offices like OCIIO were unprepared for their new regulatory roles. Perhaps the most compelling case for repealing the ACA is a belief that regulatory agencies will inevitably be captured, or overwhelmed with information from far better funded attorneys and lobbyists representing insurance and IT firms.*
Nevertheless, the ACA has catalyzed one very important process: the development of an infrastructure of monitoring and reporting that will be necessary for any future informed regulation. It is shocking to consider how inadequate past oversight has been. As of 1997, the “US Department of Labor had resources to review each employer-sponsored group health plan under its jurisdiction once every 300 years.” The Bush years did little to address that shortage. Moreover, “state insurance department staff levels declined 11% in 2007 while premium volume increased 12%.” The personnel simply have not been there.
Starting essentially from scratch, Pollitz and her fellow regulators are engaging in a painstaking rebuilding of the foundations necessary for substantial regulation. Having long neglected even to closely monitor the sharp practices of health insurers, federal regulators are now beginning new programs of surveillance.**
*The latter point does appear to be valid with respect to the public record now being compiled in dozens of rulemaking processes. In rule after rule, industry comments overwhelmingly dominate public interest and academic contributions. It is sad to think that groups like the Campaign for America’s Future, or labor unions, having spent so much time getting the ACA passed, are now ceding much of the regulatory field to insurers. On the other hand, given the Administration’s recent appointments and recent McSurance waivers, who knows whether good comments would have an impact.
X-Posted: Concurring Opinions.
In an article entitled “Monitoring America,” Dana Priest and William Arkin describe an extraordinary pattern of governmental surveillance. To be sure, in the wake of the attacks of 9/11, there are important reasons to increase the government’s ability to understand threats to order. However, the persistence, replicability, and searchability of the databases now being compiled for intelligence purposes raise very difficult questions about the use and abuse of profiles, particularly in cases where health data informs the classification of individuals as threats.
First, a little background. We traditionally think of law enforcement as needing some kind of probable cause to ground or justify the pursuit of an investigation. However, with the rise of the new Information Sharing Environment (often implemented through fusion centers, which provide one-stop shopping for access to data), a much broader set of law enforcement prerogatives is emerging. Fusion centers have promoted a domestic intelligence apparatus designed not merely to solve crimes but also to generate a wide range of knowledge that could lead to the deterrence and detection of “all threats, all crimes, all hazards.”
The Department of Homeland Security has taken a number of innovative steps to deputize the monitoring of individuals, asking personnel ranging from local law enforcement to cable repairmen to hotel cleaners to be on the alert for suspicious activity. Once such activity is detected, the observer can in some cases file a persistent Suspicious Activity Report. These SARs are entered into an FBI database, and quite possibly inform many other counterterror, intelligence, and even private sector initiatives. Priest and Arkin’s story gives a sample Suspicious Activity Report and speculates about how its creation may affect the subject of the profile:
The FBI is building a vast repository controlled by people who work in a top-secret vault on the fourth floor of the J. Edgar Hoover FBI Building in Washington. This one stores the profiles of tens of thousands of Americans and legal residents who are not accused of any crime. What they have done is appear to be acting suspiciously to a town sheriff, a traffic cop or even a neighbor.
[For an example of what might go in the database, consider] Suspicious Activity Report N03821 says a local law enforcement officer observed “a suspicious subject . . . taking photographs of the Orange County Sheriff Department Fire Boat and the Balboa Ferry with a cellular phone camera.” The confidential report, marked “For Official Use Only,” noted that the subject next made a phone call, walked to his car and returned five minutes later to take more pictures. He was then met by another person, both of whom stood and “observed the boat traffic in the harbor.” Next another adult with two small children joined them, and then they all boarded the ferry and crossed the channel.
All of this information was forwarded to the Los Angeles fusion center for further investigation after the local officer ran information about the vehicle and its owner through several crime databases and found nothing. Authorities would not say what happened to it from there, but there are several paths a suspicious activity report can take:
At the fusion center, an officer would decide to either dismiss the suspicious activity as harmless or forward the report to the nearest FBI terrorism unit for further investigation. At that unit, it would immediately be entered into the Guardian database, at which point one of three things could happen:
The FBI could collect more information, find no connection to terrorism and mark the file closed, though leaving it in the database. It could find a possible connection and turn it into a full-fledged case. Or, as most often happens, it could make no specific determination, which would mean that Suspicious Activity Report N03821 would sit in limbo for as long as five years, during which time many other pieces of information about the man photographing a boat on a Sunday morning could be added to his file[.]
[That data includes] employment, financial and residential histories; multiple phone numbers; audio files; video from the dashboard-mounted camera in the police cruiser at the harbor where he took pictures; and anything else in government or commercial databases “that adds value,” as the FBI agent in charge of the database described it. That could soon include biometric data, if it existed; the FBI is working on a way to attach such information to files. Meanwhile, the bureau will also soon have software that allows local agencies to map all suspicious incidents in their jurisdiction.
Given the expansive reservoirs of data already accessible to fusion centers, I would not be surprised if they took the position that health records “add value” to the data gathering. Civil libertarians can object to many types of data gathering, but for purposes of this post, I would like to focus on healthcare data. First, to what extent can a health condition itself give rise to a Suspicious Activity Report? Second, are there any concerted efforts to deputize medical personnel to report on suspicious activity? Finally, and I believe most importantly, how is the vast store of healthcare data presently associated with individuals utilized by the data mining programs of the surveillance state?
We learn daily of troubling data gathering practices online. For example, Arvind Narayanan has described rather indiscriminate data gathering by third parties:
The Facebook “like” button is a prominent . . . example of third-party tracking not directly related to behavioral advertising. . . . Facebook can keep track of all the pages you visit that incorporate the button, whether or not you click it. Did you know, for example, that the UK National Health Services website has the like button, among other trackers, on all their disease pages?
One need only visit the Wall Street Journal’s recent series on privacy to realize that all manner of health-related data can be generated about an individual, with few restrictions imposed by HIPAA and little effective enforcement by the FTC. To take one example, consider the scraping (copying) of data at a site called PatientsLikeMe:
At 1 a.m. on May 7, the website PatientsLikeMe.com noticed suspicious activity on its “Mood” discussion board. There, people exchange highly personal stories about their emotional disorders, ranging from bipolar disease to a desire to cut themselves. It was a break-in. A new member of the site, using sophisticated software, was “scraping,” or copying, every single message off PatientsLikeMe’s private online forums.
Who knows how many incidents like this go unreported each year? Finally, the government itself is keeping a record of prescription drug use, which was apparently consulted after the Virginia Tech shooting. Law enforcement exceptions to HIPAA (and, presumably, HITECH) may give an official imprimatur to similar activities even when they involve “covered entities.”
The clash of intelligence prerogatives and health privacy always raises difficult issues. For now, I would just like to make one claim about the need for the government to be forthright about whether it is collecting health care data while profiling citizens. Such data gathering should not be what David Pozen calls a “deep secret;” that is, citizens should not be “in the dark about the fact that they are being kept in the dark.” Rather, we need to understand whether this very personal and important data is being commandeered to fight an “enemy within.”
Broader principles of fair disclosure should govern the workings of the surveillance state. People are all too eager to sign up for new health “apps” and affinity groups without any sense of how these activities and affiliations can affect their future. A lazy public/private distinction still shapes far too much consumer conduct; I hear so-called internet experts wondering why anyone would worry about data stored by a private company because “they’re not the government.” Priest and Arkin have consistently shown that the public/private distinction is evanescent at best, a confounding development in social affairs that leaves libertarians sounding like communists.
Julie Cohen’s recent article in Social Research observes that there is a much larger political economy of surveillance that has accelerated both data gathering and profiling:
Devaluation of privacy is bound up with our political economy and with our public discourse about information policy in important ways that have little or nothing to do with official conduct. . . . Flows of data are facilitated by corporate data brokers like ChoicePoint, Experian, and Acxiom. To help companies (and governments) make the most of the information they purchase, an industry devoted to “data mining” and “behavioral advertising” has arisen; firms in this industry compete with one another to develop more profitable methods of sorting and classifying individual consumers.
In the United States, a number of federal agencies have awarded multimillion dollar contracts to corporate data brokers to supply them with personal information about both citizens and foreign nationals. Privacy restrictions that limit the extent to which the government can itself collect personal information generally do not apply to such purchases at all. The government has deployed secrecy to great effect where these initiatives are concerned, with the result that we still understand too little about many of them. Legal regimes purporting to guarantee official transparency are in fact indeterminate on how much openness to require.
These processes let important decisionmakers in both the private and public sectors exist behind a “one way mirror.” Even if full transparency would compromise data gathering, citizens must know whether certain critical information (including health data) is being commandeered by the domestic intelligence apparatus.
I recently gave remarks as part of a panel at the roundtable “Personal Health Records: Understanding the Evolving Landscape,” sponsored by the Office of the National Coordinator for Health Information Technology (ONC). There were many interesting speakers, including some of the leading businesses in the PHR space and regulators from the FTC, HHS, and the California Office of Privacy Protection. The roundtable exposed the promise–and limits–of a personalized health record model. Databases may help both public health and patient care, but the many stakeholders in PHRs may have very different views about how much control patients should have over the presentation of their medical selves in everyday life.
Discussions about health records can get forbiddingly abstract and technical, but a real-world dilemma can help concretize the problem. As Lisa Wangsness’s Boston Globe article shows, at least one individual feels “burned” by his effort to quickly port past data into a PHR:
When Dave deBronkart, a tech-savvy kidney cancer survivor, tried to transfer his medical records from Beth Israel Deaconess Medical Center to Google Health, a new free service that lets patients keep all their health records in one place and easily share them with new doctors, he was stunned at what he found. Google said his cancer had spread to either his brain or spine — a frightening diagnosis deBronkart had never gotten from his doctors — and listed an array of other conditions that he never had, as far as he knew, like chronic lung disease and aortic aneurysm. A warning announced his blood pressure medication required “immediate attention.” “I wondered, ‘What are they talking about?’ ” said deBronkart . . .[He] eventually discovered the problem: Some of the information in his Google Health record was drawn from billing records, which sometimes reflect imprecise information plugged into codes required by insurers.
According to one doctor consulted by the Globe, “an inaccurate diagnosis of gastrointestinal bleeding on a heart attack patient’s personal health record could stop an emergency room doctor from administering a life-saving drug.” For the critically or chronically ill, the record is literally a life-or-death matter.
Admittedly, the level of personal control an individual has over a PHR also offers a solution to this problem. If we follow the same model as credit reporting, patients should be able to review their reports without charge, and make corrections. The Markle Foundation has done a superb job highlighting the importance of accountable health technology. But, as the Center for Democracy and Technology argues, rulemaking on EHRs will need to build in a number of consumer safeguards to assure that other stakeholder interests do not trump patients’ interests.
The CDT recommends that HHS require “PHR providers to provide opportunities for consumers to amend, correct or annotate information in a PHR,” and “to have policies for handling disputes concerning information in the PHR.” CDT expands on the obligation in these paragraphs:
Many PHRs contain data from two categories of sources: copies of information obtained from members of the traditional health system (including health care providers, insurers, etc.) and data generated or acquired by consumers themselves, whether directly entered by them, or fed into the PHR by devices or other sources that are not part of the traditional health care system (including data from a monitoring device that the consumer operates, from a commercial Web site, or from a consumer’s own health-related observations).
Policies governing disputes about the validity of data should draw a distinction between these different categories of data. With respect to copies of data that users might not be permitted to change directly (including but not limited to data that originates with members of the traditional health system), users should be given a way to attach notes or complaints to the PHR disputing the validity of the data – and the note should remain appended to the data any time it is disclosed from the PHR. (This is similar to how the HIPAA Privacy Rule treats patient amendment of data in covered entity records.) PHR vendors also should consider mechanisms for communicating patient disputes about data back to the original source for consideration.
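The mechanics CDT describes, a dispute note that stays stapled to the data wherever it goes, are simple enough to sketch. All names below are hypothetical, and a real system would persist and authenticate these records; the point is only that disclosure copies the disputes along with the data:

```python
from dataclasses import dataclass, field

# Sketch of CDT's recommendation: the patient cannot edit provider-
# sourced data directly, but can append a dispute note, and the note
# travels with the data in every disclosure. All names hypothetical.

@dataclass
class ProviderSourcedEntry:
    source: str
    content: str
    disputes: list = field(default_factory=list)

    def dispute(self, note: str) -> None:
        """Patient annotates, rather than overwrites, the entry."""
        self.disputes.append(note)

    def disclose(self) -> dict:
        """Any copy released from the PHR carries the disputes with it."""
        return {"source": self.source, "content": self.content,
                "disputes": list(self.disputes)}

entry = ProviderSourcedEntry("Billing system", "Dx: aortic aneurysm")
entry.dispute("Patient: never diagnosed; likely a billing-code artifact.")
print(entry.disclose())
```

Something like this would have let deBronkart flag the billing-code phantoms in his record without erasing what the hospital actually reported.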
Even in a world where PHRs are ubiquitous, there’s almost certainly going to be some “objective health record” in the medical system about any individual. (And, if key software engineers get their way, there will be a unique “personal health identifier” for everyone once health records systems are up and running.) So why should the integrity of PHRs matter to anyone other than the person recording them?
First, the more legible, portable, and useful PHRs are, the more they may displace other records of patient information. Emergency rooms may only have a chance to look at one health record–the one given to them by the patient they are treating.
Second, we can assume that as PHRs become a bigger part of larger employers’ cost-control programs, employers are going to want to make sure that “quantified selves” are accurately reporting their health efforts and achievements. Health reform has taken a “preventive turn,” and the ACA gives employers new latitude to reward and punish employees:
Although it prohibits insurers from charging higher premiums based on an individual’s health risks, it allows them to charge a smoker as much as 50 percent more than a nonsmoker. It also permits employers to increase rewards for participation in wellness and disease-prevention programs from 20 percent to 30 percent of the costs of insurance premiums.
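The stakes of those percentages are easy to underestimate. A quick back-of-the-envelope calculation (the $5,000 annual premium is a hypothetical figure, chosen only for illustration):

```python
# Back-of-the-envelope arithmetic for the ACA's incentive ceilings.
# The $5,000 annual premium is hypothetical, for illustration only.

premium = 5000.00

smoker_surcharge = premium * 0.50   # up to 50% more for smokers
wellness_reward = premium * 0.30    # up to 30% for program participation

print(f"Smoker could pay up to ${premium + smoker_surcharge:,.2f}")
print(f"Wellness participation can be worth ${wellness_reward:,.2f} a year")
```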
To verify participation, an employer may want access to an employee’s PHR, particularly if it is much easier for its own computer systems to read and understand than the “objective health record” existing in the health care system itself. Yet the employer may also want to ensure that the PHR is populated by materials validated by third parties (such as doctors’ offices, fitness clubs, scales, or blood sugar monitors). Presently, this is not a major issue; as Nicolas Terry warns, “sharing or exchange of data between PHRs and providers or their EMRs is as speculative as it is controversial.” However, technological advances could promote PHRs with inputs from providers, apps, and even RFID chips. What happens if the employer tries to condition participation in a wellness program on an employee’s agreement not to try to change whatever is reported by those “trusted” third parties?
The CDT suggests some principles that should guide this situation as well. It recommends that:
Employers, health plans, and others should be explicitly prohibited from requiring individuals to open PHR accounts as a condition of employment, membership, or for any other reason. PHR accounts should also not be routinely opened for consumers who do not explicitly activate them, as this can expose personal data to uses not necessarily anticipated by the consumer. Similarly, consumers should not be compelled to disclose the information held within the PHR, or whether they are using a PHR, without due process of law.
I believe these “compulsion” points should go beyond the decision to open a PHR, to the more granular rights and responsibilities associated with maintaining one. However often employers sing the praises of contract law, the truth remains that employees in today’s slack labor market have very little bargaining power. That’s one reason why Nicolas P. Terry’s recommendation of inalienable rights to control data in the PHR context was one of the most provocative and compelling comments at the roundtable.
I am not here advocating for complete autonomy of the patient over records in all contexts. As Sharona Hoffman has argued, in the realm of treatment, there are important rationales for prioritizing the independent medical judgment of professionals whose first obligation is to maintain health:
If patients are empowered to opt out of EHR use or to disallow treating physicians’ access to their records, they may lose much of the benefit of computerization. Many clinicians would continue to care for patients in ignorance of essential facts that could make the difference between appropriate and inappropriate treatment decisions. For example, it might seem at first blush that most physicians would not need access to a patient’s psychiatric records. However, a psychiatric diagnosis may help other specialists better understand the patient’s symptoms, and the patient’s complete drug list, including psychiatric drugs, is vital for purposes of safely prescribing additional medications.
Some commentators at the roundtable also offered creative solutions for the “sensitive health data” conundrum raised by Hoffman; for example, a patient could include an “envelope” in their EHR or PHR that would only be opened in case of emergency, or when authorized directly by the patient. Regardless of how one feels about this issue, outside the treatment context, it is critical for consumers to have reasonable opportunities to review, correct, and withhold their personal health records.
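The “envelope” idea can be pictured as a break-the-glass control: sensitive entries stay sealed unless the patient authorizes access or an emergency is declared, and every opening leaves an audit trail. A toy sketch, with all names hypothetical:

```python
import datetime

# Toy sketch of the "envelope" proposal: sensitive data stays sealed
# unless the patient authorizes access or an emergency is declared,
# and every opening is logged. All names are hypothetical.

class SealedEnvelope:
    def __init__(self, contents: str):
        self._contents = contents
        self.audit_log = []

    def open(self, requester: str, *, emergency: bool = False,
             patient_authorized: bool = False) -> str:
        if not (emergency or patient_authorized):
            raise PermissionError("Envelope may be opened only in an "
                                  "emergency or with patient authorization")
        self.audit_log.append(
            (datetime.datetime.utcnow().isoformat(), requester,
             "emergency" if emergency else "patient-authorized"))
        return self._contents

envelope = SealedEnvelope("Psychiatric history; full medication list")
print(envelope.open("ER physician", emergency=True))
print(envelope.audit_log)
```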
When all is said and done, people have to “buy in” to EHRs for the system to work effectively, and rational individuals are going to avoid any system where medical history can be as effective as credit history at denying them opportunities. One commentator at the roundtable said that her patients “didn’t care” about health data privacy or security; they just wanted some quick and dirty method of digitizing their records. However compelling this perspective may seem to those “on the front lines,” the perils of a “wikileaked” world should end any complacency about the use and misuse of computer records. We should avoid the temptation of letting cut-rate or subpar EHR and PHR systems develop, especially since they are likely to target the most vulnerable patients. Robust regulatory requirements can spark a race to the top for data privacy and security.
In the film Sleep Dealer, a laborer encounters a “memory recorder,” a computerized transcription machine that translates past experiences into video re-enactments. The machine occasionally blanks out as the laborer narrates his story, and its operator chides him to “be more truthful,” to hew closer to the actual truth of the matter. The film is ambiguous as to whether the machine, its operator, or the laborer himself has real access to what actually happened. In the treatment context, best practices may inevitably consign us to a messy, multi-stakeholder effort to set forth the “real truth” of a health record. However, the personal health record should be primarily a project of the person it describes, with no undue influence from the growing number of reputation raters and shapers with a pecuniary interest in particular representations of that person.
Did you know that buying generics instead of brands could hurt your credit? Or that a subscription to Hang Gliding Monthly could scare off life insurers? Or that certain employers’ access to electronic health records could lead them to classify you as “high-risk” or “high-cost”?
In all these cases, firms use “predictive analytics” to maximize profits. Consumers are the guinea pigs for these new “sciences” of the human. As Scott Peppet argues, it becomes more difficult to opt out of analytics systems as more people use them. What type of world are they leading us to?
Credit Analytics: Should Frugality be Punished?
One credit analytics company determined that buyers of cheap automotive oil were “much more likely to miss a credit-card payment” than those who paid for a brand-name oil. Spending on therapy sessions may also be a red flag. Appearing too frugal, too anxious, too spendthrift—all might lead to higher interest rates or lower credit limits. One R&D head at a credit analytics firm bragged that they consider over 300 characteristics to discover delinquency risk. He was not nearly as forthcoming about how the data is aggregated. Analyzing millions of transactions, the companies observe customers as a gardener might observe a rose garden: weeding out unpromising specimens, and giving a boost to incipient flourishers.
Many have complained about inaccuracy in these new forms of profiling, and consumers’ inability to review and correct digital dossiers collected about them. But let’s just assume that this profiling is correct, and choosing a generic really does correlate with increased credit risk. What’s the social value of this discovery? Maybe credit card companies can reduce rates infinitesimally (and increase profits) by burdening the generic buyers. But I’d be willing to bet that, for every few people whose generic purchases indicate financial trouble, there is another shopper who’s wisely frugal and increasing her chances of successfully repaying all her loans. It seems very odd to penalize the financially responsible merely because they happen to engage in an activity shared by the distressed.
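The point can be made with a toy simulation (all numbers invented): suppose generic buying correlates with default only because the financially distressed are overrepresented among generic buyers. A score keyed to the purchase still penalizes the frugal majority who repay reliably:

```python
import random

# Toy simulation (all numbers invented): generic buying correlates with
# default only because the distressed are overrepresented among generic
# buyers. A score keyed to the purchase still penalizes the frugal
# majority who repay.

random.seed(0)

population = []
for _ in range(100_000):
    distressed = random.random() < 0.10
    # Distressed shoppers buy generics more often, but so do the frugal.
    buys_generic = random.random() < (0.80 if distressed else 0.40)
    defaults = random.random() < (0.30 if distressed else 0.02)
    population.append((buys_generic, defaults))

generic = [p for p in population if p[0]]
default_rate = sum(d for _, d in generic) / len(generic)

print(f"Default rate among generic buyers: {default_rate:.1%}")
print(f"Generic buyers penalized despite repaying: {1 - default_rate:.1%}")
```

Under these invented numbers, more than nine in ten generic buyers repay in full, yet all of them bear the penalty.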
The Dream of the Perfect Profile
Ahh, predictive analysts might reply, you just oversimplify our process. We would never reduce the credit line of someone who purchases generics if that person also, say, has a subscription to Travel and Leisure, or drives a Lexus, or gives over $1,000 a year to the Republican National Committee. They’re not desperate—they’re just careful shoppers. The more information we have, the more fair and accurate we can be. (I can only propose this response, since the industry is so careful about protecting its trade secrets. But this seems like a plausible counterargument.)
Just as free speech advocates often say that the answer to “bad speech” is more or “counter” speech, predictive analysts may argue that the cure for the mistreatment of any given individual is more information about the person’s true motives or opportunities. If privacy advocates worry that certain surveillance practices will unfairly tarnish the reputation or profile of an individual, the answer is more, not less, information on that person. The more comprehensive a picture firms can develop of the individual, the better they can target their resources.
Whatever the merits of this approach, it applies to only one dimension of the credit analytics example above. Rewarding “brand buyers,” in general, is not likely to alter behavior in ways that seriously undermine someone’s quality of life. But effectively punishing those who seek therapy or marriage counseling creates a different set of concerns, showing once again why health care decisionmaking needs to be insulated from the Procrustean forces of market pressure.
Stressed by Sickness in the Risk Society
A recent article by Sharona Hoffman illuminates some problems with pervasive use of health data in predictive analytics.
Employers may obtain and process EHRs [electronic health records] for a variety of reasons. Many require applicants who have received employment offers to provide authorizations for release of medical records in order to verify the individuals’ fitness for duty. At times, employers require records for purposes of workers’ compensation claims, reasonable accommodation requests by individuals with disabilities, or Family Medical Leave Act (FMLA) requests. Employers who are self-insured also process employees’ medical data in order to pay insurance claims.
EHRs will likely provide employers with unprecedented amounts of data. . . . Employers or their hired experts may develop complex scoring algorithms based on EHRs to determine which individuals are likely to be high-risk and high-cost workers. . . . Employers with access to EHRs containing a wealth of medical information may be sorely tempted to exclude certain individuals from the workforce because of concerns about the employees’ future productivity, absenteeism, or medical costs. To disguise unlawful conduct, employers may not act immediately to withdraw a job offer or terminate an employee, but rather, decide not to promote an individual with a disability or to select her for a layoff at a later time.
In other words, predictive analytics in health can lead to more “death spirals” for the sick: lost employment, lost insurance due to that lost employment, and future inability to find work due to poor health. Hoffman’s concerns about employers sidestepping relevant regulations were reflected in today’s WSJ article on insurance profiling, too:
[G]iant data-collection firms . . . sort details of online and offline purchases to help categorize people as runners or hikers, dieters or couch potatoes. They scoop up public records such as hunting permits, boat registrations and property transfers. They run surveys designed to coax people to describe their lifestyles and health conditions. Increasingly, some gather online information, including from social-networking sites.
For insurers and data-sellers alike, the new techniques could open up a regulatory can of worms. The information sold by marketing-database firms is lightly regulated. But using it in the life-insurance application process would “raise questions” about whether the data would be subject to the federal Fair Credit Reporting Act, says Rebecca Kuehn of the Federal Trade Commission’s division of privacy and identity protection. The law’s provisions kick in when “adverse action” is taken against a person, such as a decision to deny insurance or increase rates. The law requires that people be notified of any adverse action and be allowed to dispute the accuracy or completeness of data, according to the FTC. Deloitte and the life insurers stress the databases wouldn’t be used to make final decisions about applicants. Rather, the process would simply speed up applications from people who look like good risks.
Many aspects of FCRA have been rendered irrelevant by the all-importance of credit scoring—it’s hard to care too much about one’s ability to “correct” one’s credit report if the only thing that really matters is a score whose calculation only contingently depends on any given piece of information in the report. But I had not before heard Deloitte’s assurance that information would “simply speed up” applications, and not “be used to make final decisions.” Quite the creative lawyering behind that distinction.
Relating the Real and the Digital Body
Dan Solove has written extensively on the “digital person,” and perhaps we can see predictive health analytics as an effort to create a “digital body.” As the WSJ reports, we are reaching a point where online “data can reveal nearly as much about a person as a lab analysis of their bodily fluids.” The least we can ask is for the purveyors of data-driven decisionmaking to be much clearer about how they profile individuals. Moreover, in the case of employment, we should seriously consider expanding disability discrimination laws to prevent employers from stratifying employees based on health data. Profits are important, but they shouldn’t come at the expense of sick people who already have enough problems to contend with. As HHS implements PPACA’s promotion of “wellness programs” at workplaces, it should also try to avoid the “Orwellness” of data-driven health profiling.
X-Posted: Concurring Opinions.
Computational innovation may improve health care by creating stores of data vastly superior to those used by traditional medical research. But before patients and providers “buy in,” they need to know that medical privacy will be respected. We’re a long way from assuring that, but new ideas about the proper distribution and control of data might help build confidence in the system.
William Pewen’s post “Breach Notice: The Struggle for Medical Records Security Continues” is an excellent rundown of recent controversies in the field of electronic medical records (EMR) and health information technology (HIT). As he notes,
Many in Washington have the view that the Health Insurance Portability and Accountability Act (HIPAA) functions as a protective regulatory mechanism in medicine, yet its implementation actually opened the door to compromising the principle of research consent, and in fact codified the use of personal medical data in a wide range of business practices under the guise of permitted “health care operations.” Many patients are not presented with a HIPAA notice but instead are asked to sign a combined notice and waiver that adds consents for a variety of business activities designed to benefit the provider, not the patient. In this climate, patients have been outraged to receive solicitations for purchases ranging from drugs to burial plots, while at the same time receiving care which is too often uncoordinated and unsafe. It is no wonder that many Americans take a circumspect view of health IT.
Privacy law’s consent paradigm means that, generally speaking, data dissemination is not deemed an invasion of privacy if it is consented to. The consent paradigm requires individuals to decide whether or not, at any given time, they wish to protect their privacy. Some of the brightest minds in cyberlaw have focused on innovation designed to enable such self-protection. For instance, interdisciplinary research groups have proposed “personal data vaults” to manage the emanations of sensor networks. Jonathan Zittrain’s article on “privication” proposed that the same technologies used by copyright holders to monitor or stop dissemination of works could be adopted by patients concerned about the unauthorized spread of health information.
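Neither proposal requires exotic machinery. A toy sketch of a patient-held data vault with revocable, per-recipient grants (all names hypothetical; a real system would add encryption and authentication):

```python
# Toy sketch of a "personal data vault": the patient grants and revokes
# access per recipient, instead of signing a blanket consent form.
# All names are hypothetical.

class PersonalDataVault:
    def __init__(self):
        self._records = {}
        self._grants = {}   # record name -> set of authorized recipients

    def deposit(self, name: str, data: str) -> None:
        self._records[name] = data
        self._grants.setdefault(name, set())

    def grant(self, name: str, recipient: str) -> None:
        self._grants[name].add(recipient)

    def revoke(self, name: str, recipient: str) -> None:
        self._grants[name].discard(recipient)

    def read(self, name: str, recipient: str) -> str:
        if recipient not in self._grants.get(name, set()):
            raise PermissionError(f"{recipient} has no grant for {name!r}")
        return self._records[name]

vault = PersonalDataVault()
vault.deposit("sensor_log", "glucose readings, June")
vault.grant("sensor_log", "Dr. Smith")
print(vault.read("sensor_log", "Dr. Smith"))
vault.revoke("sensor_log", "Dr. Smith")   # consent is revocable
```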
If individuals had enough time to manage their personal data the way they manage their checkbooks and gardens, perhaps the consent paradigm would be a good foundation for addressing public concerns about privacy. If applicants could easily bargain with would-be employers over privacy, or patients with hospitals, perhaps we could rely on them to protect their interests. But actual occurrences of such acts of self-assertion and self-protection are rare. Given the frequently abstract benefits that privacy and reputational integrity afford, they are often traded away for competitive economic advantage. This process further erodes societal expectations of privacy.
A collective commitment to privacy is far more valuable than a private, transactional approach that all but guarantees a race to the bottom. If such a collective commitment does not materialize, record systems will only deserve trust if they become as transparent as the patients and research subjects they profile. Given corporate assertion of trade secrecy (and even privacy rights), reciprocal transparency will not be easy to achieve. Nevertheless, repeated breaches, fraud, and data meltdowns in the US should provoke an alliance of socially responsible researchers to lobby the US government to set minimal standards of reciprocal transparency and auditing. Consumers can only trust innovators if they can understand what is being done with data. As we become “transparent citizens” (as Joel Reidenberg puts it), we should demand that the corporate, university, and governmental authors of that trend reciprocate, and become more open about the data they gather.
Fortunately, as a recent presentation by Deborah Peel reminded me, there is significant audit authority built into the recent HITECH Act, which may curb some abuses. Audits will become increasingly important as a “wild west” of health data is excavated by scrapers, marketers, and other data miners.
Consider, for instance, the following scenario: contributors to the medical website PatientsLikeMe.com found that “Nielsen Co., [a] media-research firm . . . was ‘scraping,’ or copying, every single message off PatientsLikeMe’s private online forums.” Had the virtual break-in not been detected, health attributes connected to usernames (which, in turn, can often be linked to real identities) could have spread into numerous databases. A reciprocal transparency paradigm would require anyone harboring health data to hold a certified indication of its legitimate provenance; data could not persist without that certification.
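In miniature, a provenance requirement could look like this: every copy of a record carries a keyed signature over its contents and source, and holders refuse data whose certification fails to verify. The key and names are hypothetical, and a real scheme would use public-key signatures rather than a shared secret:

```python
import hashlib
import hmac

# Miniature sketch of provenance certification: every copy of a record
# carries a keyed signature over its contents and source, and holders
# refuse data whose certification does not verify. Key and names are
# hypothetical; a real scheme would use public-key signatures.

REGISTRY_KEY = b"hypothetical-shared-key"

def certify(source: str, payload: str) -> str:
    msg = f"{source}|{payload}".encode()
    return hmac.new(REGISTRY_KEY, msg, hashlib.sha256).hexdigest()

def accept(source: str, payload: str, certificate: str) -> bool:
    return hmac.compare_digest(certify(source, payload), certificate)

cert = certify("Beth Israel Deaconess", "A1c: 5.6%")
assert accept("Beth Israel Deaconess", "A1c: 5.6%", cert)
assert not accept("scraped-forum-data", "A1c: 5.6%", cert)  # refused
```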
Unforeseen spread of inaccurate or inappropriate health data is not just a problem for those who want to avoid getting solicitations for burial plots after a sensitive appointment. Given law enforcement exceptions to medical privacy laws and regulations, it should come as little surprise that the government claims that a 2005 law authorizes it to monitor and record all prescription drug use by all citizens via so-called “Prescription Drug Monitoring Programs.” Such programs may be just the tip of an iceberg of new domestic intelligence programs that rely on private companies to act as “big brother’s little helpers.”
Whenever health data is fed into an evaluative profile of an individual, there should be safeguards in place to assure that the data is accurate, and that the resulting profile is, if at all possible, not used to harm or disadvantage the individual. Without assurances like these, we can count on continued resistance to the development of health data infrastructures.