Filed under: Electronic Medical Records, IT, Medical Journals, Medical Malpractice
If one jumbo jet crashed in the US each day for a week, we’d expect the FAA to shut down the industry until the problem was figured out. But in our health care system, roughly 250 people die each day due to preventable error. A vice president at a health care quality company says that “If we could focus our efforts on just four key areas — failure to rescue, bed sores, postoperative sepsis, and postoperative pulmonary embolism — and reduce these incidents by just 20 percent, we could save 39,000 people from dying every year.” The aviation analogy has caught on in the patient safety field, as advocate Lucian Leape noted in his classic 1994 JAMA article, “Error in Medicine.” Leape notes that airlines have become far safer by adopting redundant system designs, standardized procedures, checklists, rigid and frequently reinforced certification and testing of pilots, and extensive reporting systems. Advocates like Leape and Peter Pronovost have been pushing for the adoption of similar methods in health care for some time, and have scored some remarkable successes.
But the aviation model has its critics. The very thoughtful finance blogger Ashwin Parameswaran argues that, “by protecting system performance against single faults, redundancies allow the latent buildup of multiple faults.” While human expertise depends on an intuitive grasp, or mapping, of a situation, perhaps built up over decades of experience, technologized control systems privilege algorithms that are supposed to aggregate the best that has been thought and calculated. The technology is supposed to be the distilled essence of the insights of thousands, fixed in software. But the persons operating in the midst of it are denied the feedback that is a cornerstone of intuitive learning. Parameswaran offers several passages from James Reason’s book Human Error to document the resulting tension between our ability to accurately model systems and an intuitive understanding of them. Reason states:
[C]omplex, tightly-coupled and highly defended systems have become increasingly opaque to the people who manage, maintain and operate them. This opacity has two aspects: not knowing what is happening and not understanding what the system can do. As we have seen, automation has wrought a fundamental change in the roles people play within certain high-risk technologies. Instead of having ‘hands on’ contact with the process, people have been promoted “to higher-level supervisory tasks and to long-term maintenance and planning tasks.” In all cases, these are far removed from the immediate processing. What direct information they have is filtered through the computer-based interface. And, as many accidents have demonstrated, they often cannot find what they need to know while, at the same time, being deluged with information they do not want nor know how to interpret.
A stark choice emerges. We can either double down on redundant, tech-driven systems, or we can try to restore smaller-scale scenarios where human judgment actually stands a chance of comprehending the situation. We will need to recognize this regulatory apparatus as a “process of integrating human intelligence with artificial intelligence.” (For more on that front, the recent “We, Robot” conference at U. Miami is also of great interest.)
Another recent story emphasized the importance of filters in an era of information overload, and the need to develop better ways of processing complex information. Kerry Grens’s article “Data Diving” emphasizes that “what lies untapped beneath the surface of published clinical trial analyses could rock the world of independent review.”
[F]or the most part, [analysts] rely simply on publications in peer-reviewed journals. Such reviews are valuable to clinicians and health agencies for recommending treatment. But as several recent studies illustrate, they can be grossly limited and misleading. . . . [There is] an entire world of data that never sees the light of publication. “I have an evidence crisis,” [says Tom Jefferson of the Cochrane Collaboration]. “I’m not sure what to make of what I see in journals.” He offers an example: one publication of a Tamiflu trial was seven pages long. The corresponding clinical study report was 8,545 pages. . . .
Clinical study reports . . . are the most comprehensive descriptions of trials’ methodology and results . . . . They include details that might not make it into a published paper, such as the composition of the placebo used, the original protocol and any deviations from it, and descriptions of all the measures that were collected. But even clinical study reports include some level of synthesis. At the finest level of resolution are the raw, unabridged, patient-level data. Getting access to either set of results, outside of being trial sponsors or drug regulators, is a rarity. Robert Gibbons, the director of the Center for Health Statistics at the University of Chicago, had never seen a reanalysis of raw data by an independent team until a few years ago, when he himself was staring at the full results from Eli Lilly’s clinical trials of the blockbuster antidepressant Prozac.
There will be a growing imperative to open up all of the data as concerns about the reliability of publications continue to grow.
Since HHS’s data breach notification regulations went into effect in September 2009, 385 incidents affecting 500 or more individuals have been reported to HHS, according to its website. A total of 19 million individuals have been affected by a large data breach since 2009. The regulations require a covered entity that discovers a reportable breach affecting 500 individuals or more to report the incident to the HHS Office for Civil Rights immediately. After an investigation, HHS publicly posts information about the reported incident on its website, on what has become known as the “Wall of Shame.” Of the 385 reported incidents, six separate incidents each affected a million individuals or more. In its 2011 annual report to Congress, HHS reported that covered entities notified approximately 2.4 million individuals affected by a breach in 2009, and 5.4 million the following year. This number grew in 2011 and will likely continue to grow in 2012. To date, the largest breach took place in October 2011 at Tricare, the health insurer of American military personnel, and affected 4,901,432 individuals after storage tapes containing protected health information (PHI) were stolen from a vehicle. These numbers are staggering, but fortunately more can, and should, be done to prevent data breaches.
Data breaches can cause great harm to the affected individuals, providers, and institutions. Individuals may experience embarrassment and harassment because sensitive health information was released, and are vulnerable to identity theft and financial fraud if personal information such as Social Security numbers was accessed. Increasingly, institutions offer credit monitoring services to affected individuals to watch for potential fraud. Data breaches also carry a very high cost for institutions, which must spend great sums to investigate a breach and report it to HHS, the media, and the affected individuals. An institution’s or provider’s reputation can also be harmed through negative publicity and the loss of consumers, and more institutions are hiring public relations teams after a breach to minimize the fallout. The threat of litigation and class action lawsuits following a breach is also very real: Stanford Hospital, Tricare, and Sutter Health are all facing multimillion- and billion-dollar class action lawsuits for their 2011 data breaches.
The bad news is that data breaches are impossible to predict, and it is impossible to protect against every type of breach. Even the strongest policies, precautions, and security measures cannot protect an entity from a hacker, a thief, or an employee’s or business associate’s honest mistake. As more providers and institutions adopt electronic health record systems and digitize their records, data breaches will continue to occur, and large breaches will be spotlighted by the media. Pursuant to the regulations, a covered entity must alert a prominent media outlet if a reported breach affects more than 500 residents of a state. Based on the events of last year alone, it is clear that the media loves to report on data breaches and will continue to do so. Hopefully this public exposure will increase accountability to the public rather than instill fear and hurt consumer confidence in the EHR movement.
The good news is that providers and institutions can do more to prevent harmful and costly data breaches. Data security and patient privacy should be a focus of the industry in the upcoming years, because they are just as important as meaningful use certification. The Medicare incentive payments an institution may receive under the HITECH Act can be canceled out in the event of a large and debilitating data breach. It would be wise for covered entities to focus on preventing data breaches as much as on achieving meaningful use.
There is no easy solution to preventing breaches, but encryption is one surefire way an entity can better protect itself from a costly breach. As entities become more familiar with EHR systems and recognize the risks involved in storing and transferring PHI data, implementing encryption technology should become a top priority for each entity.
Encryption of PHI is a major step a provider or institution can take to secure its sensitive patient data. Encryption is the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key. According to HHS guidance, if an entity encrypts its data in accordance with the National Institute of Standards and Technology standards for encryption, then any breach of the encrypted data falls within a safe harbor and does not have to be reported. This is an incredibly important safe harbor that could save an entity a great deal of money. It is surprising how many entities, even those with the means and resources to install a qualifying encryption system, fail to use encryption technology on their electronic devices, especially portable devices.
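To make the safe-harbor definition concrete, here is a minimal sketch of what NIST-conformant encryption of a single record might look like in practice. It is illustrative only: the third-party `cryptography` package, the sample record, and the in-memory key handling are my assumptions, not anything specified in the HHS guidance, and a real deployment would also need key management, rotation, and access controls.

```python
# Minimal sketch: encrypting a (hypothetical) PHI record with
# AES-256-GCM, an approved mode under NIST SP 800-38D, using the
# third-party "cryptography" package. Illustrative only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice, held in a key-management system
aesgcm = AESGCM(key)

record = b"Patient: Jane Doe; DOB: 1970-01-01; Dx: hypertension"  # hypothetical PHI
nonce = os.urandom(12)  # 96-bit nonce, unique per encryption
ciphertext = aesgcm.encrypt(nonce, record, None)

# Without the key, there is "a low probability of assigning meaning"
# to the ciphertext -- the regulatory standard quoted above.
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == record
```

The key, not the algorithm, is the secret: if a laptop holding only ciphertext is stolen, the safe harbor can apply, but a key stored alongside the data defeats the whole exercise.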
Of the 385 reported breach incidents, thirty-nine percent involved a lost or stolen laptop or other portable media device containing unencrypted PHI. A report recently released by Redspin, an IT security firm, states that data breaches stemming from employees losing unencrypted devices spiked 525 percent in the last year alone. This statistic confirms that devices, including laptops, tablets and smartphones, pose a very high risk for a data breach. Redspin reported that eighty-one percent of healthcare organizations now use smartphones, iPads, and other tablets, but forty-nine percent of respondents in a recent healthcare IT poll by the Ponemon Institute said that nothing was being done to protect the data on those devices. At the very least, these reports and the statistics on HHS’s “Wall of Shame” should encourage entities to encrypt their portable electronic devices that contain sensitive PHI.
There are, of course, costs associated with adopting encryption technology in an EHR system: installing and maintaining the system with the help of an IT expert, for one. Encryption can also slow down the sharing of information, and one of the main goals of an EHR system is to make it easier for providers to share health information about their patients. An entity should work with an IT expert to determine what information should be encrypted in order to preserve the efficiencies of an EHR system. Despite the costs, the money and resources spent implementing encryption technology are a smart investment for any entity with an EHR system. In a study published in 2011, the Ponemon Institute found that the cost of a data breach was $214 per compromised record, and that the average total cost of a breach was $7.2 million. In light of the large data breaches that have been reported, it is clear that the costs of a breach can far exceed the costs of implementing encryption technology.
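A quick back-of-the-envelope calculation shows why that comparison favors encryption. The per-record figure comes from the Ponemon study cited above; the breach size and the encryption budget are hypothetical numbers chosen for illustration.

```python
# Breach-cost arithmetic from the Ponemon figures cited above.
# The record count and encryption budget are hypothetical.
cost_per_record = 214        # Ponemon Institute, 2011 study
records_exposed = 50_000     # hypothetical mid-size breach
breach_cost = cost_per_record * records_exposed

encryption_budget = 250_000  # hypothetical install + support outlay

print(f"Estimated breach cost: ${breach_cost:,}")  # $10,700,000
print(f"Encryption budget:     ${encryption_budget:,}")
```

Even a breach a tenth that size would, on these figures, cost several times the assumed encryption outlay.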
Under the HITECH Act and HHS’s interim final rule, encryption of health information is not mandatory. It remains to be seen whether HHS will impose a mandatory encryption policy on all devices or, at the very least, all portable devices capable of storing or transferring PHI, when it releases the final version of the data breach notification regulations sometime this year. The health care industry’s lack of encryption for patient information has drawn attention on Capitol Hill. At a November 2011 hearing before the Senate Judiciary Committee’s panel on Privacy, Technology and Law, Deven McGraw of the Center for Democracy and Technology testified that “we know from the statistics on breaches that have occurred since the notification provisions went into effect in 2009 that the healthcare industry appears to be rarely encrypting data.” At the hearing, Senator Tom Coburn, a physician himself, and Senator Al Franken, the chair of the panel, both voiced their concern over patient privacy protection and the current regulatory scheme. Senator Franken has said that he is contemplating legislation to encourage encryption by providers, although no action has been taken.
In the interim, it is reasonably clear that most, if not all, entities can benefit from implementing encryption technology when considering the costs and headaches associated with a data breach. When encryption is done properly, it has the potential of saving an entity a large sum of money, perhaps millions of dollars, in costs and fines — and that should be reason enough for entities to start taking this step in EHR technology.
Last year I published a piece called “Beyond Innovation and Competition,” questioning the dominance of those values. Economists celebrate innovation and competition as the main source of future growth. Innovation has become the central focus of Internet law and policy. While leading commentators sharply divide on the best way to promote innovation, they routinely elevate its importance. Business writers have celebrated search engines, social networks, and tech startups as model corporations, bringing creative destruction and “disruptive innovation” in their wake. Maximum innovation is the goal, and competition is billed as the best way of achieving it. Players in the vast and dynamic tech marketplace are supposed to constantly strive to innovate in order to attract consumers away from rivals.
In the piece, I explain how both competition and innovation can be as destructive as they are constructive. There are many social values (including privacy, transparency, predictability, and stability), and companies can compete for profits in ways that erode those values. In an era of inequality and hall-of-mirrors stock market valuations, innovations of marginal or negative impact on society at large can be vastly overvalued by a stampede of fickle investors.
The shortcomings of the innovation and competition story also play out in health information technology. Stimulus legislation in 2009 provided many carrots and sticks for doctors to digitize their recordkeeping systems, ranging from bonuses now to reimbursement haircuts later this decade if they fail to implement the technology. Congress structured the incentives to encourage a competitive and innovative marketplace in health information technology. But many doctors are shying away from implementation, in part because they fear that the fast and loose ethics of the market can’t mesh with a medical culture of constant commitment to quality care.
Susan Jaffe’s article for the Center for Public Integrity examines doctors’ fears about adopting any given software suite. According to Jaffe, “570 different electronic health systems certified by private organizations for non-hospital settings may be used to qualify for the” stimulus funds. The long-term consequences of the choice make the jam-shopping examples in Barry Schwartz’s book The Paradox of Choice seem quaint:
The systems can vary in appearance, content, organization and special features. Some can be customized by users in different ways, at no cost or some cost, or not at all. Some are compatible with other systems now, eventually or, some critics say, maybe never. . . . The costs of the systems remain daunting, despite the bonuses, particularly in areas that have been hit hard by an ailing economy.
The price tag varies widely depending on the type and size of the medical practice, whether new computers are purchased, and the extent of customization, among other things. Software alone can cost from $2,000 to $10,000 per doctor. All told, the cost jumps to roughly $20,000 per doctor, according to a regional extension center consultant who advises physicians in northeast Ohio. On top of that, manufacturers charge hefty annual fees for technical support and periodic upgrades that together can amount to about 35 percent of the upfront costs. The systems are priced in a way that does not make comparison shopping “easy or necessarily valid,” said Dottie Howe, a spokeswoman for the Ohio regional extension center. There is no basic price because each company offers different components, features, options, and levels of technical support. . . .
Most manufacturers will also charge the doctors to move the information in their current system to the new one. There could be extra [ongoing, monthly] charges to connect to other systems too.
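Putting the figures Jaffe quotes together yields a rough per-doctor total-cost sketch. The upfront amount and the 35 percent support fee come from the excerpt above; the five-year horizon is my assumption.

```python
# Rough five-year cost-of-ownership per doctor, from the figures
# quoted above: ~$20,000 upfront, plus annual support/upgrade fees
# of about 35% of upfront costs. The horizon is an assumption.
upfront = 20_000
annual_support = upfront * 35 // 100  # about $7,000 per year
years = 5                             # hypothetical planning horizon
total = upfront + annual_support * years

print(f"Five-year cost per doctor: ${total:,}")  # $55,000
```

Note that recurring fees quickly dominate the purchase price, which is one reason the choice of vendor is so hard to undo.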
Doctors have also been burned by sharp operators that emphasize slick salesmanship over solid service:
[T]he Southwest Family Physicians group is worried . . . They bought an electronic health record system five years ago that is now nearly obsolete. The manufacturer was taken over by another company that provides minimal technical support . . . “The salesman said ‘you’re buying a Cadillac, this is going to be the greatest thing,’ ” [one doctor] recalled. But that system can’t display an X-Ray image or send a prescription electronically to a pharmacy. “We’ve got the Model T Ford,” he said.
It does appear that regional extension centers are doing some work to keep pricing reasonable. Jaffe’s article focuses on Ohio, where five “preferred vendors” “agreed to charge prices ‘as good as or better than’ prices offered to other regional extension centers, to provide onsite assistance when a practice turns on its electronic health record system for the first time, offer technical support for at least six years, and limit annual cost increases for continuing technical support, among other things.” But consider the bizarrely proprietary nature of pricing data:
Whether the five preferred vendors offer a better deal than their non-preferred competitors is not known because the state regional extension center doesn’t have pricing information from non-preferred vendors, said Howe, the spokeswoman for the state’s regional extension center. Pricing from the preferred vendors are confidential, she said. And despite their preferred status, the five companies do not guarantee that eligible health care providers who purchase their systems will receive the government’s bonus payments.
I discussed the troubling degree of secrecy in health care before, and I’m very sad to see it persist here. The doctors in Jaffe’s story are making reasonable demands: to be able to understand the nature of the commitment they are making, to avoid big financial losses, and not to be burned by fly-by-night operators attracted only by the government subsidy money. They want to assure that the basic health care values of access, cost-control, and quality are reflected in the software they use.
We are seeing the opening stages of a battle between a medical sector committed to maintaining its own autonomy and traditions, and a tech sector that wants to commoditize health data in as standardized a form as futures markets homogenized corn grades, or credit scores tranched residential mortgage-backed securities. Commenting on the demise of Google Health, an informatics expert said that “Google is unwilling, for perfectly good business reasons, to engage in block-by-block market solutions to health-care institutions one by one, and expecting patients to actually do data entry is not a scalable and workable solution.” To be sure, the company can’t expect to make the same profit margins in the health sector as it does in the online ad business. But the “instant millions” ethos of Silicon Valley doesn’t fit well with a sector where we are in principle committed to serving everyone, regardless of ability to pay.
Economist John Van Reenen has observed that the US has a particularly innovative economy in part because our markets are so good at crushing badly run firms. It’s probably good that garden equipment suppliers, toothpaste makers, and pie bakers know they can be out of business in a month or two if they’re “off their game” for a short time. But if I just entrusted three years of medical records to a vendor who suddenly went out of business, I’d take little comfort in the idea that a marginally better competitor had knocked it out of the market. The transition to a new vendor can be slow and costly—doctors in Jaffe’s story speak of seeing one-third to one-half fewer patients over the weeks or months they spend learning a new system.
At a Yale SOM Health Care conference in 2009, the chief medical officer of a major player in the field remarked to me that choosing an HIT vendor is “like a marriage—you don’t end the relationship lightly.” I first thought that remark was self-serving. But the more one examines the HIT field, the more important it appears to get standard recordkeeping, support capabilities, and interoperability right at the outset, rather than leaving doctors to negotiate the wreckage of several generations of battling systems. Think about how chaotic online music sales seemed before iTunes. Perhaps Apple (whose iPads are already beloved by many docs) will bring a swift and highly profitable order to this field, too. I hope the ONC and other decisionmakers will effectively regulate whatever behemoth eventually emerges, vindicating the public values that competition and innovation are unlikely to promote.
Photo credits to Aleksandar Šušnjar, Jakub Halun and loki11.
I look forward to reconnecting with everyone who is attending the health law professors conference in Chicago. My presentation will be applying some of the ideas of Scott Peppet (on self-quantification and unraveling) to personal health records. I found these ideas from Peppet’s post on biometric identification particularly interesting:
The biometric technologies firm Hoyos (previously Global Rainmakers Inc.) recently announced plans to test massive deployment of iris scanners in Leon, Mexico, a city of over a million people. . . . [T]he company’s roll-out strategy is explicitly premised on the unraveling of privacy created by the negative inferences & stigma that will attach to those who choose not to participate. Criminals will automatically be scanned and entered into the database upon conviction. Jeff Carter, Chief Development Officer at Hoyos, expects law abiding citizens to participate as well, however. Some will do so for convenience, he says, and then he expects everyone to follow: “When you get masses of people opting-in, opting out does not help. Opting out actually puts more of a flag on you than just being part of the system. We believe everyone will opt-in.” (For the full interview, see Fast Company’s post on the project.)
I’ve previously looked at the limits of individualist accounts of autonomy in work on pharmaceuticals (here and here), and scholars like Robert Ahdieh are questioning individualism in law & economics generally. As Nic Terry has argued, many of the critiques of CDHC apply to PHRs, and vice versa.
As of a few years ago, “it wasn’t illegal to hire and fire people based on their smoking habits” in 21 states. The growth of medical records of all types will raise many difficult questions in coming years, including which secondary uses of them should be permitted. For example, some dating sites will now verify the income and assets of their users. How soon before they (and other certification and evaluation intermediaries) start vouching for health profiles? Does law have a role in these situations? I’ll try to explore these questions, and I’ll post more details about the presentation after getting some feedback.
The Washington Post recently featured Lena Sun’s reporting on why many physicians are wary of adopting an electronic medical records system. As noted in the piece,
Many are aware that beginning this year, health-care professionals who effectively use electronic records can each receive up to $44,000 over five years through Medicare or up to $63,750 over six years through Medicaid. But to qualify, doctors must meet a host of strict criteria, including regularly using computerized records to log diagnoses and visits, ordering prescriptions and monitoring for drug interactions. And starting in 2015, those who aren’t digital risk having their Medicare reimbursements cut.
Deven McGraw, director of the health privacy project at the Center for Democracy & Technology, complains that, despite all these requirements, patient confidentiality concerns are being neglected:
But no federal regulations clearly require that doctors turn the data encryption on or prevent those who don’t do so from getting paid. . . . “This is a point of frustration,” said McGraw, who sits on an advisory group that sought unsuccessfully to prevent those who violate privacy regulations of the federal Health Insurance Portability and Accountability Act, or HIPAA, from getting incentive money.
Some older doctors may find it easier to retire than to get on board with new EMR systems. We frequently hear complaints about Luddite doctors resisting technology that has long been adopted by other sectors. But, as one commentator recently insisted, a doctor is not a bank. To get a sense of how frustrated doctors can become because of the new health IT (and the legal contracts that accompany it), check out this parody website for the faux firm Extormity. It announces a memorable experience for doctor clients/conscripts:
At the confluence of extortion and conformity lies Extormity, the electronic health records mega-corporation dedicated to offering highly proprietary, difficult to customize and prohibitively expensive healthcare IT solutions. Our flagship product, the Extormity EMR Software Suite, was recently voted “Most Complex” by readers of a leading healthcare industry publication.
I loved this description of a firm committed to maximizing the value of its intellectual property:
The Extormity EMR Software Suite is built on a proprietary software model renowned for its complexity. This proprietary platform and all of its components must be procured and implemented as a complete package we call the Extormity Bundle™ (which describes both our comprehensive package and its associated cost).
Operating the Extormity Bundle requires a phalanx of servers, which of course need to be replicated for redundancy. Fortunately, Extormity acts as a value-added reseller of these servers, which we pre-load with operating software. This allows us to mark-up the cost of the servers and charge for server configuration. In addition, the server software carries with it steep annual license fees.
Let’s hope the ONC’s ongoing regulatory process can help reduce the risk of Extormity-style raw deals for doctors. Given the recent flap over the FDA’s effective imprimatur for an extreme drug price increase, no DC agency should set in motion a process that could lead to prohibitively expensive fees for an essential aspect of health care.
X-Posted: Health Law Prof Blog.