Filed under: Electronic Medical Records, IT, Medical Journals, Medical Malpractice
If one jumbo jet crashed in the US each day for a week, we’d expect the FAA to shut down the industry until the problem was solved. But in our health care system, roughly 250 people die each day from preventable error. A vice president at a health care quality company says that “If we could focus our efforts on just four key areas — failure to rescue, bed sores, postoperative sepsis, and postoperative pulmonary embolism — and reduce these incidents by just 20 percent, we could save 39,000 people from dying every year.” The aviation analogy has caught on in health care, as patient safety advocate Lucian Leape noted in his classic 1994 JAMA article, “Error in Medicine.” Leape observes that airlines have become far safer by adopting redundant system designs, standardized procedures, checklists, rigorous and frequently reinforced certification and testing of pilots, and extensive reporting systems. Advocates like Leape and Peter Pronovost have been pressing for the adoption of similar methods in health care for some time, and have scored some remarkable successes.
But the aviation model has its critics. The very thoughtful finance blogger Ashwin Parameswaran argues that, “by protecting system performance against single faults, redundancies allow the latent buildup of multiple faults.” While human expertise depends on an intuitive grasp, or mapping, of a situation, perhaps built up over decades of experience, technologized control systems privilege algorithms that are supposed to aggregate the best that has been thought and calculated. The technology is supposed to be the distilled essence of the insights of thousands, fixed in software. But the people operating in its midst are denied the feedback that is a cornerstone of intuitive learning. Parameswaran offers several passages from James Reason’s book Human Error to document the resulting tension between our ability to accurately model systems and our intuitive understanding of them. Reason states:
[C]omplex, tightly-coupled and highly defended systems have become increasingly opaque to the people who manage, maintain and operate them. This opacity has two aspects: not knowing what is happening and not understanding what the system can do. As we have seen, automation has wrought a fundamental change in the roles people play within certain high-risk technologies. Instead of having ‘hands on’ contact with the process, people have been promoted “to higher-level supervisory tasks and to long-term maintenance and planning tasks.” In all cases, these are far removed from the immediate processing. What direct information they have is filtered through the computer-based interface. And, as many accidents have demonstrated, they often cannot find what they need to know while, at the same time, being deluged with information they do not want nor know how to interpret.
A stark choice emerges. We can either double down on redundant, tech-driven systems, or we can try to restore smaller-scale scenarios in which human judgment actually stands a chance of comprehending the situation. Either way, we will need to recognize this regulatory apparatus as a “process of integrating human intelligence with artificial intelligence.” (For more on that front, the recent “We, Robot” conference at U. Miami is also of great interest.)
Another recent story emphasized the importance of filters in an era of information overload, and the need to develop better ways of processing complex information. Kerry Grens’s article “Data Diving” emphasizes that “what lies untapped beneath the surface of published clinical trial analyses could rock the world of independent review.”
[F]or the most part, [analysts] rely simply on publications in peer-reviewed journals. Such reviews are valuable to clinicians and health agencies for recommending treatment. But as several recent studies illustrate, they can be grossly limited and misleading. . . . [There is] an entire world of data that never sees the light of publication. “I have an evidence crisis,” [says Tom Jefferson of the Cochrane Collaboration]. “I’m not sure what to make of what I see in journals.” He offers an example: one publication of a Tamiflu trial was seven pages long. The corresponding clinical study report was 8,545 pages. . . .
Clinical study reports . . . are the most comprehensive descriptions of trials’ methodology and results . . . . They include details that might not make it into a published paper, such as the composition of the placebo used, the original protocol and any deviations from it, and descriptions of all the measures that were collected. But even clinical study reports include some level of synthesis. At the finest level of resolution are the raw, unabridged, patient-level data. Getting access to either set of results, outside of being trial sponsors or drug regulators, is a rarity. Robert Gibbons, the director of the Center for Health Statistics at the University of Chicago, had never seen a reanalysis of raw data by an independent team until a few years ago, when he himself was staring at the full results from Eli Lilly’s clinical trials of the blockbuster antidepressant Prozac.
As concerns about the reliability of publications continue to grow, so will the imperative to open up all of the underlying data.
Since HHS’s data breach notification regulations went into effect in September 2009, 385 incidents affecting 500 or more individuals have been reported to HHS, according to its website. In total, 19 million individuals have been affected by a large data breach since 2009. The regulations require a covered entity that discovers a reportable breach affecting 500 or more individuals to report the incident to the HHS Office for Civil Rights immediately. After an investigation, HHS publicly posts information about the reported incident on its website, on what has become known as the “Wall of Shame.” Of the 385 reported incidents, six separate incidents each affected a million individuals or more. In its 2011 annual report to Congress, HHS reported that covered entities notified approximately 2.4 million individuals affected by a breach in 2009, and 5.4 million the following year. The number grew in 2011 and will likely continue to grow in 2012. To date, the largest breach took place in October 2011 at Tricare, the health insurer of American military personnel; it affected 4,901,432 individuals after storage tapes containing protected health information (PHI) were stolen from a vehicle. These numbers are staggering, but more can, and should, be done to prevent data breaches.
Data breaches can cause great harm to the affected individuals, providers, and institutions. Individuals may experience embarrassment and harassment when sensitive health information is released, and are vulnerable to identity theft and financial fraud if personal information such as Social Security numbers is accessed. Increasingly, institutions offer credit monitoring services to affected individuals to watch for potential fraud. Data breaches also carry a very high cost for institutions, which must spend great sums to investigate a breach and report it to HHS, the media, and the affected individuals. An institution’s or provider’s reputation can be harmed by negative publicity and the loss of consumers, and more institutions are hiring public relations teams after a breach to minimize the fallout. The threat of litigation and class action lawsuits following a breach is also very real: Stanford Hospital, Tricare, and Sutter Health are all facing multimillion- and billion-dollar class action lawsuits over their 2011 data breaches.
The bad news is that data breaches are impossible to predict, and it is impossible to protect against every type of breach. Even the strongest policies, precautions, and security measures cannot protect an entity from a hacker, a thief, or an honest mistake by an employee or business associate. As more providers and institutions adopt electronic health record systems and digitize their records, data breaches will continue to occur, and large breaches will be spotlighted by the media. Under the regulations, a covered entity must alert a prominent media outlet if a reported breach affects more than 500 residents of a state. Based on the events of last year alone, it is clear that the media loves to report on data breaches and will continue to do so. Hopefully this public exposure will increase accountability rather than instill fear in the public and hurt consumer confidence in the EHR movement.
The good news is that providers and institutions can do more to prevent harmful and costly data breaches. Data security and patient privacy should be a focus of the industry in the coming years, because they are just as important as meaningful use certification. The benefits flowing from the Medicare incentive payments an institution may receive under the HITECH Act can be canceled out by a single large and debilitating data breach. It would be wise for covered entities to focus on preventing data breaches as much as on achieving meaningful use.
There is no easy way to prevent every breach, but encryption is one of the surest steps an entity can take to protect itself from a costly one. As entities become more familiar with EHR systems and recognize the risks involved in storing and transferring PHI, implementing encryption technology should become a top priority.
Encrypting PHI is a major step a provider or institution can take to secure its sensitive patient data. Encryption is the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key. According to HHS guidance, if an entity encrypts its data in accordance with the National Institute of Standards and Technology (NIST) standards for encryption, then any breach of the encrypted data falls within a safe harbor and does not have to be reported. This is an incredibly important safe harbor that could save an entity a great deal of money. It is surprising that more entities, especially those with the means and resources to install a qualifying encryption system, do not encrypt their electronic devices, particularly portable ones.
Of the 385 reported breach incidents, thirty-nine percent involved a lost or stolen laptop or other portable media device containing unencrypted PHI. A report recently released by Redspin, an IT security firm, states that data breaches stemming from employees losing unencrypted devices spiked 525 percent in the last year alone. This statistic confirms that devices, including laptops, tablets and smartphones, pose a very high risk for a data breach. Redspin reported that eighty-one percent of healthcare organizations now use smartphones, iPads, and other tablets, but forty-nine percent of respondents in a recent healthcare IT poll by the Ponemon Institute said that nothing was being done to protect the data on those devices. At the very least, these reports and the statistics on HHS’s “Wall of Shame” should encourage entities to encrypt their portable electronic devices that contain sensitive PHI.
There are, of course, costs associated with adopting encryption technology in an EHR system: installing it, and maintaining it with the help of an IT expert. Encryption can also slow down the sharing of information; after all, one of the main goals of an EHR system is to make it easier for providers to share health information about their patients. An entity should work with an IT expert to determine what information should be encrypted in order to preserve the efficiencies of an EHR system. Despite these costs, the money and resources spent implementing encryption are a smart investment for any entity with an EHR system. In a study published in 2011, the Ponemon Institute found that a data breach costs $214 per compromised record, and that the average total cost of a breach is $7.2 million. In light of the large data breaches that have been reported, it is clear that the costs of a breach can far exceed the costs of implementing encryption.
Under the HITECH Act and HHS’s interim final rule, encryption of health information is not mandatory. It remains to be seen whether HHS will impose a mandatory encryption policy on all devices or, at the very least, all portable devices capable of storing or transferring PHI, when it releases the final version of the data breach notification regulations sometime this year. The health care industry’s lack of encryption for patient information has drawn attention on Capitol Hill. At a November 2011 hearing before the Senate Judiciary Committee’s panel on Privacy, Technology and Law, Deven McGraw of the Center for Democracy and Technology testified that “we know from the statistics on breaches that have occurred since the notification provisions went into effect in 2009 that the healthcare industry appears to be rarely encrypting data.” At the hearing, Senator Tom Coburn, a physician himself, and Senator Al Franken, the chair of the panel, both voiced their concern over patient privacy protection and the current regulatory scheme. Senator Franken has said that he is contemplating legislation to encourage encryption by providers, although no action has been taken.
In the interim, it is reasonably clear that most, if not all, entities can benefit from implementing encryption when weighing the costs and headaches associated with a data breach. Done properly, encryption has the potential to save an entity a large sum of money, perhaps millions of dollars, in costs and fines — and that should be reason enough for entities to take this step in EHR technology.
Last year I published a piece called “Beyond Innovation and Competition,” questioning the dominance of those values. Economists celebrate innovation and competition as the main sources of future growth. Innovation has become the central focus of Internet law and policy. While leading commentators sharply divide on the best way to promote innovation, they routinely elevate its importance. Business writers have celebrated search engines, social networks, and tech startups as model corporations, bringing creative destruction and “disruptive innovation” in their wake. Maximum innovation is the goal, and competition is billed as the best way of achieving it. Players in the vast and dynamic tech marketplace are supposed to constantly strive to innovate in order to attract consumers away from rivals.
In the piece, I explain how both competition and innovation can be as destructive as they are constructive. There are many social values (including privacy, transparency, predictability, and stability), and companies can compete for profits in ways that erode those values. In an era of inequality and hall-of-mirrors stock market valuations, innovations of marginal or negative impact on society at large can be vastly overvalued by a stampede of fickle investors.
The shortcomings of the innovation and competition story also play out in health information technology. Stimulus legislation in 2009 provided many carrots and sticks for doctors to digitize their recordkeeping systems, ranging from bonuses now to reimbursement haircuts later this decade if they fail to implement the technology. Congress structured the incentives to encourage a competitive and innovative marketplace in health information technology. But many doctors are shying away from implementation, in part because they fear that the fast and loose ethics of the market can’t mesh with a medical culture of constant commitment to quality care.
Susan Jaffe’s article for the Center for Public Integrity examines doctors’ fears about adopting any given software suite. According to Jaffe, “570 different electronic health systems certified by private organizations for non-hospital settings may be used to qualify for the” stimulus funds. The long-term consequences of the choice make the jam-shopping examples in Barry Schwartz’s book The Paradox of Choice seem quaint:
The systems can vary in appearance, content, organization and special features. Some can be customized by users in different ways, at no cost or some cost, or not at all. Some are compatible with other systems now, eventually or, some critics say, maybe never. . . . The costs of the systems remain daunting, despite the bonuses, particularly in areas that have been hit hard by an ailing economy.
The price tag varies widely depending on the type and size of the medical practice, whether new computers are purchased, and the extent of customization, among other things. Software alone can cost from $2,000 to $10,000 per doctor. All told, the cost jumps to roughly $20,000 per doctor, according to a regional extension center consultant who advises physicians in northeast Ohio. On top of that, manufacturers charge hefty annual fees for technical support and periodic upgrades that together can amount to about 35 percent of the upfront costs. The systems are priced in a way that does not make comparison shopping “easy or necessarily valid,” said Dottie Howe, a spokeswoman for the Ohio regional extension center. There is no basic price, because each company offers different components, features, options, and levels of technical support. . . .
Most manufacturers will also charge the doctors to move the information in their current system to the new one. There could be extra [ongoing, monthly] charges to connect to other systems too.
Doctors have also been burned by sharp operators that emphasize slick salesmanship over solid service:
[T]he Southwest Family Physicians group is worried . . . They bought an electronic health record system five years ago that is now nearly obsolete. The manufacturer was taken over by another company that provides minimal technical support . . . “The salesman said ‘you’re buying a Cadillac, this is going to be the greatest thing,’ ” [one doctor] recalled. But that system can’t display an X-Ray image or send a prescription electronically to a pharmacy. “We’ve got the Model T Ford,” he said.
It does appear that regional extension centers are doing some work to keep pricing reasonable. Jaffe’s article focuses on Ohio, where five “preferred vendors” “agreed to charge prices ‘as good as or better than’ prices offered to other regional extension centers, to provide onsite assistance when a practice turns on its electronic health record system for the first time, offer technical support for at least six years, and limit annual cost increases for continuing technical support, among other things.” But consider the bizarrely proprietary nature of pricing data:
Whether the five preferred vendors offer a better deal than their non-preferred competitors is not known, because the state regional extension center doesn’t have pricing information from non-preferred vendors, said Howe, the spokeswoman for the state’s regional extension center. Pricing from the preferred vendors is confidential, she said. And despite their preferred status, the five companies do not guarantee that eligible health care providers who purchase their systems will receive the government’s bonus payments.
I have discussed the troubling degree of secrecy in health care before, and I’m very sad to see it persist here. The doctors in Jaffe’s story are making reasonable demands: to understand the nature of the commitment they are making, to avoid big financial losses, and not to be burned by fly-by-night operators attracted only by the government subsidy money. They want to ensure that the basic health care values of access, cost control, and quality are reflected in the software they use.
We are seeing the opening stages of a battle between a medical sector committed to maintaining its own autonomy and traditions, and a tech sector that wants to commoditize health data in as standardized a form as futures markets homogenized corn grades, or credit scores tranched residential mortgage-backed securities. Commenting on the demise of Google Health, an informatics expert said that “Google is unwilling, for perfectly good business reasons, to engage in block-by-block market solutions to health-care institutions one by one, and expecting patients to actually do data entry is not a scalable and workable solution.” To be sure, the company can’t expect to make the same profit margins in the health sector as it does in the online ad business. But the “instant millions” ethos of Silicon Valley doesn’t fit well with a sector where we are in principle committed to serving everyone, regardless of ability to pay.
Economist John Van Reenen has observed that the US has a particularly innovative economy in part because our markets are so good at crushing badly run firms. It’s probably good that garden equipment suppliers, toothpaste makers, and pie bakers know they can be out of business in a month or two if they’re “off their game” for a short time. But if I had just entrusted three years of medical records to a vendor who suddenly went out of business, I’d take little comfort in the idea that a marginally better competitor had knocked it out of the market. The transition to a new vendor can be slow and costly: doctors in Jaffe’s story speak of seeing one-third to one-half fewer patients over weeks or months as they learn a new system.
At a Yale SOM health care conference in 2009, the chief medical officer of a major player in the field remarked to me that choosing an HIT vendor is “like a marriage — you don’t end the relationship lightly.” At first I thought the remark self-serving. But the more one examines the HIT field, the more important it appears to get standard recordkeeping, support capabilities, and interoperability right at the outset, rather than leaving doctors to negotiate the wreckage of several generations of battling systems. Think about how chaotic online music sales seemed before iTunes. Perhaps Apple (whose iPads are already beloved by many docs) will bring a swift and highly profitable order to this field, too. I hope the ONC and other decisionmakers will effectively regulate whatever behemoth eventually emerges, vindicating the public values that competition and innovation are unlikely to promote.
Photo credits to Aleksandar Šušnjar, Jakub Halun and loki11.
I look forward to reconnecting with everyone who is attending the health law professors conference in Chicago. My presentation will be applying some of the ideas of Scott Peppet (on self-quantification and unraveling) to personal health records. I found these ideas from Peppet’s post on biometric identification particularly interesting:
The biometric technologies firm Hoyos (previously Global Rainmakers Inc.) recently announced plans to test massive deployment of iris scanners in Leon, Mexico, a city of over a million people. . . . [T]he company’s roll-out strategy is explicitly premised on the unraveling of privacy created by the negative inferences & stigma that will attach to those who choose not to participate. Criminals will automatically be scanned and entered into the database upon conviction. Jeff Carter, Chief Development Officer at Hoyos, expects law abiding citizens to participate as well, however. Some will do so for convenience, he says, and then he expects everyone to follow: “When you get masses of people opting-in, opting out does not help. Opting out actually puts more of a flag on you than just being part of the system. We believe everyone will opt-in.” (For the full interview, see Fast Company’s post on the project.)
I’ve previously looked at the limits of individualist accounts of autonomy in work on pharmaceuticals (here and here), and scholars like Robert Ahdieh are questioning individualism in law & economics generally. As Nic Terry has argued, many of the critiques of CDHC apply to PHRs, and vice versa.
As of a few years ago, “it wasn’t illegal to hire and fire people based on their smoking habits” in 21 states. I think the growth of medical records of all types will raise many difficult questions in coming years, including which secondary uses of them are permitted. For example, some dating sites will now verify the income and assets of their users. How soon before they (and other certification and evaluation intermediaries) start vouching for health profiles? Does law have a role in these situations? I’ll try to explore these questions, and I’ll post more details about the presentation after getting some feedback.
The Washington Post recently featured Lena Sun’s reporting on why many physicians are wary of adopting an electronic medical records system. As noted in the piece,
Many are aware that beginning this year, health-care professionals who effectively use electronic records can each receive up to $44,000 over five years through Medicare or up to $63,750 over six years through Medicaid. But to qualify, doctors must meet a host of strict criteria, including regularly using computerized records to log diagnoses and visits, ordering prescriptions and monitoring for drug interactions. And starting in 2015, those who aren’t digital risk having their Medicare reimbursements cut.
Deven McGraw, director of the health privacy project at the Center for Democracy & Technology, complains that, despite all these requirements, patient confidentiality concerns are being neglected:
But no federal regulations clearly require that doctors turn the data encryption on or prevent those who don’t do so from getting paid. . . . “This is a point of frustration,” said McGraw, who sits on an advisory group that sought unsuccessfully to prevent those who violate privacy regulations of the federal Health Insurance Portability and Accountability Act, or HIPAA, from getting incentive money.
Some older doctors may find it easier to retire than to get on board with new EMR systems. We frequently hear complaints about Luddite doctors resisting technology that has long been adopted by other sectors. But, as one commentator recently insisted, a doctor is not a bank. To get a sense of how frustrated doctors can become because of the new health IT (and the legal contracts that accompany it), check out this parody website for the faux firm Extormity. It announces a memorable experience for doctor clients/conscripts:
At the confluence of extortion and conformity lies Extormity, the electronic health records mega-corporation dedicated to offering highly proprietary, difficult to customize and prohibitively expensive healthcare IT solutions. Our flagship product, the Extormity EMR Software Suite, was recently voted “Most Complex” by readers of a leading healthcare industry publication.
I loved this description of a firm committed to maximizing the value of its intellectual property:
The Extormity EMR Software Suite is built on a proprietary software model renowned for its complexity. This proprietary platform and all of its components must be procured and implemented as a complete package we call the Extormity Bundle™ (which describes both our comprehensive package and its associated cost).
Operating the Extormity Bundle requires a phalanx of servers, which of course need to be replicated for redundancy. Fortunately, Extormity acts as a value-added reseller of these servers, which we pre-load with operating software. This allows us to mark-up the cost of the servers and charge for server configuration. In addition, the server software carries with it steep annual license fees.
Let’s hope the ONC’s ongoing regulatory process can help reduce the risk of Extormity-style raw deals for doctors. Given the recent flap over the FDA’s effective imprimatur for an extreme drug price increase, no DC agency should set in motion a process that could lead to prohibitively expensive fees for an essential aspect of health care.
X-Posted: Health Law Prof Blog.
Filed under: Electronic Medical Records, Private Insurance
As ACA implementation lumbers ahead, and challenges to it slouch toward the Supremes, the U.S. health care system’s arbitrary old ways continue to mystify and frustrate. Consider this story on one person’s quest to obtain insurance:
Most employees assume that if they lose their job and the health coverage that comes along with it, they’ll be able to purchase insurance somewhere. . . .My husband, teenage daughter and I were all active and healthy, and I naïvely thought getting health insurance would be simple. . . .
Then the first letter arrived — denied. . . .What were these pre-existing conditions that put us into high-risk categories? For me, it was a corn on my toe for which my podiatrist had recommended an in-office procedure. My daughter was denied because she takes regular medication for a common teenage issue. My husband was denied because his ophthalmologist had identified a slow-growing cataract. Basically, if there is any possible procedure in your future, insurers will deny you. . . .
As I filled out more applications, I discovered a critical error in my strategy. The first question was “Have you ever been denied health insurance?” Now my answer was yes, giving the new companies reason to be wary of my application. I learned too late that the best tactic is to apply simultaneously to as many companies as possible, so that you don’t have to admit to a denial.
As was recently reported, “50 to 129 million (19 to 50 percent of) non-elderly Americans have some type of pre-existing health condition.” The “health care market” is sending a strong signal: don’t step out of the system if you have any continuing need for even minor care.
But what’s more worrisome are the types of information circulating about you that you aren’t even aware of. Consider this story from Businessweek about the profiling of insurance applicants by third-party intermediaries:
Most consumers and even many insurance agents are unaware that Humana, UnitedHealth Group, Aetna, Blue Cross plans, and other insurance giants have ready access to applicants’ prescription histories. These online reports, available in seconds from a pair of little-known intermediary companies at a cost of only about $15 per search, typically include voluminous information going back five years on dosage, refills, and possible medical conditions. The reports also provide a numerical score predicting what a person may cost an insurer in the future. . . .
[A] 57-year-old safety consultant in the oil and gas industry, says he tried to explain that the medications weren’t for serious ailments. The blood-pressure prescription related to a minor problem his wife, Paula, had with swelling of her ankles. The antidepressant was prescribed to help her sleep—a common “off-label” treatment doctors advise for some menopausal women. But drugs for depression and other mental health conditions are often red flags to insurers. Despite his efforts to reassure Humana, the phone interview with the company representative “just went south,” Walter recounts. He and his wife remain uninsured [as of 2008].
Health-related data from a wild west of unregulated intermediaries may spread to employers and other decisionmakers, just as credit scores migrated from banking into insurance pricing, and credit histories now influence hiring. Sharona Hoffman has observed that “It is not uncommon for employers to obtain applicants’ and employees’ medical records. According to one source, every year, over ten million authorizations for release of medical information are signed by workers prior to the commencement of employment.” She has predicted disturbing possibilities arising out of that access to data:
Existing laws, including the ADA, GINA, HIPAA, and their state counterparts, provide important assurances to applicants and employees but are insufficient to guarantee that they will suffer no ill consequences as a result of EHR disclosure to employers. Employees may be especially concerned in times of recession, knowing that financial pressures make workers with health problems particularly unattractive to employers. Employers or their hired experts may develop complex scoring algorithms based on EHRs to determine which individuals are likely to be high-risk and high-cost workers. In addition, in times of financial difficulty, limited resources may be available to implement technology and policies that will secure EHR confidentiality.
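The kind of opaque “scoring algorithm” Hoffman warns about can be pictured in a few lines. Everything below is invented for illustration: the feature names, the weights, and the score correspond to no real insurer’s or employer’s model.

```python
# Invented illustration of an opaque EHR-based risk score: a weighted
# sum over features mined from prescription histories. The features
# and weights are made up; no real insurer's model is depicted.

RISK_WEIGHTS = {
    "antidepressant_rx": 2.5,   # mental-health drugs treated as "red flags"
    "blood_pressure_rx": 1.0,
    "refill_gaps": 0.5,         # missed refills read as risk
}

def risk_score(features):
    """Higher score = predicted to be a higher-cost worker or enrollee."""
    return sum(RISK_WEIGHTS.get(name, 0.0) * value
               for name, value in features.items())

applicant = {"antidepressant_rx": 1, "blood_pressure_rx": 1, "refill_gaps": 3}
print(risk_score(applicant))  # 2.5 + 1.0 + 1.5 = 5.0
```

The point of the sketch is how little the scored person can see: the weights are proprietary, and a benign fact (an off-label sleep aid, say) is indistinguishable, inside the sum, from the condition it supposedly flags.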
Secondary uses of health data could be a very lucrative niche for profilers of the future.
Given these possibilities, individuals should at least have the right to access and correct the health data that intermediaries have compiled about them. The FTC recognized this right, and “forced the [insurance] industry to begin disclosing the use of prescription information under . . . the Fair Credit Reporting Act. . . . Copies of prescription reports are supposed to be available to consumers at no charge under federal law.” This is a small step forward. But if the “scores” assessing individual risk are compiled according to proprietary algorithms, the consumer may still feel “in the dark,” unable to adequately influence the presentation of herself to the insurer.
As Esther Dyson has stated in another context, mysterious data flows can jeopardize individual autonomy:
The comforting thing about the kind of data that Facebook primarily deals with is that it’s public. If your friends and other people can see it, so can you.
More troubling is the data you don’t even know about – the kind of data about your online activities collected by ad networks and shared with advertisers and other marketers, and sometimes correlated with offline data from other vendors. By and large, that’s information you can’t see – what you clicked on, what you searched for, which pages you came from and went to – and neither can your friends, for the most part. But that information is sold and traded, manipulated with algorithms to classify you and to determine what ads you see, what e-mails you receive, and often what offers are made to you. Of course, some of that information could go astray.
Online advertisers already slice and dice population segments, distributing opportunities and ad exposure through marketing discrimination. Will the “e-health revolution” bring their methods out of cyberspace and into the deadly serious business of offering employment and insurance based on estimates of health status that applicants can’t understand or challenge?
Filed under: Electronic Medical Records, EMR, Private Insurance
[Ed. note: This is the second part (perhaps evident from the title) of a two part post. Though each could well stand on its own, the first part can be found here.]
Insurance Reporting and Classification
Reporting requirements may not seem like a notable accomplishment. Nevertheless, the trend toward monitoring the products and services offered by insurance companies is an important step toward accountability. HHS needs to impose some order, some translatable logic, on fields that have threatened to become enormously parasitic and unproductive by masking the true nature of their commitments.
Consider the practical illegibility of the average insurance plan. A vanishingly small number of subscribers actually read such plans. A plan may have complex cost-sharing requirements that vary among in-network and out-of-network primary care doctors, specialists, surgeons, hospitals, and procedures. While a “great risk shift” makes consumers all the more responsible for their choices in health care, it’s hard to imagine anyone accurately mapping the true fiscal consequences of given disease episodes in an aggressively complex plan.
The ACA responds by setting “a minimum level of health benefits, called the essential health benefits, that must be offered by certain health plans.” As Jessica Mantel explains, the term “‘essential health benefits package’ means coverage that not only provides for the essential health benefits defined by the secretary, but also limits cost-sharing for coverage of the essential health benefits in accordance with the parameters specified in the statute.” The Cancer Action Network has applauded the ACA for promoting “more standardization in the scope and value of private health insurance coverage available.”
Medical loss ratios have long been of interest primarily to investors. An insurer that could achieve a low MLR by holding down expenditures on health care for its enrollees was a good investment. . . . On November 22, 2010, the Department of Health and Human Services released its interim final rule implementing the requirements of the new section 2718 of the Public Health Services Act (added by section 10101 of the Affordable Care Act), entitled, “Bringing Down the Cost of Health Care Coverage.” This provision is usually referred to as the “medical loss ratio” (or MLR) requirement . . .
Section 2718 requires health insurers (including grandfathered but not self-insured plans) to report to HHS each year, the percentage of their premium revenue that the insurer spends on 1) clinical services for enrollees, 2) “activities that improve health care quality,” and 3) all other non-claims costs, excluding federal and state taxes and licensing or regulatory fees. . . .
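The reporting categories in section 2718 lend themselves to a back-of-the-envelope calculation. The sketch below is a toy version only: the dollar figures are invented, and the actual rule includes credibility adjustments and other refinements omitted here.

```python
# Toy medical loss ratio (MLR) calculation following the section 2718
# reporting categories. Real MLR reporting includes credibility
# adjustments and other details omitted from this sketch.

def medical_loss_ratio(premiums, clinical_claims, quality_spend, taxes_and_fees):
    """Share of adjusted premium revenue spent on care and quality."""
    adjusted_premiums = premiums - taxes_and_fees  # taxes and fees excluded
    return (clinical_claims + quality_spend) / adjusted_premiums

# Hypothetical insurer: $100M premiums, $78M clinical claims,
# $4M on quality-improvement activities, $5M in taxes and fees.
mlr = medical_loss_ratio(100e6, 78e6, 4e6, 5e6)
print(f"MLR: {mlr:.1%}")  # 86.3%
```

On these invented numbers, the insurer spends roughly 86 cents of every adjusted premium dollar on care and quality, with the remainder going to administration and profit.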
Jost describes in detail how the classification works, and how it is designed to encourage more responsible insurer behavior.
Setting a Standard for Electronic Medical Records
Electronic health records systems will also need to develop shared data management standards. EMR vendors long argued that they needed flexibility to innovate in order to best reflect doctors’ practices and improve the capture of medical information. However, there is a tension between untrammeled vendor innovation at any given moment and the later, predictable needs of patients, doctors, insurers, and hospitals to compare their records and to transport information from one filing system to another.
One system may be able to understand “C,” “cgh,” or “koff” as “cough,” and may well code it in any way it chooses. But to integrate and to port data, all systems need to be able to translate a symptom into a commonly recognized code. Health care providers can only avoid getting “locked into” a system if they can transport their records from one vendor to another. Patients want their providers to seamlessly integrate records.
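The “cough” example amounts to a translation table from each vendor’s local shorthand into one shared vocabulary. A minimal sketch follows, with an invented common code standing in for whatever identifier a real standard (SNOMED, ICD, or a future CER) would assign:

```python
# Minimal sketch of symptom normalization for data portability.
# "SYM-COUGH-001" is an invented code, not a real SNOMED/ICD identifier.

LOCAL_TO_COMMON = {
    "c": "SYM-COUGH-001",
    "cgh": "SYM-COUGH-001",
    "koff": "SYM-COUGH-001",
}

def to_common_code(local_entry):
    """Translate one system's shorthand into the shared exchange code."""
    normalized = local_entry.strip().lower()
    if normalized not in LOCAL_TO_COMMON:
        raise ValueError(f"no common code mapped for {local_entry!r}")
    return LOCAL_TO_COMMON[normalized]

print(to_common_code("Koff"))  # SYM-COUGH-001
```

Each vendor keeps its internal quirks; portability only requires that every system can emit and ingest the common form.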
HHS rulemaking has laid the groundwork for this type of common language of medical recordkeeping. As Sharona Hoffman and Andy Podgurski explain,
To address this problem, it is necessary for all vendors to support what we will call a “common exchange representation” (“CER”) for EHRs. A CER is an artificial language for representing the information in EHRs, which has well defined syntax and semantics and is capable of unambiguously representing the information in any EHR from a typical EHR system. EHRs using the CER should be readily transmittable between EHR systems of different vendors. The CER should make it easy for vendors of EHR systems to implement a mechanism for translating accurately and efficiently between the CER and the system’s internal EHR format.
There are also important opportunities for standardization in the security field:
As is true for a common exchange format, standardized security policies and mechanisms are unlikely to be adopted by vendors and providers without a regulatory mandate. In order to facilitate compliance and provide vendors with clear guidance, the regulatory mandate might incorporate, by explicit reference, some established and emerging security standards, such as the Internet Engineering Task Force’s Transport Layer Security (“TLS”) standard or its Public-Key Infrastructure (X.509) standard.
The discussion can quickly become technical, and it is difficult to explore all the ins and outs of the process. But the underlying purpose is clear: to develop some standard forms of interacting in a realm where “spontaneous order” is unlikely to arise and “network power” could lead to lock-in.
Of course, there are important differences between the EHR and health insurance landscapes. Symptoms refer to conditions that are, by and large, objective. (One can even imagine ubiquitous video cameras and sensors creating something like a complete patient record (or medical life log) for patients who consent to that type of monitoring.) Insurance contracts, by contrast, do not have the same “ontological firmness.” They must contemplate vague and open-ended spells of illness.
Nevertheless, a process similar to common exchange representation is now going on in the consumer affairs office of HHS. As the Office of Consumer Information and Insurance Oversight lays ground rules for ACA implementation, it must decide on some basic questions: what counts as insurance? What is a deductible? The ultimate goal is to require insurers to convey with far more precision what services they truly cover. The health insurance and health IT landscapes will only become governable when practices are nameable, classifiable, and comparable.
X-Posted: Concurring Opinions.
Filed under: Electronic Medical Records, EMR, Private Insurance
I was recently listening to Health Affairs’ “Newsmaker Breakfast with Karen Pollitz.” She gave a fascinating presentation on the challenges she faces as she develops HealthCare.Gov as a portal for information about health insurance. As I noted a few years ago, health insurers can easily mislead consumers about the nature of their coverage, and disclosure charts can be very helpful.
But even disclosure charts run up against the slipperiness of language. Pollitz noted that for some plans, a “deductible” was not really a deductible; you could easily spend much more out-of-pocket on health care than the stated “deductible level” before coverage kicked in.
How can an individual make an informed choice when words lose their meaning in a tangle of qualifications and conditions? At what point does a deductible cease being a deductible? While this might seem like a relatively technical question of insurance regulation, it reflects a more general information-gathering problem that will confront regulators in coming years. Scientists could only predict and control aspects of the natural world when they could be named and classified. Any successful regime of healthcare reform will depend, at a bare minimum, on a flexible yet standardized classification system that can map what health insurers are doing. Like Linnaeus patiently organizing a welter of living forms, regulators will need to taxonomize pullulating permutations of insurer practices.
The Rise of Health Care’s Middlemen
The United States leads the world in payments to private insurance providers. The industry has extraordinary power over access to health care. In 2010, long-standing dissatisfaction with the sector culminated in the Patient Protection and Affordable Care Act (ACA). Congress rejected changes like a public option in favor of a complex and reticulated statutory scheme to better regulate insurers. There have not been dramatic changes in the way that health insurance companies are run, and their stock prices tended to rise as reform became more certain.
The ACA has set in motion dozens of regulatory proceedings. The government also allocated $20 billion toward equipping all medical offices with electronic health records in the 2009 stimulus bill, the American Recovery and Reinvestment Act. Health regulators must now try to catch up with technologically advanced intermediaries in insurance and IT fields.
Immediately after the ACA passed, naysayers on both left and right complained that divisions like OCIIO were unprepared for their new regulatory roles. Perhaps the most compelling case for repealing the ACA is a belief that regulatory agencies will inevitably be captured, or overwhelmed with information from far better funded attorneys and lobbyists representing insurance and IT firms.*
Nevertheless, the ACA has catalyzed one very important process: the development of an infrastructure of monitoring and reporting that will be necessary for any future informed regulation. It’s shocking to consider how inadequate past reviews were here. As of 1997, the “US Department of Labor had resources to review each employer-sponsored group health plan under its jurisdiction once every 300 years.” The Bush years did not significantly address that shortage. Moreover, “state insurance department staff levels declined 11% in 2007 while premium volume increased 12%.” The personnel simply haven’t been around.
Starting essentially from scratch, Pollitz and her fellow regulators are engaging in a painstaking rebuilding of the foundations necessary for substantial regulation. Having long neglected even to closely monitor the sharp practices of health insurers, federal regulators are now beginning new programs of surveillance.**
*The latter point does appear to be valid with respect to the public record now being compiled in dozens of rulemaking processes. In rule after rule, industry comments overwhelmingly dominate public interest or academic contributions. It’s sad to think that groups like Campaign for America’s Future, or labor unions, having spent so much time getting the ACA passed, are now ceding much of the regulatory field to insurers. On the other hand, given the Administration’s recent appointments, and recent McSurance waivers, who knows whether good comments would have an impact.
X-Posted: Concurring Opinions.
I recently gave remarks as part of a panel at the roundtable “Personal Health Records: Understanding the Evolving Landscape,” sponsored by the Office of the National Coordinator for Health Information Technology (ONC). There were many interesting speakers, including some of the leading businesses in the PHR space and regulators from the FTC, HHS, and the California Office of Privacy Protection. The roundtable exposed the promise, and the limits, of a personalized health record model. Databases may help both public health and patient care, but the many stakeholders in PHRs may have very different views about how much control patients should have over the presentation of their medical selves in everyday life.
Discussions about health records can get forbiddingly abstract and technical, but a real-world dilemma can help concretize the problem. As Lisa Wangsness’s Boston Globe article shows, at least one individual feels “burned” by his effort to quickly port past data into a PHR:
When Dave deBronkart, a tech-savvy kidney cancer survivor, tried to transfer his medical records from Beth Israel Deaconess Medical Center to Google Health, a new free service that lets patients keep all their health records in one place and easily share them with new doctors, he was stunned at what he found. Google said his cancer had spread to either his brain or spine — a frightening diagnosis deBronkart had never gotten from his doctors — and listed an array of other conditions that he never had, as far as he knew, like chronic lung disease and aortic aneurysm. A warning announced his blood pressure medication required “immediate attention.” “I wondered, ‘What are they talking about?’ ” said deBronkart . . .[He] eventually discovered the problem: Some of the information in his Google Health record was drawn from billing records, which sometimes reflect imprecise information plugged into codes required by insurers.
According to one doctor consulted by the Globe, “an inaccurate diagnosis of gastrointestinal bleeding on a heart attack patient’s personal health record could stop an emergency room doctor from administering a life-saving drug.” For the critically or chronically ill, the record is literally a life-or-death matter.
Admittedly, the level of personal control an individual has over a PHR also offers a solution to this problem. If we follow the same model as credit reporting, patients should be able to review their reports without charge, and make corrections. The Markle Foundation has done a superb job highlighting the importance of accountable health technology. But, as the Center for Democracy and Technology argues, rulemaking on EHRs will need to build in a number of consumer safeguards to assure that other stakeholder interests do not trump patients’ interests.
The CDT recommends that HHS require “PHR providers to provide opportunities for consumers to amend, correct or annotate information in a PHR,” and “to have policies for handling disputes concerning information in the PHR.” CDT expands on the obligation in these paragraphs:
Many PHRs contain data from two categories of sources: copies of information obtained from members of the traditional health system (including health care providers, insurers, etc.) and data generated or acquired by consumers themselves, whether directly entered by them, or fed into the PHR by devices or other sources that are not part of the traditional health care system (including data from a monitoring device that the consumer operates, from a commercial Web site, or from a consumer’s own health-related observations).
Policies governing disputes about the validity of data should draw a distinction between these different categories of data. With respect to copies of data that users might not be permitted to change directly (including but not limited to data that originates with members of the traditional health system), users should be given a way to attach notes or complaints to the PHR disputing the validity of the data – and the note should remain appended to the data any time it is disclosed from the PHR. (This is similar to how the HIPAA Privacy Rule treats patient amendment of data in covered entity records.) PHR vendors also should consider mechanisms for communicating patient disputes about data back to the original source for consideration.
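CDT’s append-a-note mechanism is easy to picture as a data structure: the patient’s dispute travels with the data on every disclosure. A toy sketch, with field names invented for illustration:

```python
# Toy version of CDT's proposal: patient dispute notes stay appended
# to disputed PHR data whenever it is disclosed. Field names invented.

import copy

def annotate(record, note):
    """Attach a patient dispute note to data the patient cannot edit."""
    record.setdefault("dispute_notes", []).append(note)

def disclose(record):
    """Every disclosed copy carries the appended notes along."""
    return copy.deepcopy(record)

rec = {"diagnosis": "metastasis to brain or spine"}  # cf. the deBronkart story
annotate(rec, "Patient disputes: billing-code artifact, never diagnosed.")
shared = disclose(rec)
print(shared["dispute_notes"])
```

The disputed diagnosis is never silently overwritten; instead, every downstream recipient sees the patient’s objection next to the data, much as the HIPAA Privacy Rule handles patient amendments in covered-entity records.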
Even in a world where PHRs are ubiquitous, there’s almost certainly going to be some “objective health record” in the medical system about any individual. (And, if key software engineers get their way, there will be a unique “personal health identifier” for everyone once health records systems are up and running.) So why should the integrity of PHRs matter to anyone other than the person recording them?
First, the more legible, portable, and useful PHRs are, the more they may displace other records of patient information. Emergency rooms may only have a chance to look at one health record: the one given to them by the patient they are treating.
Second, we can assume that as PHRs become a bigger part of large employers’ cost-control programs, employers are going to want to make sure that “quantified selves” are accurately reporting their health efforts and achievements. Health reform has taken a “preventive turn,” and the ACA gives employers new latitude to reward and punish employees:
Although it prohibits insurers from charging higher premiums based on an individual’s health risks, it allows them to charge a smoker as much as 50 percent more than a nonsmoker. It also permits employers to increase rewards for participation in wellness and disease-prevention programs from 20 percent to 30 percent of the costs of insurance premiums.
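In dollar terms, the quoted thresholds are easy to work out. The sketch below assumes a hypothetical $6,000 annual premium; the figure is invented for illustration.

```python
# Toy arithmetic for the ACA incentive limits quoted above, using a
# hypothetical $6,000 annual premium.

annual_premium = 6_000

smoker_surcharge_cap = 0.50 * annual_premium  # up to 50% more for smokers
old_wellness_cap = 0.20 * annual_premium      # prior reward ceiling
new_wellness_cap = 0.30 * annual_premium      # ACA reward ceiling

print(f"Max smoker surcharge: ${smoker_surcharge_cap:,.0f}")
print(f"Wellness reward cap: ${old_wellness_cap:,.0f} -> ${new_wellness_cap:,.0f}")
```

On that premium, a smoker could face up to $3,000 in extra charges, and the wellness-program reward ceiling rises from $1,200 to $1,800. Stakes of that size give employers an obvious incentive to verify participation.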
To verify participation, an employer may want access to an employee’s PHR, particularly if it is much easier for its own computer systems to read and understand than the “objective health record” existing in the health care system itself. Yet the employer may also want to ensure that the PHR is populated by materials validated by third parties (such as doctors’ offices, fitness clubs, scales, or blood sugar monitors). Presently, this is not a major issue; as Nicolas Terry warns, “sharing or exchange of data between PHRs and providers or their EMRs is as speculative as it is controversial.” However, technological advances could promote PHRs with inputs from providers, apps, and even RFID chips. What happens if the employer tries to condition participation in a wellness program on an employee’s agreement not to try to change whatever is reported by those “trusted” third parties?
The CDT suggests some principles that should guide this situation as well. They recommend that:
Employers, health plans, and others should be explicitly prohibited from requiring individuals to open PHR accounts as a condition of employment, membership, or for any other reason. PHR accounts should also not be routinely opened for consumers who do not explicitly activate them, as this can expose personal data to uses not necessarily anticipated by the consumer. Similarly, consumers should not be compelled to disclose the information held within the PHR, or whether they are using a PHR, without due process of law.
I believe these “compulsion” points should go beyond the decision to open a PHR, to the more granular rights and responsibilities associated with the maintenance of one. However many times employers sing the praises of contract law, the truth remains that employees in this weak labor market have very little bargaining power. That’s one reason why Nicolas P. Terry’s recommendation of inalienable rights to control data in the PHR context was one of the most provocative and compelling comments at the roundtable.
I am not here advocating for complete autonomy of the patient over records in all contexts. As Sharona Hoffman has argued, in the realm of treatment, there are important rationales for prioritizing the independent medical judgment of professionals whose first obligation is to maintain health:
If patients are empowered to opt out of EHR use or to disallow treating physicians’ access to their records, they may lose much of the benefit of computerization. Many clinicians would continue to care for patients in ignorance of essential facts that could make the difference between appropriate and inappropriate treatment decisions. For example, it might seem at first blush that most physicians would not need access to a patient’s psychiatric records. However, a psychiatric diagnosis may help other specialists better understand the patient’s symptoms, and the patient’s complete drug list, including psychiatric drugs, is vital for purposes of safely prescribing additional medications.
Some commentators at the roundtable also offered creative solutions for the “sensitive health data” conundrum raised by Hoffman; for example, a patient could include an “envelope” in their EHR or PHR that would only be opened in case of emergency, or when authorized directly by the patient. Regardless of how one feels about this issue, outside the treatment context, it is critical for consumers to have reasonable opportunities to review, correct, and withhold their personal health records.
When all is said and done, people have to “buy in” to EHR for it to work effectively, and rational individuals are going to avoid any system where medical history can be as effective as credit history at denying them opportunities. One commentator at the roundtable said that her patients “didn’t care” about health data or security; they just wanted some quick and dirty method of digitizing their records. However compelling this perspective may seem for those “on the front lines,” the perils of a “wikileaked world” should end any complacency about the use and misuse of computer records. We should avoid the temptation of letting cut-rate or subpar EHR and PHR systems develop, especially since they are likely to target the most vulnerable patients. Robust regulatory requirements can spark a race to the top for data privacy and security.
In the film Sleep Dealer, a laborer encounters a “memory recorder,” a computerized transcription machine that translates past experiences into video re-enactments. The machine occasionally blanks out as the laborer narrates his story, and its operator chides him to “be more truthful,” to hew closer to the actual truth of the matter. The film is ambiguous as to whether the machine, its operator, or the laborer himself has real access to what actually happened. In the treatment context, best practices may inevitably consign us to a messy, multi-stakeholder effort to set forth the “real truth” of a health record. However, the personal health record should be primarily a project of the person it describes, with no undue influence from the growing number of reputation raters and shapers with a pecuniary interest in particular representations of that person.
Computational innovation may improve health care by creating stores of data vastly superior to those used by traditional medical research. But before patients and providers “buy in,” they need to know that medical privacy will be respected. We’re a long way from assuring that, but new ideas about the proper distribution and control of data might help build confidence in the system.
William Pewen’s post “Breach Notice: The Struggle for Medical Records Security Continues” is an excellent rundown of recent controversies in the field of electronic medical records (EMR) and health information technology (HIT). As he notes,
Many in Washington have the view that the Health Insurance Portability and Accountability Act (HIPAA) functions as a protective regulatory mechanism in medicine, yet its implementation actually opened the door to compromising the principle of research consent, and in fact codified the use of personal medical data in a wide range of business practices under the guise of permitted “health care operations.” Many patients are not presented with a HIPAA notice but instead are asked to sign a combined notice and waiver that adds consents for a variety of business activities designed to benefit the provider, not the patient. In this climate, patients have been outraged to receive solicitations for purchases ranging from drugs to burial plots, while at the same time receiving care which is too often uncoordinated and unsafe. It is no wonder that many Americans take a circumspect view of health IT.
Privacy law’s consent paradigm means that, generally speaking, data dissemination is not deemed an invasion of privacy if it is consented to. The consent paradigm requires individuals to decide whether or not, at any given time, they wish to protect their privacy. Some of the brightest minds in cyberlaw have focused on innovation designed to enable such self-protection. For instance, interdisciplinary research groups have proposed “personal data vaults” to manage the emanations of sensor networks. Jonathan Zittrain’s article on “privication” proposed that the same technologies used by copyright holders to monitor or stop dissemination of works could be adopted by patients concerned about the unauthorized spread of health information.
If individuals had enough time to manage their personal data the way they manage their checkbooks and gardens, perhaps the consent paradigm would be a good foundation for addressing public concerns about privacy. If applicants could easily bargain with would-be employers over privacy, or patients with hospitals, perhaps we could rely on them to protect their interests. But actual occurrences of such acts of self-assertion and self-protection are rare. Given the frequently abstract benefits that privacy and reputational integrity afford, they are often traded away for competitive economic advantage. This process further erodes societal expectations of privacy.
A collective commitment to privacy is far more valuable than a private, transactional approach that all but guarantees a race to the bottom. If such a collective commitment does not materialize, record systems will only deserve trust if they become as transparent as the patients and research subjects they profile. Given corporate assertion of trade secrecy (and even privacy rights), reciprocal transparency will not be easy to achieve. Nevertheless, repeated breaches, fraud, and data meltdowns in the US should provoke an alliance of socially responsible researchers to lobby the US government to set minimal standards of reciprocal transparency and auditing. Consumers can only trust innovators if they can understand what is being done with data. As we become “transparent citizens” (as Joel Reidenberg puts it), we should demand that the corporate, university, and governmental authors of that trend reciprocate, and become more open about the data they gather.
Fortunately, as a recent presentation by Deborah Peel reminded me, there is significant audit authority built into the recent HITECH Act, which may curb some abuses. Audits will become increasingly important as a “wild west” of health data is excavated by scrapers, marketers, and other data miners.
Consider, for instance, the following scenario: contributors to the medical website PatientsLikeMe.com found that “Nielsen Co., [a] media-research firm . . . was ‘scraping,’ or copying, every single message off PatientsLikeMe’s private online forums.” Had the virtual break-in not been detected, health attributes connected to usernames (which, in turn, can often be linked to real identities) could have spread into numerous databases. A reciprocal transparency paradigm would require all those harboring health data to hold some certified indication of its legitimate provenance; data lacking that certification would not be allowed to persist.
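A provenance requirement of this sort can be sketched as a simple audit rule: any record lacking a certificate of legitimate origin gets flagged. The field names below are invented for illustration; a real scheme would rest on cryptographic signatures rather than a bare string tag.

```python
# Toy provenance audit: every stored health record must carry a
# certificate of legitimate origin. Field names are invented; a real
# scheme would use cryptographic signatures, not plain strings.

def audit(records):
    """Return records that lack a provenance certificate."""
    return [r for r in records if not r.get("provenance_cert")]

store = [
    {"user": "alice", "condition": "ALS",
     "provenance_cert": "patient-consent-2011-03"},
    {"user": "bob", "condition": "MS"},  # scraped copy: no certificate
]

flagged = audit(store)
print([r["user"] for r in flagged])  # ['bob']
```

Under such a rule, a scraped copy of a forum post could not quietly persist in a marketer’s database: it would fail the audit and be slated for deletion or review.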
Unforeseen spread of inaccurate or inappropriate health data is not just a problem for those who want to avoid getting solicitations for burial plots after a sensitive appointment. Given law enforcement exceptions to medical privacy laws and regulations, it should come as little surprise that the government claims a 2005 law authorizes it to monitor and record all prescription drug use by all citizens via so-called “Prescription Drug Monitoring Programs.” Such programs may just be the tip of an iceberg of new domestic intelligence programs that rely on private companies to act as “big brother’s little helpers.”
Whenever health data is fed into an evaluative profile of an individual, there should be safeguards in place to assure that the data is accurate, and that the resulting profile is, if at all possible, not used to harm or disadvantage the individual. Without assurances like these, we can count on continued resistance to the development of health data infrastructures.
Last year we did a series of posts on Electronic Medical Records and Electronic Medicine. One of those articles, “Electronic Medicine, iPhones and Path-Dependence,” noted the emergence in Electronic Medicine of the iPhone and the Blackberry. We also noted that the iPhone and Blackberry constitute “an advantaged path” (already in the pockets of roughly 64% of doctors, with early popularity further attracting skilled labor, financing, and support) and that these platforms might be capable of playing a part in allowing us to avoid building a costly high tech Tower of Babel: offering “flexibility, interoperability, liquidity of information, and the ability to substitute technologies as the need arises.”
We wrote the following:
A Washington Post article, “New Tool in the MD’s Bag: A Smartphone,” states that “Nationally, about 64 percent of doctors are now using smartphones, according to a recent report by the market research company Manhattan Research.” Georgetown’s medical school has recently begun requiring them, and Ohio State’s is handing out the iPod Touch (sans phone) to its students. Mike McCarty, the chief network officer at Johns Hopkins Health Systems, “believes that smartphones will soon assume a permanent place in medicine.”
As such, designers have engineered applications to suit the needs of those doctors. And as a matter of path-dependence, presumably they will continue to do so. WaPo states that “the iTunes app store lists 674 applications related to medicine available.” There are iPhone and Blackberry apps to “pull up instructional diagrams and videos for patients, write electronic prescriptions and check basic information,” “look up drug-to-drug interactions, to view X-rays and MRI scans,” and even determine pill names derived from physical descriptions.
As we posted a while back,
In the words of Dr. Farzad Mostashari, an assistant commissioner in New York City’s health department and head of the much heralded Primary Care Information Project (which is functioning as a sort of I.T. Department for many of the City’s doctors using EMR), “There’s no way small practices can effectively implement electronic health records on their own. This is not the iPhone.”
Later, we noted that in their NEJM article, “No Small Change for the Health Information Economy,” Kenneth D. Mandl, M.D., M.P.H., and Isaac S. Kohane, M.D., Ph.D., suggest that it should be:
As do Professors Sharona Hoffman and Andy Podgurski, the authors of “No Small Change…” stress the need for flexibility, interoperability, liquidity of information, and the ability to substitute technologies as the need arises. To do this they propose governmental encouragement of the use of a platform with interoperable applications (blog builders, think: “plug ins” and “widgets”)
similar to the iPhone.
We also noted in that post, “Electronic Medical Records: It’s Not too Late to Build the Tower on an Interoperable Platform,” that
Perhaps the good news here is that the relative scarcity of EMR implementation thus far means that we can yet still devise an interoperable system without rendering substantial but incompatible investments obsolete. Which is to say that we are not yet too far down nine different non-intersecting roads and that “a communicative Tower” can still be built, and sustained, on a Platform.
Now, it seems, the path is beginning to emerge, and that interoperable system may actually be the iPhone and BlackBerry platforms, which are already sitting in doctors’ pockets.
And now via email from NursingSchools.net, an interesting list:
It’s amazing how much we use our phones for anything but phone calls. The widespread use of applications, driven by the explosion of iPhone sales, has helped to redefine just what we’re able to do with our phones in all walks of life and work. The medical profession has been one of the biggest beneficiaries of iPhone app development, with life-changing tech showing up in nursing schools and hospitals nationwide. Some gather information from patients in new ways, while others help medical professionals better sort and understand that information. They’re all designed to help those in the medical field do their jobs in revolutionary ways. Here are some of the most forward-thinking and revolutionary iPhone apps out there for doctors and nurses:
- e-911: Emerging Healthcare Solutions is developing an app called e-911, which would allow a user to store critical personal medical information that’s sent to health care providers when they dial 911 from their iPhone. The benefits are clear and enormous: Instead of wasting time discovering a medical history, first responders would know instantly what the victim’s medical past looked like.
- Epocrates: One of the most popular free medical apps available for the iPhone, Epocrates gives doctors and nurses up-to-date information on thousands of drugs, lets them identify pills by physical description, and describes the effects of combining different drugs. A Stanford University doctor even made a video about how much he loves it. (Free)
- ICD9 Consult: Never go hunting through a book to find a code again. This app lists ICD9-CM diagnosis codes and lets you search and browse by category. It includes more than 21,000 individual codes, making it a phenomenal portable tool for medical professionals. ($14.99)
- Human Body Advanced Encyclopedia 3D Anatomy: Don’t let the clunky title fool you: Doctors and nurses everywhere should have this app on their iPhones. The app includes three-dimensional renderings of the body’s 14 anatomical systems as well as the ability to see all sides and angles of organs. It’s like having an anatomy textbook in your back pocket. ($3.99)
- Medscape: From the WebMD people, this is a fantastic all-purpose app that’s packed with information on brand-name and generic drugs, clinical procedures, and more than 150 videos. (Free)
- iRadiology: This app for students is also a good resource for doctors and nurses who’ve been working for years. It features more than 500 images designed to help users hone their detection skills and become better at reading film, CT, and MRI images. It’s a smart, progressive app because it operates under the assumption that knowledge is something you constantly build, and it helps medical pros stay at the top of their game.
- Reach MD CME: This is an awesome app for doctors and nurses looking to further their education in unique and time-saving ways. Reach MD CME is an accredited app for continuing medical education that lets you download and listen to medical programs and then take the certification test all on your iPhone. (Free)
- NeuroMind: NeuroMind is a smart, thorough app that helps residents and surgeons by acting as an index for a variety of brain-related surgical topics. It also provides a checklist of Safe Surgery items from the World Health Organization. (Free)
- Drug Trials: If you’re a doctor or nurse, you need this app. Drug Trials is all about the latest drug tests, whether it’s an established drug being tested in new ways or an entirely new product being tested for the first time. This is one of the best ways to stay informed about what’s happening in drug research, and it also includes facts like eligibility requirements. (Free)
- Informed RN Pocket Guide: The $9.99 cost is more than most apps, but nurses get a lot for that price with this in-depth app. The Informed RN Pocket Guide is a PDF version of the printed book, and it features a ton of helpful information nurses need to know, including metric conversions, pain assessment tools, pediatric care information, and even Spanish translations. Worth the buy.
- Nursing Central: I take it back: This app is the pricey one. Nursing Central requires a subscription payment of $159.95 before you can view the content, but if you can afford it, it’s a worthwhile purchase. The constantly updated database covers more than 5,000 drugs, and it features info on all manner of diseases and treatments plus a dictionary with more than 60,000 (!) entries. If you don’t know it, this app does.
- Nursing Pharmacology: A handy app for nurses that features flash cards designed to teach you the ropes of nursing pharmacology. Basic features, but helpful. ($0.99)
- PubMed on Tap: This is the full version, not the lite one. The PubMed on Tap app searches PubMed for reference info and then lets you store PDFs or e-mail the results to yourself or someone else. For medical pros on the go, or those who need to do some quick research away from the computer, this app is a life-saver. ($2.99)
- Skyscape’s Medical Bag: Call it the digital version of the classical little black doctors’ bag. This app includes a number of helpful tools, including more than 100 medical calculators and multiple articles on life support. ($1.99)
- iMurmur 2: This app is a great fit for practicing doctors as well as med students. It’s got a library of actual recordings of different heart sounds, complete with accompanying descriptions and phonocardiograms. A must-have for cardiologists or any pro looking to brush up on the heart. ($2.99)
[Ed. Note: HRW is pleased to introduce Katherine Matos to the blog. Katherine is a 3rd year student at Seton Hall Law and the principal inventor on a patent application in the field of medical imaging, resulting from her research as a student at Stevens Institute of Technology, from which she graduated with degrees in biomedical engineering and history. She has published work in Health Law Outlook and now serves as an Editor. Read more]
On June 2, Health and Human Services (HHS) Secretary Kathleen Sebelius and Institute of Medicine (IOM) President Harvey Fineberg launched the Community Health Data Initiative (CHDI) at the IOM-sponsored Community Health Data Forum in Washington.[i] The CHDI resulted from a March 11 roundtable between HHS and IOM regarding the usefulness of HHS health data in developing consumer-based electronic health care applications.[ii] As one of five HHS Flagship initiatives, the CHDI is a public-private effort to “help Americans understand health and health care performance in their communities — and to help spark and facilitate action to improve performance.”
Ultimately, a network of community health data suppliers (beginning with HHS) and data appliers (private innovators) will work together to create applications that:
“(1) raise awareness of community health performance,
(2) increase pressure on decision makers to improve performance, and
(3) help facilitate and inform action to improve performance.”
U.S. Department of Health & Human Services, HHS Open Government Plan, page 60, April 7, 2010, available at http://www.hhs.gov/open/plan/opengovernmentplan/ourplan_openhhs.pdf.
To begin the process, HHS will launch a new online Health Indicators Warehouse by the end of the year to provide the public with community health data, free of charge or any intellectual property constraint.[iii] “In every science-based endeavor, data are the key to the effective action,” said Dr. Fineberg at the Community Health Data Forum. “We need to make more creative and vigorous use of the data we generate now, and we need to create a demand-and-use cycle that will bring about even better information in the future.”[iv] While the National Center for Health Statistics continues to develop the Health Indicators Warehouse, an interim site with one downloadable data set has been made available on the CDC website.
When completed, hundreds (ultimately, thousands) of measures of health care quality, cost, access and public health will be downloadable in a standardized, structured format. “National, state, regional, and county health performance on indicators such as rates of smoking, obesity, diabetes, access to healthy food, utilization of health care services” will be accessible in a single location.[v] Also, users will be able to sort data according to age, gender, race/ethnicity and income where available.
HHS is committed to personal privacy protection and confidentiality “as a fundamental principle governing the collection and use of data.” In any public data releases, individually identifiable information will be protected. Furthermore, HHS will incorporate into its data release policies new approaches that protect confidentiality while maintaining public access.[vi]
To complete the network, HHS is working with private parties, including technology innovators, researchers, companies, and health advocacy groups to utilize the data and provide feedback. ”As a nation, we can and should harness the exploding creativity in our information technology and media sectors to help us get the most public benefit out of our data investments,” stated Secretary Sebelius.[vii]
In preparation for the Community Health Data Forum, developers such as Microsoft, Google, and Ingenix created software platforms for the presentation of health data.[viii] The Forum featured demonstrations of Web tools for citizen access to health performance data, dashboards for civic leaders to ascertain and improve community health, an online game for learning local health status facts, an enhanced internet search engine that integrates hospital performance data with search results, and mobile phone applications.[ix]
Finally, White House Chief Technology Officer Aneesh Chopra announced that the administration would host the 2010 Health 2.0 Developer Challenge with the support of HHS and the CHDI.[x] Health 2.0 will host a series of events including multi-disciplinary “code-a-thons,” culminating in the final Challenge at the Health 2.0 Annual Conference October 6-9, 2010.
[i] U.S. Department of Health & Human Services, News Release: Putting Data and Innovation to Work to Help Communities and Consumers Improve Health, June 2, 2010, available at http://www.hhs.gov/news/press/2010pres/06/20100602a.html.
[ii] Genevieve Douglas, HHS Launches New Data Initiative Focused on Improving Community Health, BNA’s Health Care Daily Report, June 3, 2010, available at http://news.bna.com/hdln/HDLNWB/split_display.adp?fedfid=17265216&vname=hcenotallissues&fn=17265216&jd=a0c3g8b4c1&split=0.
[iii] U.S. Department of Health & Human Services, News Release: Putting Data and Innovation to Work to Help Communities and Consumers Improve Health, June 2, 2010, available at http://www.hhs.gov/news/press/2010pres/06/20100602a.html. U.S. Department of Health & Human Services, HHS Open Government Plan, April 7, 2010, available at http://www.hhs.gov/open/plan/opengovernmentplan/ourplan_openhhs.pdf.
[iv] Genevieve Douglas, HHS Launches New Data Initiative Focused on Improving Community Health, BNA’s Health Care Daily Report, June 3, 2010, available at http://news.bna.com/hdln/HDLNWB/split_display.adp?fedfid=17265216&vname=hcenotallissues&fn=17265216&jd=a0c3g8b4c1&split=0.
[v] U.S. Department of Health & Human Services, News Release: Putting Data and Innovation to Work to Help Communities and Consumers Improve Health, June 2, 2010, available at http://www.hhs.gov/news/press/2010pres/06/20100602a.html.
[vi] U.S. Department of Health & Human Services, HHS Open Government Plan, April 7, 2010, available at http://www.hhs.gov/open/plan/opengovernmentplan/ourplan_openhhs.pdf, page 2.
[vii] Genevieve Douglas, HHS Launches New Data Initiative Focused on Improving Community Health, BNA’s Health Care Daily Report, June 3, 2010, available at http://news.bna.com/hdln/HDLNWB/split_display.adp?fedfid=17265216&vname=hcenotallissues&fn=17265216&jd=a0c3g8b4c1&split=0.
[viii] Genevieve Douglas, HHS Launches New Data Initiative Focused on Improving Community Health, BNA’s Health Care Daily Report, June 3, 2010, available at http://news.bna.com/hdln/HDLNWB/split_display.adp?fedfid=17265216&vname=hcenotallissues&fn=17265216&jd=a0c3g8b4c1&split=0.
[ix] U.S. Department of Health & Human Services, News Release: Putting Data and Innovation to Work to Help Communities and Consumers Improve Health, June 2, 2010, available at http://www.hhs.gov/news/press/2010pres/06/20100602a.html.
[x] Genevieve Douglas, HHS Launches New Data Initiative Focused on Improving Community Health, BNA’s Health Care Daily Report, June 3, 2010, available at http://news.bna.com/hdln/HDLNWB/split_display.adp?fedfid=17265216&vname=hcenotallissues&fn=17265216&jd=a0c3g8b4c1&split=0.
By: Constantina Koulosousas
The Patient Safety and Quality Improvement Rule was amended, effective November 23, 2009, by the Department of Health and Human Services to adjust the maximum civil money penalty amount for violations of the confidentiality provisions. The amount was adjusted for inflation to comply with the Federal Civil Penalties Inflation Adjustment Act of 1990. This amendment was carried out through direct final rule making, as HHS expected no significant adverse comments to the rule.
The Patient Safety and Quality Improvement Act of 2005 created a voluntary program through which health care providers may share what is known as “patient safety work product” (PSWP), any information relating to patient safety events and concerns, with each other and with Patient Safety Organizations (PSOs). The Department of Health and Human Services is required to maintain a listing of all PSOs.
The Act amended Title IX of the Public Health Service Act for the purpose of improving patient safety and quality of care. As with attorney work product, this information is privileged and confidential. While the program may be voluntary, a knowing or reckless violation of the confidentiality requirements of the Act can result in a civil money penalty of up to $10,000 for each violation, as assessed by the Office for Civil Rights.
The deterrence effect of the civil money penalties had been eroded by inflation, which led Congress to enact the Inflation Adjustment Act. That Act requires Federal agencies to issue regulations adjusting for inflation each civil money penalty within their jurisdiction under the Public Health Service Act. The agencies are required to issue these regulations at least once every four years, beginning July 29, 2005, the date the Patient Safety Act was enacted. The inflation amount is adjusted through a three-step process.
First, the agency must calculate an increase in the penalty amount by a “cost-of-living adjustment,” defined in the Act as the percentage by which the Consumer Price Index for June of the calendar year preceding the adjustment exceeds the Consumer Price Index for June of the calendar year in which the penalty was last set or adjusted pursuant to law.
Second, the amount of the increase must be rounded based on the size of the penalty, as set forth in section 5(a) of the Act; since the penalty in this case is $10,000, the rounded increase is $1,000. Third, a first adjustment is limited to 10 percent of the penalty amount. Accordingly, an adjusted maximum penalty of $11,000 is appropriate.
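The three-step adjustment described above is simple enough to work through programmatically. The sketch below is purely illustrative: the CPI figures are hypothetical placeholders rather than official values, and the $1,000 rounding unit reflects the section 5(a) treatment of a $10,000 penalty as described above.

```python
# Illustrative sketch of the three-step inflation adjustment described above.
# CPI values are hypothetical placeholders, not official statistics.

def adjust_penalty(penalty, cpi_then, cpi_now):
    """Return the inflation-adjusted maximum civil money penalty."""
    # Step 1: cost-of-living adjustment -- the percentage by which the
    # current CPI exceeds the CPI when the penalty was last set.
    raw_increase = penalty * (cpi_now - cpi_then) / cpi_then

    # Step 2: round the increase per section 5(a); for a $10,000 penalty
    # the rounding unit is $1,000.
    rounding_unit = 1000
    increase = round(raw_increase / rounding_unit) * rounding_unit

    # Step 3: a first adjustment is capped at 10 percent of the penalty.
    increase = min(increase, penalty * 0.10)

    return penalty + increase

# With hypothetical CPI values implying roughly 12 percent inflation,
# a $10,000 penalty adjusts to $11,000, as in the rule discussed above.
print(adjust_penalty(10000, 190.0, 212.8))
```

Note how the 10 percent cap binds here: uncapped, the 12 percent cost-of-living increase would have exceeded $1,000 before rounding.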
One great benefit of the Act is its assurance that the penalties assessed for such violations provide adequate deterrence to potential violators, accomplished by periodically increasing the violation amount to account for inflation over time. Especially now, in the wake of massive health care reform and improvements in the use of Electronic Health Records, it is important to assure patients that their personal health information remains confidential and that a breach of this confidentiality requirement will result in steep monetary penalties.
On the other hand, many may argue that the increase in the penalty amount is not adequate. Because the Act imposes a 10% cap in addition to a standard formula for calculating the inflation adjustment, the penalty may not always be in sync with the current economic environment. Further, these penalty amounts are updated only every four years, leaving a significant gap in time.
Additionally, the slight increase in money penalties will do little to comfort patients that their health information is protected and confidential. Once the information gets out, no penalty, however large, can remedy the breach or undo the damage already done. Further, to many of the entities involved in such violations, a $10,000 penalty may seem like an insignificant slap on the wrist.
The Act punishes only a “knowing or reckless” violation of the confidentiality provisions, so breaches that occur unintentionally will not subject physicians or PSOs to liability. This mental state requirement is especially important as electronic health record software is ironed out, eliminating the technical issues and glitches that may arise in the course of implementing such a national electronic system.
Conversely, the “knowing or reckless” standard may pose some difficulties enforcing liability under the Act, as it may not always be easy to prove that the confidentiality breach was done with such a state of mind, or even where the disclosure came from.
By: Michael R. Spaltro
Gordon Moore, Intel co-founder, famously predicted that the number of transistors on a chip, and with it computing power, would double about every two years. Between 1981 and 1991, “computer processing speed increased tenfold, the instruction execution rate a hundred fold, system memory grew a thousand times, and system storage expanded by a factor of 10,000.” That was just the beginning: Intel has kept that pace for nearly 40 years, recently introducing the world’s first 2-billion-transistor microprocessor. The development of fundamental computer technology has translated into ubiquitous information technology infrastructure. Deploying information technology within the healthcare industry, however, is significantly complicated by the indispensability of life and health to everything else we do. The privacy of electronic health records (“EHR”) that contain personally identifiable health information (“PHI”) is one area of particular concern.
Health care providers, health care plans, health care clearinghouses, and their business associates across the country are currently using EHRs as an efficient method to locally store patient records. EHRs may contain patient treatment history, social and demographic data, and a multitude of other personal health information. If the underlying computer technology continues to grow at the staggering pace predicted by Moore’s Law, the function of EHRs will expand to “assume a key role in medical diagnosis and treatment management.” Moreover, the Food and Drug Administration, in collaboration with public, academic, and private entities, is expected to use EHRs to link and analyze medical safety data from over 100 million patients by July 2012. The resulting electronic network of interoperable healthcare data is of a scale never before contemplated in the industry. Personally identifiable health information, such as the data contained across local provider EHRs, health plan claims databases, and Medicare databases, will be remotely transmitted, stored, accessed, and analyzed.
Transmitting EHRs between an originating entity and the entities and infrastructure involved in research, development, and storage of EHRs creates an increased potential for internal and external breach. Moreover, as EHRs become populated in local and remote institutions across the country, the incidence of breach presumably increases. In the event of breach, an individual may be exposed to a number of dangers. EHRs contain personal information of high value to computer hackers, such as social security numbers or payment information. Furthermore, an otherwise legitimate entity could potentially use health information in a less nefarious way that nonetheless breaches individual privacy. How can we legally protect privacy while realizing the benefit of electronic health information technology?
The Health Insurance Portability and Accountability Act (“HIPAA”) guards against unauthorized access to protected health information. The HIPAA Security Rule and Privacy Rule require an entity such as a health plan, health care provider, business associate, or health care clearinghouse to safeguard all protected health information, and civil and criminal penalties may be imposed on entities that fail to comply. The FDA’s qualified contractors will similarly be subject to HIPAA under the Health Information Technology for Economic and Clinical Health (“HITECH”) Act by 2017. Therefore, the entire electronic network of EHRs will be covered by the Privacy Rule and the Security Rule. Within covered entities, protected health information is to be stored with any security measure that allows an entity to reasonably and appropriately implement all safeguard requirements. The Security Rule approves a covered entity’s use of firewalls and other access controls (such as passwords) to safeguard PHI in its electronic form. Without this intangible structure protecting EHRs, unauthorized parties could easily access PHI, and PHI could easily flow out to any individual, device, or system that interoperates with EHR databases. The HIPAA Security Rule therefore assures that a covered entity is reasonably protecting an individual’s privacy by safeguarding personal health information.
Firewalls and other reasonable access controls are not impermeable. Earlier this year, a highly sophisticated attack penetrated Google, a multi-billion-dollar corporation, and contributed to its later withdrawal from China. Merck & Co. and Cardinal Health Inc. were among the others infiltrated in the attack. The extent of the information exposed is still not fully understood. Thus, breaches occur even when reasonable and appropriate safeguards are required. The access controls required by the HIPAA Security Rule are not sufficient to protect a vast network of interoperable EHRs. Further data encryption and/or secure data destruction will eventually be required to protect individual privacy.
Pursuant to the Privacy section of the HITECH Act, Title XIII, Division A, Subtitle D, the Department of Health and Human Services (“HHS”) was required to promulgate rules and regulations for breach notification regarding unsecured protected health information (the “Breach Rule”). HHS issued a final rule, effective September 23, 2009, requiring all entities and business associates covered under HIPAA to provide notification in cases of breach of unsecured protected health information. Presumably, an individual who is made aware that his personal information was compromised is better equipped to mitigate identity theft or other harms that could arise.
The provisions in Section 13402 of the HITECH Act are consistent with the HIPAA definitions of a “covered entity” and “protected health information.” The Act defines a breach as the unauthorized acquisition, access, use, or disclosure of protected health information that compromises the security of that information. In other words, if a firewall or reasonably appropriate access control is breached, a covered entity must report that breach to all of the individuals affected. Importantly, notification is required only for breaches of unsecured personal health information. If a covered entity is in the practice of encrypting and/or destroying PHI in accordance with National Institute of Standards and Technology (NIST) guidance, then that entity does not have to report a breach of its firewalls or access controls. It is only necessary to provide notice if “unsecured protected health information that is not secured through the use of technology or methodology specified…” is breached. The rationale is obvious: if a covered entity encrypts PHI in accordance with NIST standards, then the data is unusable in the event of a breach, and notification would be superfluous.
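The secured/unsecured distinction just described reduces to a simple decision rule. The sketch below is a hypothetical illustration of that logic only; the function and field names are illustrative and not drawn from the statute or any real compliance system.

```python
# Hypothetical sketch of the Breach Rule's notification logic as described
# above: notice is required only when breached PHI was "unsecured," i.e.
# neither encrypted nor destroyed in accordance with NIST guidance.
# Names are illustrative, not statutory.

def notification_required(breached, encrypted_per_nist, destroyed_per_nist):
    """True if the covered entity must notify affected individuals."""
    if not breached:
        return False
    # Secured PHI (properly encrypted or destroyed) is unusable to an
    # attacker, so notification would be superfluous.
    secured = encrypted_per_nist or destroyed_per_nist
    return not secured

print(notification_required(True, False, False))  # unsecured PHI breached
print(notification_required(True, True, False))   # encrypted PHI breached
```

The second call returns False, capturing the rule's safe harbor: a breach of properly encrypted PHI triggers no notification duty.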
Consequently, a covered entity has two choices: (1) secure all EHRs that contain PHI; or (2) report breaches of PHI. The Breach Rule encourages covered entities to take the former approach. To secure EHRs that contain PHI, an entity must regularly perform two standard procedures. First, the NIST-published standards recommend a “one pass” method of data deletion for most applications. When electronic data is deleted, it is only removed from the file system; the “image” of the data physically remains on the hard drive of the device. Software and hardware methods of recovering deleted data are available to the public, so “deleted” PHI could be recovered by an unauthorized entity in the event of a breach. The NIST recommends that one data overwrite be performed on the deleted data, so as to render it unrecoverable. Depending on the method used and the size of the database, data deletion can take up to an hour.
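The “one pass” idea above can be made concrete with a minimal sketch: overwrite a file's bytes before unlinking it. This is illustrative only, assuming a regular file on a conventional magnetic drive; on SSDs and journaling file systems a single application-level overwrite does not guarantee the old data is unrecoverable, which is why the NIST guidance distinguishes media types.

```python
import os

# Minimal sketch of a one-pass overwrite before deletion, per the idea
# described above: plain deletion only unlinks the file, leaving its
# "image" on disk, so we first overwrite the bytes in place.
# Illustrative only -- not a substitute for NIST-conformant sanitization.

def overwrite_and_delete(path):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)   # one pass of zeros over the old data
        f.flush()
        os.fsync(f.fileno())      # push the overwrite to the device
    os.remove(path)

# Usage with a throwaway file:
with open("phi_record.tmp", "wb") as f:
    f.write(b"patient data")
overwrite_and_delete("phi_record.tmp")
print(os.path.exists("phi_record.tmp"))  # False
```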
Second, and perhaps less straightforward, the NIST recommends data encryption using one of the following four methods: full disk encryption; volume encryption; virtual disk encryption; or file/folder encryption. The capital expenditure necessary to install and maintain encryption software and hardware throughout a covered entity is immense. Furthermore, encrypting millions of EMRs will tax computer processors and networks, and will additionally hamper interoperability. When data is encrypted it loses all functionality, and therefore must be decrypted by the authorized end-user before each use. It would be additionally problematic to transfer encrypted data throughout an electronic network, like that contemplated by the FDA, unless all systems were equipped to recognize and decrypt the data. Thus, under any of the encryption methods above, the net result is a loss of productivity and interoperability. Moreover, encrypted data may not mean secure data: the end-user authorized to access encrypted data will likely decrypt it during the course of a work day, so encrypted PHI would be exposed to the same daily risks as unsecured PHI. Consequently, data encryption may not even provide the security and privacy that the Breach Rule contemplates.
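The decrypt-before-each-use cycle described above can be illustrated with a deliberately toy sketch. The XOR "cipher" below is NOT real cryptography and stands in for whatever NIST-approved method (e.g., AES-based full-disk or file encryption) a covered entity might deploy; it is used only because it makes the round trip self-contained and visible.

```python
# Toy illustration (NOT secure encryption -- a repeating-key XOR, used
# only to make the encrypt/decrypt-before-use cycle concrete; a covered
# entity would use a NIST-approved method instead).

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR with a repeating key is its own inverse: applying it twice
    # with the same key returns the original bytes.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

record = b"Patient: J. Doe; Dx: hypertension"
key = b"secret-key"

# At rest: the stored form is unusable as data without the key.
stored = xor_cipher(record, key)
assert stored != record

# Every authorized use requires a decryption step first -- the
# productivity and interoperability cost described above, and the moment
# at which "encrypted" PHI faces the same daily risks as unsecured PHI.
in_use = xor_cipher(stored, key)
assert in_use == record
```

The point of the sketch is the workflow, not the cipher: whatever the algorithm, data at rest is inert, and each legitimate use re-exposes the plaintext.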
While some covered entities are voluntarily choosing to encrypt and secure PHI, the impracticality and cost of data encryption is prohibitive. Covered entities were allowed 180 days to become compliant with the Breach Rule. That period has expired, and most covered entities have not opted to encrypt PHI. Instead, covered entities have put reasonable systems in place to detect breaches, as required by the Breach Rule. The Breach Rule requires notification without unreasonable delay once a covered entity learns of a breach. A majority of states already had breach notification laws in place, and thus covered entities had respective systems in place to detect and report breaches.
Reporting breaches under the Breach Rule still requires some capital expenditure. In some cases, notification to popular media outlets and the Secretary is required; such notification could drive away business and invite legal action. Of greater concern, a major breach and broadcast resulting in legal action may dissuade industry players from adopting EHR systems that could potentially reduce medical error and healthcare costs. However, the burden of encrypting PHI is overwhelming, and perhaps ultimately ineffective. Consequently, the Breach Rule has done little to foster the actual security of PHI. In practice, covered entities merely provide notification of breach, and it is unclear how this may or may not benefit a patient whose privacy has been breached. Deploying new EHR technology throughout the healthcare industry presents a risk to individual privacy that is not adequately addressed by the Breach Rule and HIPAA.
Privacy concerns should positively correlate with the volume of online EMRs. Pursuant to the FDAAA, 100 million EHRs will be linked within the FDA’s seminal network by July 2012. The sensitive and valuable nature of robust EHR databases will likely attract the attention of unauthorized parties around the world, and should therefore warrant a heightened level of security. Within two years, encryption technology may prove to be significantly smarter, cheaper, and more efficient, and the concerns that now bar covered entities from adopting data encryption may be lifted. While absolute data security is likely not attainable under any standard, operating systems that integrate on-the-fly encryption would come close to the ideal. Rules and regulations should proportionately reflect advances in computer technology and the quantity of EMRs over the next two years. To protect public privacy and trust in our healthcare system, all PHI should eventually be encrypted by covered entities and their business associates.
Hoffman and Podgurski, Finding a Cure: The Case for Regulation and Oversight of Electronic Health Record Systems, 22 Harv. J.L. & Tech. 103.
 Id. at
 Id. at
 Food and Drug Administration Act of 2007 (FDAAA), 21 U.S.C. 355(k)(3).
See Hoffman, supra note 1, at 113.
 21 U.S.C. 355(k)(3). A qualified contract is similar to a business associate. The FDA contracts with entities that are deemed “qualified” within the meaning of the Act.
 See, HITECH, Pub. L. No. 111-5 Section 13401 and 13404.
See Hoffman, supra note 1, at 104.
At this point, it is fair to say that everyone has either heard or read about how Google’s latest foray into social networking, Google Buzz, has gotten off to a bumpy start due to privacy concerns. We can only speculate as to why Google failed to appreciate Buzz’s underwhelming privacy protections. Maybe Google was aware of the privacy issues but felt that they were outweighed by the “turn key” social network that would automatically be created by leveraging the user’s own Gmail contact list. Alternatively, Google may simply not have appreciated the privacy issues. Whether Buzz’s threats to privacy justified the immense firestorm that followed is beside the point. As consumers utilize social networking tools to a greater degree, they are becoming more aware of the potential privacy problems, and more vocal when they disapprove.
One of the more troubling aspects of Google Buzz was that it automatically built your Buzz social network from the addresses you emailed most often in Gmail, and then automatically began following those contacts. The issue was compounded by the fact that Google made the list of people you were following public by default. This automatic follow-and-tell-the-world approach, piggybacking on Gmail users’ contact lists, has since been tweaked: a user joining Buzz is now offered suggestions of whom to follow, and the contacts she chooses to follow are not broadcast for the world to see.
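The mechanism Buzz reportedly used is easy to sketch: rank the addresses a user emails most often and seed the network from the top of that list. The addresses below are hypothetical, and this is only an illustration of the ranking step, not Google’s actual code; the crucial design difference between the original and revised Buzz was never the ranking itself, but whether its output was auto-followed and published or merely suggested.

```python
from collections import Counter

def suggest_contacts(sent_messages, n=5):
    """Rank a user's most-emailed addresses -- the signal Buzz reportedly
    used to seed a social network from the Gmail contact list."""
    freq = Counter(addr for msg in sent_messages for addr in msg["to"])
    return [addr for addr, _ in freq.most_common(n)]

# Hypothetical sent-mail log. The original design auto-followed the top
# contacts and published the list; the revised design only *suggests* them
# and lets the user opt in.
sent = [
    {"to": ["amy@example.com"]},
    {"to": ["amy@example.com", "bob@example.com"]},
    {"to": ["bob@example.com"]},
    {"to": ["amy@example.com"]},
]
print(suggest_contacts(sent, n=2))  # ['amy@example.com', 'bob@example.com']
```

Note how little the code changes between the two policies — the privacy harm lived entirely in the defaults applied to its output.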
A hypothetical from the health care setting may illustrate why this approach was problematic, and why social networking may have profound implications for our “digital health doppelganger.” Under the initial iteration of Buzz, a physician who followed the Buzz feeds of his patients would, simply by using the service, make public to all of his followers the names of everyone he was following. In other words, a patient could see the names of all the individuals her physician was following, including any who happened to be patients. The consequences could be disastrous, both personally and economically, if the physician specialized in schizophrenia or HIV/AIDS, conditions that have, for whatever reason, become highly stigmatized and prone to various discriminatory responses. Myriad privacy and confidentiality issues therefore arise, including whether such information would be considered protected health information under HIPAA. Nor is the disclosure of fiduciary relationships troublesome only in health care: in the legal profession, the mere existence of an attorney-client relationship can itself be privileged information.
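The inference in the hypothetical requires no hacking at all; it is plain set intersection over public data. A minimal sketch, with entirely hypothetical account names, shows how a third party who knows a handful of identities could mine a specialist’s public follow list:

```python
# Public following lists, as under Buzz's original defaults: anyone can
# read who a physician follows. All account names are hypothetical.
public_following = {
    "dr_chen_hiv_specialist": {"alice", "bob", "medical_journal"},
    "alice": {"dr_chen_hiv_specialist", "bob"},
}

def likely_patients(physician: str, known_accounts: set) -> set:
    """Infer probable patient relationships by intersecting a specialist's
    public follow list with a set of accounts the observer can identify."""
    return public_following[physician] & known_accounts

# An employer who knows its staff's account names can intersect them with
# the specialist's public list -- no access to any medical record required.
staff = {"alice", "carol"}
print(likely_patients("dr_chen_hiv_specialist", staff))  # {'alice'}
```

This is why “we disclosed only the follow graph, not the records” is cold comfort: the relationship itself is the sensitive datum.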
But back to health IT, an area where our digital health doppelganger is progressing through its adolescence in a landscape of social networks, electronic health records, and a highly fragmented health care delivery system. A number of general areas of concern arise, including:
1) the online storage of our personal sensitive health information (e.g., EHR and PHR databases, and law enforcement “fusion centers”);
2) current modes of interfacing with our online health data (e.g., access via home computer, mobile phone, or kiosk);
3) future modes of interfacing with our online health data (e.g., increasing mobile use, RFID, smart cards, video playback of encounters);
4) how others will access and use our online health data (e.g., a primary care physician accessing our PHR; site-wide access by accountable care organizations; targeted advertising in PHRs based on content found within the PHR service or the services it connects to);
5) how we will interact with the health data of others (e.g., PatientsLikeMe.com, increasing meta-analysis of health data made available through future nationwide interoperable EHR systems); and
6) how our increasingly digitized health care persona will exist alongside our professional and social personas.
Google and Microsoft offer immensely useful services, but those services concomitantly force us to analyze these issues more deeply, particularly the last one, which both feeds back into, and is affected by, each of the others. More than any other company, Google has sought to integrate its products to make communication and organization as seamless as possible. For example, the to-do list in Google Tasks is, not surprisingly, symbiotic with Google Calendar, while the latter interfaces with Gmail by scanning the content of a user’s email for the telltale signs of future events and offering to add a calendar entry. For those of you not using Google: Gmail recognizes the contents of an email message and asks the user whether she wants to add the event to her Google Calendar.
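The scan-and-suggest flow is worth making concrete, because the same mechanism generalizes directly from appointments to health data. Below is a toy sketch: the regex, the sample sentence, and the default year are all hypothetical, and Gmail’s real event parser is far more sophisticated — the point is only that free-text scanning plus a structured suggestion is a small amount of code.

```python
import re
from datetime import datetime

# Hypothetical pattern for phrases like "on March 5 at 3pm". Gmail's real
# parser is far more capable; this only illustrates the scan-and-suggest flow.
EVENT_RE = re.compile(
    r"on (January|February|March|April|May|June|July|August|September|"
    r"October|November|December) (\d{1,2}) at (\d{1,2})(am|pm)",
    re.IGNORECASE,
)

def suggest_event(email_body: str, year: int = 2010):
    """Scan free text for a date/time phrase and propose a calendar entry."""
    m = EVENT_RE.search(email_body)
    if not m:
        return None
    month, day, hour, ampm = m.groups()
    hour24 = int(hour) % 12 + (12 if ampm.lower() == "pm" else 0)
    when = datetime.strptime(f"{month} {day} {year}", "%B %d %Y").replace(hour=hour24)
    return {"title": "Suggested event", "start": when}

event = suggest_event("Let's meet for lunch on March 5 at 1pm to go over the results.")
print(event["start"])  # 2010-03-05 13:00:00
```

Now substitute “appointment with Dr. Smith” or a medication name for the lunch date, and a PHR for the calendar, and you have precisely the integration question the next paragraphs raise.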
The simple example above makes it easy to imagine similar features being offered in PHRs like Google Health and Microsoft HealthVault, PHRs provided by entities that either offer social networking tools alongside them or plan to utilize outside data available through other means. As consumers, we must determine how precocious we want our online health persona to be. There is nothing intrinsically wrong with this integration, which certainly offers many benefits, including better information for patients and physicians.
However, both Google and Microsoft are unique in that they are introducing personal health records to users who have already ceded to them an extraordinary amount of highly personal information. This raises interesting questions that will test our willingness to integrate our social network with our health identity. For example, how should Google Wave, Google’s new hybrid email/chat service, interface with Google Health? What status should a physician-patient conversation thread on Google Wave or Google Buzz be given: is it more like a health record or a phone conversation? Would it be acceptable for Google Health to utilize health-related information it recognizes within your Gmail messages? Even though Google has refrained from displaying targeted ads within Google Health, would the reverse be acceptable, whereby Gmail advertisements are determined based on Google Health data? Would it be inappropriate for Google Health to use information about your newly diagnosed diseases to connect you to health-related social networks such as PatientsLikeMe?
Users are likely to forget about Google Buzz’s initial oversights, especially in the short-attention-span sphere that is the Internet. That is fine, so long as changes are made to address such glaring issues. We must, however, ensure that we tackle the much more difficult question of what limits to place on the subtle, yet no less powerful, forces that are shaping our increasingly digitized and integrated online persona. For many of us, the personality of our digital health doppelganger is taking shape on our screens and our smartphones. Are we going to like what we see? And perhaps more importantly, will others?