In collaboration with the Bergen County Prosecutor’s Office; 6 NJ/NY CLE credits.
Helen Oscislawski, Privacy Risk Assessments and Privacy Challenges
Helen Oscislawski is the founder of Oscislawski, LLC in Princeton. She provides legal guidance on HIPAA, HITECH, state privacy laws, electronic health information exchanges and health information technology to HIEs, RHIOs and ACOs, and counsels other healthcare clients in various matters.
Ms. Oscislawski was appointed by Governor Jon Corzine in 2008 to the New Jersey Health Information Technology Commission (NJHITC) and was reappointed to the NJHITC by Governor Chris Christie in 2010; she also served as Chair of its Privacy and Security Committee for the NJHIT Coordinator. She is the primary author of the Update to Privacy and Security Compliance Manual, developed for the New Jersey Hospital Association, and, most recently, she has developed and authored several editions of the HIPAA-HITECH Helpbook, a manual combining tools and sample forms that address HITECH changes, state law and other considerations, Meaningful Use, and Health Information Exchanges.
Before founding Oscislawski, LLC, Ms. Oscislawski was a healthcare attorney at Fox Rothschild in Princeton, New Jersey, where she counseled healthcare clients on a wide range of legal matters. She received her BA from Rutgers University, Douglass College and her JD from Rutgers School of Law.
Frank Pasquale, Professor of Law, Seton Hall Law School, The Past, Present and Future of Health Privacy
Professor Frank Pasquale is the Schering-Plough Professor in Health Care Regulation and Enforcement at Seton Hall Law School. Professor Pasquale has taught information and health law at Seton Hall since 2004. He has published over 20 scholarly articles. His research agenda focuses on challenges posed to information law by rapidly changing technology, particularly in the health care, internet, and finance industries.
Professor Pasquale is an Affiliate Fellow of Yale Law School’s Information Society Project. He has been named to the Advisory Board of the Electronic Privacy Information Center. He has served on the executive board of the Health Law Section of the American Association of Law Schools (AALS), and has served as chair of the AALS Section on Privacy and Defamation.
Professor Pasquale received his BA from Harvard University (summa cum laude), his M.Phil. from Oxford University, and his JD from Yale Law School.
Jaime S. Pego, Director, Healthcare Advisory Services, KPMG LLP, (along with Joy Pritts, Mark Swearingen, and Frank Pasquale, Moderator) Panel Discussion: The Practical Steps Necessary to Promote Privacy and Cybersecurity in Modern Healthcare Organizations
Jaime S. Pego is a Director in the Short Hills, New Jersey, office of KPMG LLP’s Healthcare Advisory Services Practice and serves as the firm’s National HIPAA Privacy Director. She has substantial experience in healthcare regulatory compliance and healthcare-related advisory services.
Ms. Pego works with a variety of healthcare clients to identify and prevent compliance risks and to comply with federal and state regulations. Her work for KPMG includes serving as lead director for OCR HIPAA audits and acting as Privacy Lead for KPMG’s national HIPAA service line, assisting covered entities and business associates with HIPAA compliance. She has conducted internal investigations on a variety of topics, including fraud and abuse, HIPAA violations, and other legal and regulatory matters, and has researched and developed compliance policies for institutions in the areas of gifting under the Anti-Kickback Statute and Stark Law, the DRA, HIPAA, EMTALA, and others. She participates in the KPMG National HIPAA working group to develop tools and methodologies for client needs, and conducts and manages ICD-10 Impact Assessments at a variety of healthcare organizations to help identify gaps in ICD-10 readiness. She has also served as the firm’s lead manager for health care reform legislative analysis and research.
Prior to coming to KPMG, Ms. Pego was a Local Compliance Officer at a teaching hospital and outpatient center for one of New Jersey’s largest health care systems and has worked with some of the country’s leading health systems. She received her BA from American University and her JD from Seton Hall University School of Law, with a Concentration in Health Law, and is Certified in Healthcare Compliance (CHC) by the Health Care Compliance Association (HCCA).
Joy Pritts, Chief Privacy Officer, ONC, HHS, Meaningful Use Regulations: What Providers Need To Know To Comply
Joy Pritts joined the Office of the National Coordinator for Health Information Technology (ONC), Department of Health & Human Services in February 2010 as its first Chief Privacy Officer. Ms. Pritts provides critical advice to the Secretary and the National Coordinator in developing and implementing ONC’s privacy and security programs under HITECH. She works closely with the Office for Civil Rights and other operating divisions of HHS, as well as with other government agencies to help ensure a coordinated approach to key privacy and security issues.
Prior to joining ONC, Ms. Pritts held a joint appointment as a Senior Scholar with the O’Neill Institute for National and Global Health Law and as a Research Associate Professor with the Health Policy Institute, Georgetown University. She has an extensive background in confidentiality laws including the HIPAA Privacy Rule, federal alcohol and substance abuse treatment confidentiality laws, the Common Rule governing federally funded research, and state health information privacy laws.
Ms. Pritts received her BA from Oberlin College and her JD from Case Western Reserve University.
Anna Spencer, Esq., Sidley Austin, LLP, Data Breaches/Data Breach Notification Requirements and the Need for Encryption
Anna Spencer is a partner in Sidley Austin’s Washington, D.C. office whose practice focuses on health care. Ms. Spencer primarily works on matters involving the privacy and security of health information and is the firm’s global coordinator for health information privacy. She regularly counsels a broad range of clients on healthcare information privacy and security issues, including HIPAA and HITECH compliance, and has significant experience investigating and responding to data breaches and information security incidents. She has represented clients in connection with data breach reporting obligations under the HITECH regulations for breaches of protected health information and has defended health care providers in investigations initiated by the Office for Civil Rights, Department of Health and Human Services.
On behalf of covered entities and entities that qualify as HIPAA business associates, Ms. Spencer has developed multiple HIPAA privacy and security compliance and training programs. She has negotiated hundreds of Business Associate Agreements on behalf of various clients.
Ms. Spencer has spoken on privacy/security matters on behalf of numerous groups such as BNA and the American Conference Institute. She has authored a variety of articles on privacy/security issues, Medicare coverage, and fraud and abuse. She is currently authoring a book for BNA on health information privacy. Ms. Spencer received her BA from Sewanee and her JD from Vanderbilt University School of Law.
Mark Swearingen, Esq., Hall, Render, Killian, Heath & Lyman, PC, HIPAA and HITECH Trends (Enforcement and Otherwise)
Mark Swearingen coordinates the HIPAA practice and provides counsel on health information privacy and security matters such as breach response and notification and the creation, use, disclosure, retention and destruction of medical records and other health information at the Indianapolis law firm, Hall, Render, Killian, Heath & Lyman, P.C. His counsel to clients also includes a variety of health care topics related to regulatory compliance, physician and clinical services contracting, risk management and Independent Review Organization services. He has provided such services to a broad spectrum of health system, hospital, physician practice, diagnostic imaging center, ambulatory surgical center and long-term care facility clients.
Mr. Swearingen has spoken and written nationally and regionally on numerous topics, including antitrust, electronic medical records and health information privacy and confidentiality. He is an adjunct professor of a course in Law and Medicine at the Indiana University School of Informatics at IUPUI.
Mr. Swearingen received his BA from Indiana University and his JD from Seton Hall Law School.
Health information law is a very exciting field. Lawyers, doctors, and start-ups are re-thinking health care as an information industry. I’ll be speaking on privacy and fair data practices at an upcoming conference. The relationships between privacy, “big data,” and trade secrecy will deserve a great deal of attention in coming years.
Software-based automation has raised living standards dramatically. It makes factories more efficient, renders vast amounts of information accessible, and daily improves quality of life in barely noticed ways. To realize these types of advances in health care, government and NGOs have begun to catalyze better data collection, retention, and analysis. Life sciences companies need to report more data on drugs and devices. Hospitals and doctors are incentivized to use electronic health records via stimulus funding and rulemaking based on the HITECH Act’s meaningful use and certification requirements.
How will traditional intellectual property laws interact with these initiatives? Will the increasing need for cooperation and sharing of information alter the landscape of trade secrecy and other IP protections that have often siloed health data? Will providers find alternative funding sources for the collection, retention, and analysis of data, as some traditional IP protections appear increasingly outdated in a world of “big data” and market-driven transparency?
Medical privacy law has focused on assuring the privacy, security, and accuracy of medical data. The post-ACA landscape will include more concern about balancing privacy, innovation, access, and cost-control. Advanced information technology has raised a number of new questions. Beyond HIPAA and HITECH regulation, consumer protection law plays an important role in these fields. (For example, the FTC recently required firms that “score” the health status of individuals based on their pharmacy records to disclose these records to scored individuals.)
Patients are opting to personalize their health records with the help of cloud computing firms; what law governs this digital migration? There is increasing concern about the role of “incidental findings” in medical research and practice; how will regulators and professional groups address them? When employers demand access to employee health records, in what ways can they use them to profile the employee?
We also need to examine the legal aspects of data portability, integrity, and accuracy. When two health records conflict, which takes priority? What is “meaningful use” of an electronic health records system, and how will regulators and vendors assure interoperability between systems? The course will also cover innovators’ efforts to protect their health data systems using contracts, technology, trade secrecy, patents, and copyright, and “improvers’” efforts to circumvent those legal and technological barriers to openness.
Finally, what are pharmaceutical companies’ past and present strategies regarding the disclosure of their research, including non-publication of adverse results and ghostwriting of positive outcomes? Will a “reproducible research” movement, popular in the hard sciences, reach pharmaceutical firms? Insurer data will also be a target of reformers (including trade-secret protection of prices paid to hospitals, conflicts over the interpretation of disclosure requirements in the ACA, and state regulation of insurer-run doctor-rating sites). Quality improvement and pilot programs will need good provider and insurer data; how will we ensure they have them?
If one jumbo jet crashed in the US each day for a week, we’d expect the FAA to shut down the industry until the problem was figured out. But in our health care system, roughly 250 people die each day due to preventable error. A vice president at a health care quality company says that “If we could focus our efforts on just four key areas — failure to rescue, bed sores, postoperative sepsis, and postoperative pulmonary embolism — and reduce these incidents by just 20 percent, we could save 39,000 people from dying every year.” The aviation analogy has caught on in the system, as patient safety advocate Lucian Leape noted in his classic 1994 JAMA article, Error in Medicine. Leape notes that airlines have become far safer by adopting redundant system designs, standardized procedures, checklists, rigid and frequently reinforced certification and testing of pilots, and extensive reporting systems. Advocates like Leape and Peter Pronovost have been advocating for adoption of similar methods in health care for some time, and have scored some remarkable successes.
But the aviation model has its critics. The very thoughtful finance blogger Ashwin Parameswaran argues that, “by protecting system performance against single faults, redundancies allow the latent buildup of multiple faults.” While human expertise depends on an intuitive grasp, or mapping, of a situation, perhaps built up over decades of experience, technologized control systems privilege algorithms that are supposed to aggregate the best that has been thought and calculated. The technology is supposed to be the distilled essence of the insights of thousands, fixed in software. But the persons operating in the midst of it are denied the feedback that is a cornerstone of intuitive learning. Parameswaran offers several passages from James Reason’s book Human Error to document the resulting tension between our ability to accurately model systems and an intuitive understanding of them. Reason states:
[C]omplex, tightly-coupled and highly defended systems have become increasingly opaque to the people who manage, maintain and operate them. This opacity has two aspects: not knowing what is happening and not understanding what the system can do. As we have seen, automation has wrought a fundamental change in the roles people play within certain high-risk technologies. Instead of having ‘hands on’ contact with the process, people have been promoted “to higher-level supervisory tasks and to long-term maintenance and planning tasks.” In all cases, these are far removed from the immediate processing. What direct information they have is filtered through the computer-based interface. And, as many accidents have demonstrated, they often cannot find what they need to know while, at the same time, being deluged with information they do not want nor know how to interpret.
A stark choice emerges. We can either double down on redundant, tech-driven systems, or we can try to restore smaller scale scenarios where human judgment actually stands a chance of comprehending the situation. We will need to begin to recognize this regulatory apparatus as a “process of integrating human intelligence with artificial intelligence.” (For more on that front, the recent “We, Robot” conference at U. Miami is also of great interest.)
Another recent story emphasized the importance of filters in an era of information overload, and the need to develop better ways of processing complex information. Kerry Grens’s article “Data Diving” emphasizes that “what lies untapped beneath the surface of published clinical trial analyses could rock the world of independent review.”
[F]or the most part, [analysts] rely simply on publications in peer-reviewed journals. Such reviews are valuable to clinicians and health agencies for recommending treatment. But as several recent studies illustrate, they can be grossly limited and misleading. . . . [There is] an entire world of data that never sees the light of publication. “I have an evidence crisis,” [says Tom Jefferson of the Cochrane Collaboration]. “I’m not sure what to make of what I see in journals.” He offers an example: one publication of a Tamiflu trial was seven pages long. The corresponding clinical study report was 8,545 pages. . . .
Clinical study reports . . . are the most comprehensive descriptions of trials’ methodology and results . . . . They include details that might not make it into a published paper, such as the composition of the placebo used, the original protocol and any deviations from it, and descriptions of all the measures that were collected. But even clinical study reports include some level of synthesis. At the finest level of resolution are the raw, unabridged, patient-level data. Getting access to either set of results, outside of being trial sponsors or drug regulators, is a rarity. Robert Gibbons, the director of the Center for Health Statistics at the University of Chicago, had never seen a reanalysis of raw data by an independent team until a few years ago, when he himself was staring at the full results from Eli Lilly’s clinical trials of the blockbuster antidepressant Prozac.
There will be a growing imperative to open up all of the data as concerns about the reliability of publications continue to grow.
[U]nlike in other countries, sellers of health-care services in America have considerable power to set prices, and so they set them quite high. Two of the five most profitable industries in the United States — the pharmaceuticals industry and the medical device industry — sell health care. With margins of almost 20 percent, they beat out even the financial sector for sheer profitability. The players sitting across the table from them — the health insurers — are not so profitable. In 2009, their profit margins were a mere 2.2 percent. That’s a signal that the sellers have the upper hand over the buyers.
I don’t agree that insurers are being bullied as buyers. If we’re going to bring up the financial sector, a better analogy would compare pay differentials between revenue-generating traders (providers) and the back office clerical and IT workers (insurers), rather than assume some common baseline of industrial profitability. The health care providers actually (try to) improve health; the insurers (are supposed to) support that primary effort. But overall, the story Klein tells here is broadly consistent with many other explanations of high prices in US health care.
What will solve that problem? Probably not health care reform, though regulators will struggle mightily to impose some discipline via IPAB and other entities. Followers of Clayton Christensen think pure technological innovation may wildly succeed where an oft-captured regulatory system is failing. Farhad Manjoo provides some empirical support for their hopes:
As computers get better, we’ll need fewer humans across a range of specialties. Look at mammography: One of the main ways radiologists can improve their breast diagnoses is by “double reading.” When two radiologists independently examine a collection of mammograms, the number of cancers detected increases substantially. A study published in 2008, however, found that a radiologist who uses ImageChecker can skip the second reading: A computer and a human are just as good as two humans.
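The intuition behind double reading can be sketched with a simple probability model. This is only an illustration under an independence assumption (real readers’ errors are correlated, and the sensitivities below are made-up numbers, not figures from the cited study):

```python
def combined_sensitivity(s1: float, s2: float) -> float:
    """Probability that at least one of two independent readers detects a cancer.

    A miss requires both readers to miss, so detection = 1 - (miss1 * miss2).
    """
    return 1 - (1 - s1) * (1 - s2)

# Two hypothetical readers, each catching 85% of cancers on their own:
print(round(combined_sensitivity(0.85, 0.85), 4))  # -> 0.9775
```

The same formula suggests why computer-aided detection can substitute for the second human reader: what matters is not whether the software is better than a radiologist, but whether its misses are sufficiently uncorrelated with the human’s.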
[T]he doctors who are the juiciest targets for automation might not be the ones you’d expect. They’re specialists . . ., the most highly trained, highly paid people in medicine. It’s precisely these factors that make them vulnerable to machines. By definition, specialists focus on narrow slices of medicine. They spend their days worrying over a single region of the body, and the most specialized doctors will dedicate themselves to just one or two types of procedures. Robots, too, are great specialists. They excel at doing one thing repeatedly, and when they focus, they can achieve near perfection. At some point—and probably faster than we expect—they won’t need any human supervision at all.
Robots and automation are already taking on prominent roles in wars, factories, and political campaigns. The type of pattern recognition common to some medical specialties may come naturally to them, particularly as electronic medical records and digitization take hold. Of course, an all-purpose “physician robot” would be a much harder endeavor. In the context of a discussion of rationing, one health law textbook suggests that a mapping of possible interventions “would require rigorous scientific information on each of the almost 10,000 diagnostic entries in the International Classification of Diseases (9th ed.) (known as ‘‘ICD-9’’) and for each of the 10,000 medical interventions listed in the AMA’s Common Procedural Terminology (known as ‘‘CPT’’ codes).” ICD-10 has about 7 times more codes than ICD-9. But just as chess was once considered a field impenetrable to artificial intelligence, and now has been mastered by some computers, so too might medicine itself become subject to the exponential growth in information processing characteristic of mature digitized industries. It’s becoming clear that “the variety of jobs that computers can do is multiplying as programmers teach them to deal with tone and linguistic ambiguity.”
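The scale problem in that diagnosis-to-intervention mapping is easy to make concrete. A back-of-the-envelope sketch, using the approximate counts cited above rather than exact code-set sizes:

```python
icd9_diagnoses = 10_000     # approximate ICD-9 diagnostic entries cited
cpt_interventions = 10_000  # approximate CPT-listed interventions cited

# A full diagnosis-by-intervention evidence grid:
pairs_icd9 = icd9_diagnoses * cpt_interventions
print(pairs_icd9)  # -> 100000000

# ICD-10 multiplies the diagnostic side roughly sevenfold:
pairs_icd10 = (icd9_diagnoses * 7) * cpt_interventions
print(pairs_icd10)  # -> 700000000
```

Most cells in that grid are clinically nonsensical, but even a small fraction of a hundred million combinations dwarfs what any human review panel could evaluate rigorously, which is part of the case for machine help.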
So will technology save us from ever-increasing health care costs? I’m not optimistic, because politics and economics are a constraint on all these developments. The same patterns of patronage and tribute that make comparative effectiveness research such a hard sell in the US may well restrain technology adoption. Just as specialists dominate the RUC, they can probably find ways to slow the adoption of technological substitutes for their hard-won expertise. As Umair Haque has observed, “In a neofeudal polity, patronage replaces meritocracy. ‘Success’ for an organization, coalition, or person is to become a client of a powerful patron, pledging your services (soft and hard, informal and formal), in perpetual alignment with the patron’s interests.” We’ll see many physicians in coming years invest time and effort in technological innovation, and others devoted to deterring its spread in order to protect current income streams.
At this point, you’re probably expecting me to side decisively with the technologists as heroes. But I can’t do so. I don’t buy an economic model premised on incentivizing innovation by setting off a race among radiologists (or, more realistically, financiers) to be the first to patent the machine that can replace all the other radiologists. Rather, I think the real foundation for radically productive innovation in this and other fields is a baseline of social support and commitment to retraining for professionals who could be displaced by the technology. I’m not saying, “pay radiologists what they make now, forever.” Rather, I’m trying to articulate a variant of a “guaranteed basic income” argument for those who invest heavily in learning about science, technology, and medicine. This baseline of educated users, improvers, and evangelizers of technology is the foundation of any venturesome economy. As Amar Bhide has explained,
[T]he different forms of innovation interact in complicated ways, and it is these interconnected, multilevel advances that create economic value. . . . To state the proposition in the terminology of cyberspace, innovations that sustain modern prosperity have a variety of forms and are developed and used through a massively multiplayer, multilevel, and multiperiod game.
We may well find that in decades to come, machines can do the jobs of radiologists and pathologists much better than people can. But if that transition occurs, it’s important to recognize how much current specialists invested to attain their skills, how hard they presently work to maintain a high level of medical skill in this country, and how future innovations may well dry up if people feel that those on STEM career paths are utterly vulnerable to being “kicked to the curb” once a machine does their job slightly better. Not only is “sole inventorship” a myth; we often fail to appreciate the complex educational and service apparatus necessary for innovation to take place. As Alperovitz and Daly have shown, any system that grants 93% of its gains to 1% of the people is an ongoing instruction in the economic futility of the efforts of the vast majority of its citizens.
Hospital readmissions for chronic diseases such as asthma, congestive heart failure, and diabetes are estimated to account for over 80% of hospital inpatient stays. In an effort to reduce these admissions and consequently lower healthcare costs, AT&T and Intuitive Health have collaborated to pilot a home-based remote patient monitoring solution that would allow patients to spend more time at home engaging in their own care, rather than with healthcare providers at medical facilities. Using wireless connectivity provided by AT&T, the system sends data from a patient’s unobtrusive personal health device to a secure software platform integrated into the health ecosystem through Intuitive Health’s technology, with emphasis placed on the confidentiality of the transmitted personal information.
“Innovation is desperately needed outside the four walls of the hospital,” said Eric Rock, CEO and Founder of Intuitive Health. “In order to increase our nation’s quality of care and gain control of our healthcare spending, patients of all ages and technical ability must be given intuitive tools to improve their own health, while remaining engaged and monitored by their caregivers remotely.”
In its April 2010 position paper, “Technologies for Remote Patient Monitoring in Older Adults,” the Center for Technology and Aging hypothesized that the U.S. health care system could reduce costs by nearly $200 billion over the next 25 years if remote monitoring tools were utilized for chronic diseases. To be sure, such figures are not easily verified; the number and types of patients who will choose such treatment cannot be easily predicted.
The collaboration between AT&T and Intuitive Health is not the first of its kind, and with the increasing popularity of smartphones, it is reasonable to anticipate that mobile technology will play a role in the rise of remote patient monitoring services. It is, perhaps, worthwhile to revisit Michael Ricciardelli’s related post written three years ago as a way to evaluate the role technology has played, and may continue to play, in areas of health reform.
Last year I published a piece called “Beyond Innovation and Competition,” questioning the dominance of those values. Economists celebrate innovation and competition as the main source of future growth. Innovation has become the central focus of Internet law and policy. While leading commentators sharply divide on the best way to promote innovation, they routinely elevate its importance. Business writers have celebrated search engines, social networks, and tech startups as model corporations, bringing creative destruction and “disruptive innovation” in their wake. Maximum innovation is the goal, and competition is billed as the best way of achieving it. Players in the vast and dynamic tech marketplace are supposed to constantly strive to innovate in order to attract consumers away from rivals.
In the piece, I explain how both competition and innovation can be as destructive as they are constructive. There are many social values (including privacy, transparency, predictability, and stability), and companies can compete for profits in ways that erode those values. In an era of inequality and hall-of-mirrors stock market valuations, innovations of marginal or negative impact on society at large can be vastly overvalued by a stampede of fickle investors.
The shortcomings of the innovation and competition story also play out in health information technology. Stimulus legislation in 2009 provided many carrots and sticks for doctors to digitize their recordkeeping systems, ranging from bonuses now to reimbursement haircuts later this decade if they fail to implement the technology. Congress structured the incentives to encourage a competitive and innovative marketplace in health information technology. But many doctors are shying away from implementation, in part because they fear that the fast and loose ethics of the market can’t mesh with a medical culture of constant commitment to quality care.
Susan Jaffe’s article for the Center for Public Integrity examines doctors’ fears about adopting any given software suite. According to Jaffe, “570 different electronic health systems certified by private organizations for non-hospital settings may be used to qualify for the” stimulus funds. The long-term consequences of the choice make the jam-shopping examples in Barry Schwartz’s book The Paradox of Choice seem quaint:
The systems can vary in appearance, content, organization and special features. Some can be customized by users in different ways, at no cost or some cost, or not at all. Some are compatible with other systems now, eventually or, some critics say, maybe never. . . . The costs of the systems remain daunting, despite the bonuses, particularly in areas that have been hit hard by an ailing economy.
The price tag varies widely depending on the type and size of the medical practice, whether new computers are purchased and the extent of customization, among other things. Software alone can cost from $2,000 to $10,000 per doctor. All told, the cost jumps to roughly $20,000 per doctor, according to a regional extension center consultant who advises physicians in northeast Ohio. On top of that, manufacturers charge hefty annual fees for technical support and periodic upgrades that together can amount to about 35 percent of the upfront costs. The systems are priced in a way that does not make comparison shopping “easy or necessarily valid,” said Dottie Howe, a spokeswoman for the Ohio regional extension center. There is no basic price because each company offers different components, features, options, and level of technical support. . . .
Most manufacturers will also charge the doctors to move the information in their current system to the new one. There could be extra [ongoing, monthly] charges to connect to other systems too.
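The figures quoted above compound quickly. A rough sketch of five-year cost of ownership per doctor, using the article’s ballpark numbers (the $20,000 all-in upfront figure and the 35% annual support rate are the consultant’s approximations, not vendor quotes, and migration and connection charges are left out):

```python
def total_cost_of_ownership(upfront: float, support_rate: float, years: int) -> float:
    """Upfront purchase plus recurring annual support/upgrade fees."""
    return upfront + upfront * support_rate * years

# Roughly $20,000 upfront per doctor, with ~35% of that again each year:
print(total_cost_of_ownership(20_000, 0.35, 5))  # about $55,000 over five years
```

On those numbers, the recurring fees alone exceed the original purchase price within three years, which helps explain why the doctors in Jaffe’s story fixate on long-term vendor viability rather than sticker price.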
Doctors have also been burned by sharp operators that emphasize slick salesmanship over solid service:
[T]he Southwest Family Physicians group is worried . . . They bought an electronic health record system five years ago that is now nearly obsolete. The manufacturer was taken over by another company that provides minimal technical support . . . “The salesman said ‘you’re buying a Cadillac, this is going to be the greatest thing,’ ” [one doctor] recalled. But that system can’t display an X-Ray image or send a prescription electronically to a pharmacy. “We’ve got the Model T Ford,” he said.
It does appear that regional extension centers are doing some work to keep pricing reasonable. Jaffe’s article focuses on Ohio, where five “preferred vendors” “agreed to charge prices ‘as good as or better than’ prices offered to other regional extension centers, to provide onsite assistance when a practice turns on its electronic health record system for the first time, offer technical support for at least six years, and limit annual cost increases for continuing technical support, among other things.” But consider the bizarrely proprietary nature of pricing data:
Whether the five preferred vendors offer a better deal than their non-preferred competitors is not known because the state regional extension center doesn’t have pricing information from non-preferred vendors, said Howe, the spokeswoman for the state’s regional extension center. Pricing from the preferred vendors are confidential, she said. And despite their preferred status, the five companies do not guarantee that eligible health care providers who purchase their systems will receive the government’s bonus payments.
I have discussed the troubling degree of secrecy in health care before, and I’m very sad to see it persist here. The doctors in Jaffe’s story are making reasonable demands: to understand the nature of the commitment they are making, to avoid big financial losses, and not to be burned by fly-by-night operators attracted only by government subsidy money. They want to ensure that the basic health care values of access, cost control, and quality are reflected in the software they use.
We are seeing the opening stages of a battle between a medical sector committed to maintaining its own autonomy and traditions, and a tech sector that wants to commoditize health data in as standardized a form as futures markets homogenized corn grades, or as credit scores tranched residential mortgage-backed securities. Commenting on the demise of Google Health, an informatics expert said that “Google is unwilling, for perfectly good business reasons, to engage in block-by-block market solutions to health-care institutions one by one, and expecting patients to actually do data entry is not a scalable and workable solution.” To be sure, the company can’t expect to make the same profit margins in the health sector as it does in the online ad business. But the “instant millions” ethos of Silicon Valley doesn’t fit well with a sector where we are in principle committed to serving everyone, regardless of ability to pay.
Economist John Van Reenen has observed that the US has a particularly innovative economy in part because our markets are so good at crushing badly run firms. It’s probably good that garden equipment suppliers, toothpaste makers, and pie bakers know they can be out of business in a month or two if they’re “off their game” for a short time. But if I had just entrusted three years of medical records to a vendor who suddenly went out of business, I’d take little comfort in the idea that a marginally better competitor had knocked it out of the market. The transition to a new vendor can be slow and costly—doctors in Jaffe’s story speak of seeing one-third to one-half fewer patients over weeks or months as they learn a new system.
At a Yale SOM Health Care conference in 2009, the Chief Medical Officer of a major player in the field once remarked to me that choosing an HIT vendor is “like a marriage—you don’t end the relationship lightly.” I first thought that remark was self-serving. But the more one examines the HIT field, the more important it appears to get standard recordkeeping, support capabilities, and interoperability right at the outset, rather than leaving doctors to negotiate the wreckage of several generations of battling systems. Think about how chaotic online music sales seemed before iTunes. Perhaps Apple (whose iPads are already beloved by many docs) is going to bring a swift and highly profitable order to this field, too. I hope the ONC and other decisionmakers will well-regulate whatever behemoth eventually emerges, vindicating the public values that competition and innovation are unlikely to promote.
Photo credits to Aleksandar Šušnjar, Jakub Halun and loki11.
This month, the United Nations (UN) Human Rights Council recognized access to the Internet as a human right. The report was written by UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, Frank La Rue, and it separately considers access to Internet content and access to the infrastructure required for Internet access. The report cites over 2 billion Internet users worldwide and notes that the Internet has become a key means through which individuals can exercise their right to freedom of opinion and expression. La Rue concludes that “there should be as little restriction as possible to the flow of information via the Internet, except in few, exceptional, and limited circumstances prescribed by international human rights law.”
The report seems motivated by recent episodes of political unrest such as the Arab Spring uprisings. La Rue states that the Internet is “one of the most powerful instruments of the 21st century for increasing transparency in the conduct of the powerful, access to information, and for facilitating active citizen participation in building democratic societies.” He notes that countries have been increasingly censoring online information through 1) arbitrary blocking or filtering of content, 2) criminalization of legitimate expression, 3) imposition of intermediary liability, 4) disconnecting users from Internet access, and 5) inadequate protection of the right to privacy and data protection. La Rue recognizes some legitimate reasons to restrict Internet access, as in the case of cyberattacks, but focuses on how countries often abuse their power and infringe on the rights of their citizens:
In many instances, States restrict, control, manipulate and censor content disseminated via the Internet without any legal basis, or on the basis of broad and ambiguous laws, without justifying the purpose of such actions… Such actions are clearly incompatible with States’ obligations under international human rights law, and often create a broader “chilling effect” on the right to freedom of opinion and expression.
La Rue specifically notes his concern with the “three-strikes” law in France and the UK’s Digital Economy Act of 2010. Both of these proposals are anti-piracy measures that would impose penalties against Internet users for illegal file sharing and violation of intellectual property rights. The end result could be suspension of Internet service if copyright infringers disregard warnings. La Rue considers
Cutting off users from Internet access, regardless of the justification provided, including on the grounds of violating intellectual property rights law, to be disproportionate and thus a violation of article 19, paragraph 3, of the International Covenant on Civil and Political Rights.
Article 19 of the ICCPR concerns the right to freedom of expression.
The fundamental human rights doctrine, the Universal Declaration of Human Rights (UDHR), was penned in 1948 just after the end of WWII. In part based on Franklin Delano Roosevelt’s Four Freedoms, the document was largely a response to the atrocities seen in the war. Article 19 of the UDHR states that
“Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”
The drafters left the definition of ‘media’ open in anticipation of new technologies, and the Internet and its extraordinary proliferation in recent years is the most relevant form of media in our time.
La Rue, however, does not just depend on this as a basis for his claim that removing Internet access is a deprivation of the basic human right of freedom of expression. He elaborates on how the Internet facilitates the realization of other human rights:
The right to freedom of opinion and expression is as much a fundamental right on its own accord as it is an “enabler” of other rights, including economic, social and cultural rights, such as the right to education and the right to take part in cultural life and to enjoy the benefits of scientific progress and its applications, as well as civil and political rights, such as the rights to freedom of association and assembly. Thus, by acting as a catalyst for individuals to exercise their right to freedom of opinion and expression, the Internet also facilitates the realization of a range of other human rights.
But even if Internet access constitutes a human right, many countries lack access to basic commodities such as electricity, let alone the necessary infrastructure and technologies to access the Internet. La Rue rests on the positive obligation of countries to work towards promoting or facilitating freedom of expression. He encourages countries to develop a “concrete and effective policy… to make the Internet widely available, accessible and affordable to all segments of population.”
La Rue’s report is the first recommendation in a series of negotiations on how to adopt access to the Internet as a fundamental right. As La Rue concludes, “given that the Internet has become an indispensable tool for realizing a range of human rights, combating inequality, and accelerating development and human progress, ensuring universal access to the Internet should be a priority for all States.”
La Rue is right to understand the Internet as a means to effectuate development. The implications for healthcare can, of course, be staggering. An Internet connection is no substitute for bread or medicine, but that connection makes medical techniques and public health information widely available and makes “remoteness” a somewhat antiquated concept. If global health is to substantially improve, Internet access will ultimately be key.
Did you know that buying generics instead of brands could hurt your credit? Or that a subscription to Hang Gliding Monthly could scare off life insurers? Or that certain employers’ access to electronic health records could lead them to classify you as “high-risk” or “high-cost”?
In all these cases, firms use “predictive analytics” to maximize profits. Consumers are the guinea pigs for these new “sciences” of the human. As Scott Peppet argues, it becomes more difficult to opt out of analytics systems as more people use them. What type of world are they leading us to?
Credit Analytics: Should Frugality be Punished?
One credit analytics company determined that buyers of cheap automotive oil were “much more likely to miss a credit-card payment” than those who paid for a brand-name oil. Spending on therapy sessions may also be a red flag. Appearing too frugal, too anxious, too spendthrift—all might lead to higher interest rates or lower credit limits. One R&D head at a credit analytics firm bragged that they consider over 300 characteristics to discover delinquency risk. He was not nearly as forthcoming about how the data is aggregated. Analyzing millions of transactions, the companies observe customers as a gardener might observe a rose garden: weeding out unpromising specimens, and giving a boost to incipient flourishers.
Many have complained about inaccuracy in these new forms of profiling, and consumers’ inability to review and correct digital dossiers collected about them. But let’s just assume that this profiling is correct, and choosing a generic really does correlate with increased credit risk. What’s the social value of this discovery? Maybe credit card companies can reduce rates infinitesimally (and increase profits) by burdening the generic buyers. But I’d be willing to bet that, for every few people whose generic purchases indicate financial trouble, there is another shopper who’s wisely frugal and increasing her chances of successfully repaying all her loans. It seems very odd to penalize the financially responsible merely because they happen to engage in an activity shared by the distressed.
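The argument above can be made concrete with a toy simulation (purely illustrative; the cohort sizes and default rates are invented assumptions, not industry figures). Suppose generic-oil buyers are a mix of financially distressed shoppers and wisely frugal ones, while the lender observes only the purchase, not the motive. The purchase signal then “works” in aggregate even though it penalizes some of the safest borrowers in the pool:

```python
import random

random.seed(0)

# Toy population: two kinds of generic-oil buyers, plus brand buyers.
# "Distressed" generic buyers default often; "frugal" ones rarely do.
# The lender only observes the purchase, not the motive behind it.
population = (
    [{"buys_generic": True, "defaults": random.random() < 0.30} for _ in range(300)]    # distressed
    + [{"buys_generic": True, "defaults": random.random() < 0.03} for _ in range(300)]  # frugal
    + [{"buys_generic": False, "defaults": random.random() < 0.10} for _ in range(400)] # brand buyers
)

def default_rate(people):
    """Fraction of a cohort that defaults."""
    return sum(p["defaults"] for p in people) / len(people)

generic = [p for p in population if p["buys_generic"]]
brand = [p for p in population if not p["buys_generic"]]
frugal = generic[300:]  # the second generic cohort defined above

# The observable correlation is real: generic buyers default more overall...
print(f"generic-buyer default rate: {default_rate(generic):.2f}")
print(f"brand-buyer default rate:   {default_rate(brand):.2f}")

# ...yet the frugal half of the generic cohort is the safest group in the
# whole pool, and a score keyed to the purchase penalizes them anyway.
print(f"frugal generic-buyer default rate: {default_rate(frugal):.2f}")
```

Under these assumed numbers, the generic cohort as a whole defaults more often than brand buyers, so the correlation the analytics firms report would appear in the data — while the frugal subgroup defaults least of anyone, which is exactly the unfairness described above.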
The Dream of the Perfect Profile
Ahh, predictive analysts might reply, you just oversimplify our process. We would never reduce the credit line of someone who purchases generics if that person also, say, has a subscription to Travel and Leisure, or drives a Lexus, or gives over $1,000 a year to the Republican National Committee. They’re not desperate—they’re just careful shoppers. The more information we have, the more fair and accurate we can be. (I can only propose this response, since the industry is so careful about protecting its trade secrets. But this seems like a plausible counterargument.)
Just as free speech advocates often say that the answer to “bad speech” is more or “counter” speech, predictive analysts may argue that the cure for the mistreatment of any given individual is more information about the person’s true motives or opportunities. If privacy advocates are worried that certain surveillance practices will unfairly tarnish the reputation or profile of an individual, the answer is more, not less, information on that person. The more comprehensive a picture that firms can develop of the individual, the better they are able to properly target resources.
Whatever the merits of this approach, it appears to me that it only applies to one dimension of the credit analytics example above. Rewarding “brand buyers,” in general, is not that likely to alter behavior in ways that could seriously undermine someone’s quality of life. But effectively punishing those who seek therapy or marriage counseling creates a different set of concerns, showing once again the ways in which health care decisionmaking needs to be distinct from the Procrustean forces of market pressures.
Stressed by Sickness in the Risk Society
A recent article by Sharona Hoffman illuminates some problems with pervasive use of health data in predictive analytics.
Employers may obtain and process EHRs [electronic health records] for a variety of reasons. Many require applicants who have received employment offers to provide authorizations for release of medical records in order to verify the individuals’ fitness for duty. At times, employers require records for purposes of workers’ compensation claims, reasonable accommodation requests by individuals with disabilities, or Family Medical Leave Act (FMLA) requests. Employers who are self-insured also process employees’ medical data in order to pay insurance claims.
EHRs will likely provide employers with unprecedented amounts of data. . . . Employers or their hired experts may develop complex scoring algorithms based on EHRs to determine which individuals are likely to be high-risk and high-cost workers. . . . Employers with access to EHRs containing a wealth of medical information may be sorely tempted to exclude certain individuals from the workforce because of concerns about the employees’ future productivity, absenteeism, or medical costs. To disguise unlawful conduct, employers may not act immediately to withdraw a job offer or terminate an employee, but rather, decide not to promote an individual with a disability or to select her for a layoff at a later time.
In other words, predictive analytics in health can lead to more “death spirals” for the sick: lost employment, lost insurance due to that lost employment, and future inability to find work due to poor health. Hoffman’s concerns about employers sidestepping relevant regulations were reflected in today’s WSJ article on insurance profiling, too:
[G]iant data-collection firms . . . sort details of online and offline purchases to help categorize people as runners or hikers, dieters or couch potatoes. They scoop up public records such as hunting permits, boat registrations and property transfers. They run surveys designed to coax people to describe their lifestyles and health conditions. Increasingly, some gather online information, including from social-networking sites.
For insurers and data-sellers alike, the new techniques could open up a regulatory can of worms. The information sold by marketing-database firms is lightly regulated. But using it in the life-insurance application process would “raise questions” about whether the data would be subject to the federal Fair Credit Reporting Act, says Rebecca Kuehn of the Federal Trade Commission’s division of privacy and identity protection. The law’s provisions kick in when “adverse action” is taken against a person, such as a decision to deny insurance or increase rates. The law requires that people be notified of any adverse action and be allowed to dispute the accuracy or completeness of data, according to the FTC. Deloitte and the life insurers stress the databases wouldn’t be used to make final decisions about applicants. Rather, the process would simply speed up applications from people who look like good risks.
Many aspects of FCRA have been rendered irrelevant by the all-importance of credit scoring—it’s hard to care too much about one’s ability to “correct” one’s credit report if the only thing that really matters is a score whose calculation only contingently depends on any given piece of information in the report. But I had not heard before Deloitte’s assurance that information would “simply speed up” applications, and not “be used to make final decisions.” Quite the creative lawyering behind that distinction.
Relating the Real and the Digital Body
Dan Solove has written extensively on the “digital person,” and perhaps we can see predictive health analytics as an effort to create a “digital body.” As the WSJ reports, we are reaching a point where online “data can reveal nearly as much about a person as a lab analysis of their bodily fluids.” The least we can ask is for the purveyors of data-driven decisionmaking to be much clearer about how they profile individuals. Moreover, in the case of employment, we should seriously consider expanding disability discrimination laws to prevent employers from stratifying employees based on health data. Profits are important, but they shouldn’t come at the expense of sick people who already have enough problems to contend with. As HHS implements PPACA’s promotion of “wellness programs” at workplaces, they should also try to avoid the “Orwellness” of data-driven health profiling.
X-Posted: Concurring Opinions.
Computational innovation may improve health care by creating stores of data vastly superior to those used by traditional medical research. But before patients and providers “buy in,” they need to know that medical privacy will be respected. We’re a long way from assuring that, but new ideas about the proper distribution and control of data might help build confidence in the system.
William Pewen’s post “Breach Notice: The Struggle for Medical Records Security Continues” is an excellent rundown of recent controversies in the field of electronic medical records (EMR) and health information technology (HIT). As he notes,
Many in Washington have the view that the Health Insurance Portability and Accountability Act (HIPAA) functions as a protective regulatory mechanism in medicine, yet its implementation actually opened the door to compromising the principle of research consent, and in fact codified the use of personal medical data in a wide range of business practices under the guise of permitted “health care operations.” Many patients are not presented with a HIPAA notice but instead are asked to sign a combined notice and waiver that adds consents for a variety of business activities designed to benefit the provider, not the patient. In this climate, patients have been outraged to receive solicitations for purchases ranging from drugs to burial plots, while at the same time receiving care which is too often uncoordinated and unsafe. It is no wonder that many Americans take a circumspect view of health IT.
Privacy law’s consent paradigm means that, generally speaking, data dissemination is not deemed an invasion of privacy if it is consented to. The consent paradigm requires individuals to decide whether or not, at any given time, they wish to protect their privacy. Some of the brightest minds in cyberlaw have focused on innovation designed to enable such self-protection. For instance, interdisciplinary research groups have proposed “personal data vaults” to manage the emanations of sensor networks. Jonathan Zittrain’s article on “privication” proposed that the same technologies used by copyright holders to monitor or stop dissemination of works could be adopted by patients concerned about the unauthorized spread of health information.
If individuals had enough time to manage their personal data the way they manage their checkbooks and gardens, perhaps the consent paradigm would be a good foundation for addressing public concerns about privacy. If applicants could easily bargain with would-be employers over privacy, or patients with hospitals, perhaps we could rely on them to protect their interests. But actual occurrences of such acts of self-assertion and self-protection are rare. Given the frequently abstract benefits that privacy and reputational integrity afford, they are often traded away for competitive economic advantage. This process further erodes societal expectations of privacy.
A collective commitment to privacy is far more valuable than a private, transactional approach that all but guarantees a race to the bottom. If such a collective commitment does not materialize, record systems will only deserve trust if they become as transparent as the patients and research subjects they profile. Given corporate assertion of trade secrecy (and even privacy rights), reciprocal transparency will not be easy to achieve. Nevertheless, repeated breaches, fraud, and data meltdowns in the US should provoke an alliance of socially responsible researchers to lobby the US government to set minimal standards of reciprocal transparency and auditing. Consumers can only trust innovators if they can understand what is being done with data. As we become “transparent citizens” (as Joel Reidenberg puts it), we should demand that the corporate, university, and governmental authors of that trend reciprocate, and become more open about the data they gather.
Fortunately, as a recent presentation by Deborah Peel reminded me, there is significant audit authority built into the recent HITECH act which may curb some abuses. Audits will become increasingly important as a “wild west” of health data is excavated by scrapers, marketers, and other data miners.
Consider, for instance, the following scenario: contributors to the medical website PatientsLikeMe.com found that “Nielsen Co., [a] media-research firm . . . was ‘scraping,’ or copying, every single message off PatientsLikeMe’s private online forums.” Had the virtual break-in not been detected, health attributes connected to usernames (which, in turn, can often be linked to real identities) could have spread into numerous databases. A reciprocal transparency paradigm would require all those harboring health data to hold a certified indication of its legitimate provenance; data could not persist without such certification.
Unforeseen spread of inaccurate or inappropriate health data is not just a problem for those who want to avoid getting solicitations for burial plots after a sensitive appointment. Given law enforcement exceptions to medical privacy laws and regulations, it should come as little surprise that the government claims that a 2005 law authorizes it to monitor and record all prescription drug use by all citizens via so-called “Prescription Drug Monitoring Programs.” Such programs may just be the tip of an iceberg of new domestic intelligence programs that rely on private companies to act as “big brother’s little helpers.”
Whenever health data is fed into an evaluative profile of an individual, there should be safeguards in place to assure that the data is accurate, and that the resulting profile is, if at all possible, not used to harm or disadvantage the individual. Without assurances like these, we can count on continued resistance to the development of health data infrastructures.
HIPAA, The HITECH Act, and How Google May Still Be Able to Distribute, and Profit From, Your Personal Health Info
Below I will explore what seems to be a gaping hole in the HITECH Act. However, as with any new legislation, it is often necessary to reexamine the laws that preceded it, which in this case is HIPAA. This is particularly true given that the HITECH Act does not replace HIPAA. Rather, it provides–amongst other things–additional security and privacy safeguards with respect to health information. To that extent, at least a cursory reexamination of HIPAA is required before understanding HITECH and the importance of comprehensive legislation.
HIPAA was a product of the 1990s–an era triggering nostalgic memories of grunge music for some, and the (in)famous Macarena dance for others. For a large part of this period, the Internet was accessed by a handful of tech-savvy individuals who dialed into services like CompuServe, Prodigy, and AOL. It was during this transition that Congress felt the need to make health insurance more portable, as well as to standardize the variegated electronic systems that were conducting nonstandard healthcare-related transactions. There was a concomitant concern that health information needed better protection. Thus, in 1996 Congress adopted the Health Insurance Portability and Accountability Act (HIPAA), giving HHS the responsibility to enforce it. However, the regulations enforcing privacy and security of health information would not be implemented until years later.
HIPAA’s Privacy Rule, which describes the appropriate use and disclosure of certain health information, came into force on April 14, 2001, was updated in 2002, and required compliance by April 2003. The Security Rule, which establishes the policies and best practices for securing health information, came into force in 2003. Thus, the Privacy and Security Rules (referred to below as HIPAA) came to life in a period of technological transition. New technologies like residential broadband Internet access and Wi-Fi networks were becoming the norm. Electronic Health Record (EHR) systems had been developed, but had only marginal penetration within certain academic medical centers and government entities. Consequently, the threats to patient privacy from early EHRs were much smaller than they are today, since these systems were not widespread and did not often share data over disparate regions. Thus, access to the systems was not necessarily available outside of the intranets where the servers were located.
Acronyms of HIPAA & HITECH
Protected Health Information
Any oral or recorded information relating to any past, present, or future physical or mental health of an individual, provision of healthcare to the individual, or the payment for the healthcare of that individual.
Covered Entity
A group of entities whose use, disclosure, and protection of PHI is regulated by HIPAA and HITECH. CEs comprise health plans, health care clearinghouses, and health care providers who transmit health information electronically.
Business Associate
Individuals or organizations performing an activity involving the use or disclosure of PHI on behalf of the CE. BAs can include attorneys, accountants, shredding companies, billing companies, or any other person or organization that is not a CE but which is accessing a CE’s PHI.
Electronic Health Record
An electronic record of patient care comprised of information about the delivery of care, including demographic information, medications, diagnoses, etc.
Personal Health Record
An electronic record of patient care comprised of much of the same information that an EHR is comprised of, but which is created and maintained by the individual (usually a patient) as opposed to a provider. Prominent examples are Google Health and Microsoft HealthVault.
Given the historical context of HIPAA’s passage, it is easy to appreciate HIPAA’s missteps in not specifically focusing on EHRs or PHRs. Rather, HIPAA regulates protected health information at a broader level, focusing primarily on the “use and disclosure” of PHI by CEs, and the best practices and policies for securing the PHI itself. To be fair, the Security Rule does focus on PHI that is stored and transmitted electronically. However, even the most stringent best practices and policies are useless if the corresponding privacy regulations are inadequate.
But the times they are a-changin’–sort of.
Buried on page 112 of the American Recovery and Reinvestment Act (ARRA)–also known as the Stimulus Bill–is Title VIII of the bill, known as the Health Information Technology for Economic and Clinical Health Act, or more commonly, the HITECH Act. One (of the many) purposes of the HITECH Act is to fill in the gaps that have emerged since the Privacy and Security rules came into force. But like before, we are in a transition period. Whereas HIPAA’s passage coincided with a period of generalized transition towards digital information, HITECH has coincided with its own transition: the implementation of personal health records (PHRs). Unfortunately, the current HITECH Bill and regulations have serious flaws in how they protect patient information stored in PHRs. However, before discussing the problems, it is only fair to discuss the benefits to privacy and security that HITECH’s passage has provided.
Specifically, HITECH introduces breach notification requirements. HITECH’s provisions govern the procedures which CEs and BAs must follow if health information has been compromised. HITECH also empowers the FTC to promulgate regulations pertaining to the notification procedures of PHR vendors (as well as those who offer services to PHR vendors). The FTC’s proposed breach notification requirements can be found here. Thus, CEs, BAs, and PHR vendors are, for the first time, required by law to notify individuals if their unsecured PHI has been accessed by unauthorized individuals. Surprisingly, this was not required under HIPAA. CEs were obligated to notify individuals only insofar as the CEs were required by HIPAA to mitigate damages. But now, with the passage of HITECH, breach notification is no longer amorphous, but is spelled out in detail in HITECH’s regulations.
Additionally, HITECH requires BAs to abide by many of the same privacy and security requirements that CEs have had to abide by. Before HITECH, a BA, such as an attorney reviewing the PHI of a CE, was required to sign an agreement promising to protect the PHI being accessed, but was not itself regulated by HIPAA. Thus, a BA had only contractual liability to the CE if it violated the rules of the agreement. On the other hand, if a CE violated HIPAA, it was subject to specific penalties and fines by the government.
Under HITECH, BAs must now comply with much of the Privacy and Security Rule, and face many of the same penalties and fines if they violate HIPAA regulations. That is, BAs are now accountable to the government if they improperly use or disclose PHI, or fail to adequately secure PHI.
HITECH also offers other benefits, such as increased enforcement of violations, a strengthening of the requirement that only the minimum necessary information is disclosed to other CEs or BAs, a more thorough framework of accounting for uses and disclosures, as well as certain prohibitions on the sale of PHI.
The last benefit of HITECH–the prohibition on the sale of PHI–is a perfect springboard for discussing the potential pitfalls of HITECH. The benefits of HITECH may well be sufficient to shore up HIPAA’s gaps when it comes to regulating CEs and BAs. However, as HITECH’s regulatory language makes clear, there remains a gaping hole:
(d) Prohibition on Sale of Electronic Health Records or Protected Health Information-
(1) IN GENERAL- Except as provided in paragraph (2), a covered entity or business associate shall not directly or indirectly receive remuneration in exchange for any protected health information of an individual unless the covered entity obtained from the individual, in accordance with section 164.508 of title 45, Code of Federal Regulations, a valid authorization
The reference to “a covered entity or business associate” underscores that PHRs are not included in this provision. There are no corresponding provisions in the FTC’s proposed regulations, which concern breach notification. The upshot of this is that, as of the date of this posting, PHR services like Google Health and Microsoft HealthVault are not subject to this prohibition, nor is there a provision in HITECH mandating that PHRs comply with HIPAA’s Privacy and Security Rules. Therefore, PHR vendors can use, disclose–and possibly even sell–an individual’s health information outside of the HIPAA and HITECH regulations. This problem underscores a larger issue: PHRs are not regulated by HIPAA, and are regulated by HITECH only insofar as the FTC’s interim rule requires certain breach notification procedures.
President Obama has appointed Dr. David Blumenthal as the National Health Care Information Technology Coordinator. Dr. Blumenthal is a former Harvard Medical School Professor who, as reported by Kaiser.org, “has conducted a number of studies related to health care IT” and has “served as director of the Institute for Health Policy at the Massachusetts General Hospital/Partners HealthCare System and as a senior adviser to President Obama during his campaign.”
As National Health Care IT Coordinator, Dr. Blumenthal can be expected to play a large role in determining how the $19 billion appropriated for health IT in the recently enacted stimulus package will be spent.
Dana Blankenhorn over at ZDNet Healthcare has written a short and interesting post on Dr. Blumenthal. Among other things worth noting in the post, Blankenhorn writes that Blumenthal has been quoted as "saying IT grants should go to inner-city and rural hospitals, as well as small practices, while most health IT money should go to incentives for improving the quality of care."
As for the choice of Dr. Blumenthal, Blankenhorn writes
The good news is he’s a policy expert and not a vendor. The bad news is he’s a policy expert and not a technologist. He is a renowned health IT advocate who knows his way around bureaucracies but he is not a geek.
This means Blumenthal has not expressed a view on open source vs. proprietary software. He also hasn’t gotten his hands dirty in the health IT trenches.
Having said that, one might hope that Dr. Blumenthal is familiar with the work of Professors Sharona Hoffman & Andy Podgurski.
Steve Lohr of The New York Times has written an article, "How to Make Electronic Medical Records a Reality" (a follow-up to "Health Care That Puts a Computer on the Team," 12/26/08), that is well worth the few minutes it takes to read.
Professors Sharona Hoffman & Andy Podgurski have published an article in the Harvard Journal of Law & Technology that should be on Obama’s nightstand. “Finding a Cure: The Case for Regulation and Oversight of Electronic Health Records” will take more than a few minutes to read, but for those charged with the responsibility of making the prospect of Electronic Medical Records a reality, it should be required reading–because, as the authors point out, we simply cannot afford to get this wrong:
The benefits of EHR systems will outweigh their risks only if these systems are developed and maintained with rigorous adherence to the best software engineering and medical informatics practices and if the various EHR systems can easily share information with each other. Regulatory intervention is needed to ensure that these goals are achieved. Once EHR systems are fully implemented, they become essential to proper patient care, and their failure is likely to endanger patient welfare.
The Journal article is essentially a map, designed to point out hazardous terrain and harness the resources at hand to effectuate a comprehensive Electronic Health Record system– and, through interoperability and regulated standards, to prevent the creation of a costly high-tech Tower of Babel. As the authors remind us, in this territory, malfunction and miscommunication can be deadly–and the concerns of the market are not necessarily coextensive with the common weal.
For those of us who have an interest in the subject, and are convinced that it is essential to have a comprehensive guide (if not a blueprint) for “how to get this right” — take heart–it’s here, and I highly recommend you take the time to read it–and then pass it on and up until it reaches that nightstand, if it’s not already there.
How to Make Electronic Medical Records a Reality
The NY Times article depicts the present paucity of EMR use (17%) in terms of "market failure," and points out that U.S. government guidance and investment to grow ("jump-start") industry and technology is not novel. Lohr writes:
…computer technology and the industry really flowered in the United States. That happened in no small part because the federal government nurtured the market with heavy investment, mainly by the Defense Department, and by choosing standards, like the Cobol programming language.
Today, Washington is about to embark on another ambitious government-guided effort to jump-start a market — in electronic health records. The program provides a textbook look at the economic and engineering challenges of technology adoption.
Lohr correctly points to the chasm which exists in EMR usage between large practices and small, and the failure of the market to incentivize further usage by doctors in these smaller practices. Lohr states:
These larger groups have the scale to invest in information technology, and they are often insurers as well as providers, so they benefit directly from the cost savings. Yet these large groups are the exceptions in American health care. Three-fourths of the nation’s doctors practice in small offices, with 10 doctors or fewer. For most of them, an investment in digital health records looks like a cost for which they are not reimbursed.
It is that "market failure," Lohr says, that the Obama plan seeks to address. To that end, the legislation, which devotes $19 billion to this "jump-start," "calls for incentive payments of more than $40,000 spread over a few years for a physician who buys and uses electronic health records."
The legislation also requires that this payment to doctors be in exchange for “meaningful use,” but thus far the term has been left undefined.
We addressed both of these concerns on this blog in mid-January in response to a post on Health Affairs by Dr. David Brailer, Chairman of Health Evolution Partners, a health care investment fund.
Mark Heftler, a geriatric care manager who is slated to begin study at Seton Hall Law in the Fall, has written an interesting article on RFID (Radio Frequency Identification) and its potential use as a means of early diagnosis of dementia among the elderly. Researchers at the University of South Florida have developed and tested an RFID technology that assesses the walking patterns of those it monitors.
By monitoring the movements of the elderly within geriatric facilities, "the researchers hope to be able to diagnose the onset of Alzheimer's in their patients. Sudden veers, long pauses, and a tendency to wander are all indicators of dementia."
As MIT’s Technology Review notes, “Drugs that are currently available can only slow the progression of related diseases, so the earlier dementia is caught, the better a patient’s treatment will be.”
Technology Review also notes, “In particular, dementia increases the risk of injury caused by a fall… ‘That’s a huge problem for assisted-living facilities,’” said William Kearns, an assistant professor who researches aging and mental health at USF.
Not Just Grandma
Although one can readily see the positive cost/benefit and quality-of-life implications of warding off falls among the elderly, as Frank Pasquale recently noted on both this blog and Concurring Opinions, the proliferation of "personal" electronic data is not without its dangers.
The Technology Review article provides a link to another article which points out that RFID technology is also being harnessed to gather social-networking information through what is referred to as "reality mining":
“…a field that Tanzeem Choudhury pioneered as a PhD student at the MIT Media Lab. Working at Intel after graduation, she created a pager-size sensor pack–loaded with software plus microphones, accelerometers, and other data-gathering devices–to collect and analyze data about human interactions and activity. For instance, by processing verbal utterances, she can identify the most influential people in a social network.
Now an assistant professor of computer science at Dartmouth, Choudhury is conducting experiments with the sensor-laden iPhone. Within a few years, she says, simple versions of her software could be available for cell phones.”
America needs electronic medical records (EMR). There are plenty of reasons why we are so far behind other nations in consolidating medical data: lack of strong central leadership on the issue, unwarranted faith in markets to produce solutions, and overwhelmed medical professionals who have little if any slack time to put a new system into place. Even as President Obama pushes for investment in EMR, privacy concerns are also slowing down progress:
Lawmakers, caught in a crossfire of lobbying by the health care industry and consumer groups, have been unable to agree on privacy safeguards that would allow patients to control the use of their medical records. . . . The data in medical records has great potential commercial value. Several companies, for example, buy and sell huge amounts of data on the prescribing habits of doctors, and the information has proved invaluable to pharmaceutical sales representatives.
“Health I.T. without privacy is an excellent way for companies to establish a gold mine of information that can be used to increase profits, promote expensive drugs, cherry-pick patients who are cheaper to insure and market directly to consumers,” said Dr. Deborah C. Peel, coordinator of the Coalition for Patient Privacy, which includes the American Civil Liberties Union among its members.
Health IT turns out to be one of many areas where a drive for prononymity–that is, the de-anonymizing of records of on- and off-line life–is running up against a wall of wary citizens and consumers. In the health field, I think that resistance is only going to end if we have a robust "backstop" of health care in place so that citizens don't have to worry about losing all coverage if a digital dossier presents them as a bad risk. (Medicaid as presently constituted does not count.) Far from overwhelming the health care system with pent-up demand, universal health coverage may be a prerequisite for generating support for the type of EMR that will provide us all with far better care.
A trend to prononymity in general should be matched with a greater commitment to assuring that it won't result in particularly harsh outcomes. For example, people should not be denied a job for being identifiable as a Democrat in a blog post, whatever Monica Goodling thinks. Nor should a doctor's notes about a patient's dark thoughts come back to haunt the patient when he or she applies for medical insurance. And if they do, there should be a genuine insurer of last resort available–not the patchwork of Medicaid and charity care that presently leaves so many uninsured people falling through the cracks.
That’s one reason why I advocate the development of a Fair Reputation Reporting Act, which would allow individuals to know the documentary basis of certain key adverse decisions. I summarize the proposal here:
Reputation regulation has become essential because traditional restrictions on data flows inadequately constrain decisionmakers and important intermediaries (including search engines and bulletin boards). . . . Persistent and searchable databases now feed unprecedented amounts of poorly vetted information into vital decisions about employment, credit, and insurance. Rumors about a person’s sexual orientation (or experiences), health status, incompetence, or nastiness can percolate in blogs and message boards.
Even if the First Amendment and anonymity protect the authors of such rumors, affected individuals deserve to know whether certain important decisionmakers rely on them. In limited cases, the intermediary source of the information should also provide the target of a derogatory posting with the opportunity to annotate it. A Fair Reputation Reporting Act would empower individuals to know the basis of adverse employment, credit, and insurance decisions-and to go to their source (and the source of their salience) to demand some relief from digital scarlet letters.
In summary, privacy concerns are only likely to die down if individuals know either 1) that the consequences of a privacy breach are not likely to be severe or 2) that they can find out instances of the improper use of data. In the health care context in the US, neither qualifier holds: the individual insurance market routinely denies care to individuals on the basis of pre-existing conditions, and individuals have little sense of exactly how such determinations are made. Prononymity needs to work both ways: if our health conditions are to be the subject of increasing availability, so too must the decision-making processes that could use that data to our detriment become more transparent.
PS: Market mavens may promote a “Google Health Search” as the optimal solution here. If this 800 pound gorilla can get all the publishers in line to settle their copyright claims, perhaps it has some chance at bringing the medical industry to heel; however, the political power of doctors and insurers dwarfs that of publishers. The concentration of that much data in one company should also provoke some worries.
Dr. David J. Brailer, appointed by President Bush in 2004 as the first National Coordinator for Health Information Technology, has written an article for Health Affairs worth reading. Dr. Brailer notes that President-elect Obama “has pledged $50 billion to bring health information tools into widespread use (which is $49,950,000 more than President Bush gave me to spend).” (Note: as the present budget for the office of National Coordinator is a little more than $66 million, I believe Dr. Brailer meant to say that the budget during his tenure was roughly $50 million, which would make Obama’s $50 billion $49,950,000,000 more. Apparently, I’m not the only one confused by billions).
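Since millions and billions are easy to confuse, the correction can be sanity-checked with a few lines of arithmetic (figures as stated in the post):

```python
# Sanity-checking the budget figures discussed above (values as stated in the post).
brailer_budget = 50_000_000       # roughly $50 million: ONC budget during Dr. Brailer's tenure
obama_pledge = 50_000_000_000     # $50 billion pledged by President-elect Obama

difference = obama_pledge - brailer_budget
print(f"${difference:,}")         # $49,950,000,000 -- billions, not millions, more
```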
Having said that, Dr. Brailer has some suggestions worth noting, not the least of which is that ensuring structural compatibility and integration of data systems is a paramount necessity that will require more than just "hiring the geek squad." He states:
Setting up an electronic health record is a complex task, requiring data integration, clinical algorithms and complex software customization. Likewise, helping physicians and other health care workers learn to work with electronic tools is more than point-and-click training. Electronic health records change the very nature of health care work – clinical decision-making, communications, documentation and learning. Our national transition to digital medicine requires a large supply of specialists – upwards of 50,000 people, including physicians, nurses and pharmacists – who understand both clinical medicine and information technology. It takes years to train these people, and they are already in short supply, so now is the time to start.
I take no issue with the assertion that "setting up an electronic health record is a complex task," and surely, at the end of a $50 billion investment, no one wants to look up to see a Med e-record Tower of Babel. But Dr. Brailer's assertion that "helping physicians and other health care workers learn to work with electronic tools is more than point-and-click training" is somewhat at odds with recent articles in The NY Times, one of which shows what an electronic medical record looks like and explains how pertinent and potentially life-saving information "is just a few clicks away."
Dr. Brailer also states that we need to address what he characterizes as
…the growing chasm between the physicians and hospitals that have electronic records and those that do not. Most large and urban hospitals as well as larger physician practices are far along in using electronic health records. Rural hospitals, nursing homes and small physician practices lag far behind. They face many barriers, but foremost among them is the lack of capital to purchase and implement information tools.
Dr. Brailer states that “Sales pipelines and hospital and physician budgets show that electronic health record purchases have slowed, indicating that the market wave has gone as far as it can. Now is the time for government incentives to help along those who do not have these systems.”
But Brailer wants to incentivize the "use" of electronic medical records much in the way that Congress has done with "electronic prescribing." He states: "Medicare pays physicians a 2% bonus for using eprescribing on appropriate patients starting in 2009, and this incentive converts to a 3% penalty for those who do not eprescribe in 2013."
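To put that carrot-and-stick in concrete terms, here is a rough illustration; the $100,000 figure is a hypothetical amount of annual Medicare allowed charges chosen for illustration, not a number from the post:

```python
# Illustrative arithmetic for the eprescribing incentive described above.
# The charges figure is a hypothetical annual Medicare allowed-charges amount.
charges = 100_000

bonus_2009 = charges * 0.02    # 2% bonus for physicians who eprescribe, starting 2009
penalty_2013 = charges * 0.03  # 3% penalty for those who do not eprescribe in 2013

swing = bonus_2009 + penalty_2013  # total difference between complying and not
print(bonus_2009, penalty_2013, swing)
```

On these assumptions, the gap between eprescribing and not eprescribing is a $5,000 annual swing for such a practice.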
Of course, Brailer is right to make the distinction between “purchase” and “use.” No one wants to subsidize a high tech, dust gathering coat rack. He makes the point that “We should not incent physicians and hospitals simply to purchase electronic records. We get no benefit when a physician or hospital buys an electronic record. What we should do is reward the use of these tools as part of a patient’s care.”
What he fails to address, however, in this incremental ROI “pay for use” approach is what he characterizes as the “foremost barrier” to those “Rural hospitals, nursing homes and small physician practices” on the other side of e-med record chasm: initial capital outlay.
Considering the financial difficulties of many hospitals–and the chilled credit markets–it is somewhat difficult to envision how the gradual return on investment through "pay for use" will have great effect for those medical service providers who, at present, have a "lack of capital to purchase and implement information tools." It is not, however, hard to envision how such a continuous "pay for use" incentive would benefit those larger providers who have already implemented electronic medical record systems.
Additional payments each time they use what they have already invested in would, no doubt, provide a dividend that these typically larger providers would greatly appreciate. It is not at all clear, however, that such a program, which requires significant outlays of capital–capital that may well not be available at this time–will lessen the "chasm" by any great measure.
The New York Times has reported that
For most doctors, who work in small practices, an investment in electronic health records looks simply like a cost for which they will not be reimbursed. That is why policy experts say any government financial incentives to use electronic records – matching grants or other subsidies – should be focused on practices with 10 or fewer doctors, which still account for three-fourths of all doctors in this country. Only about 17 percent of the nation’s physicians are using computerized patient records, according to a government-sponsored survey published in The New England Journal Of Medicine.
The Times also reports that those who are presently using electronic medical records tend to be part of larger health care organizations.
“Health Evolution Partners invests in the world’s leading health care companies. We seek out companies that are driving critical shifts in how health care is financed, organized and delivered.”
- Build strategies with unusually high potential
- Navigate and mitigate business, policy and regulatory risks
- Develop and shape the market for their products and services
- Enhance the growth and returns for their shareholders