Filed under: Health Insurance, Information Technology, Patient Protection and Affordable Care Act, Quality Improvement
Cross-Posted at Bill of Health
On Wednesday night, I went to a panel presentation sponsored by the group NYC Health Business Leaders on the rollout of New York State’s health insurance exchange. Among the speakers was Mario Schlosser, the co-founder and co-CEO of the venture-capital-backed start-up health insurance company Oscar Health, which offers a full range of plans through New York’s exchange. As NPR reported last month in a story about Oscar, “it’s been years since a new, for-profit health insurance company launched in the U.S.”, but the Affordable Care Act created a window of opportunity for new entrants.
Schlosser began his talk by giving us a tour of his personal account on Oscar’s website, www.hioscar.com. Among other things, he showed us the Facebook-like timeline, updated in real time, which tracks his two young children’s many visits to the pediatrician. He typed “my tummy hurts” into the site’s search engine and the site provided information on what might be wrong and on where he might turn for help, ranging from a pharmacist to a gastroenterologist, with cost estimates for each option. Additional searches yielded information on covered podiatrists accepting new patients with offices near his apartment and on the out-of-pocket cost of a prescription for diazepam (which was zero, since there is no co-payment for generic drugs for Oscar enrollees).
As an audience member noted, none of this is exactly new. What is new is to see this kind of data-driven, state-of-the-art user experience offered by a health insurer. Schlosser told the audience that Oscar’s pharmacy benefit manager and other vendors are providing the company with real-time data that other insurers have not demanded. And, according to Schlosser, Oscar’s customers are responding. Nearly all of them have used the company’s website. A surprising five percent of them use the company’s website every day.
In addition to an improved user experience that incorporates increased price transparency, Oscar heavily emphasizes telemedicine, with the goal of giving every customer the feeling of having “a doctor in the family.” As Forbes reported last year, Oscar has “a unique partnership with the telemedicine company TeleDoc” which will allow its customers to speak to a doctor at any time of the day or night without incurring any out-of-pocket expense. With regard to the physicians in Oscar’s network, Schlosser explained that the company does not intend to incentivize or require physician compliance with quality measures, as some insurers do. Oscar will instead use its core function—reimbursement—to encourage, for example, email and phone communications between doctors and their patients.
Oscar’s service area is currently limited to New York’s nine downstate counties, but the company hopes to expand. It will be very interesting to see if it succeeds. Schlosser emphasized that Oscar has its genesis in the frustration that he and his co-founders felt dealing with their previous employer-provided health insurance plans. He joked about Explanation of Benefits forms that do anything but explain what your benefits are. The idea of putting these frustrations in the rear-view mirror is a very attractive one.
In the end, Oscar’s success may hinge on its ability to earn and sustain its customers’ trust. Will customers trust the advice given to them by an insurance company-paid doctor whom they have never met? Will they trust that the physicians Oscar’s website directs them to are chosen with customers’ interests in mind? Will they trust Oscar to use the additional data it collects in ways that serve, or at least do not harm, their interests? An audience member’s joking reference to the National Security Agency (Schlosser responded with a smile that one of Oscar’s employees had in fact once worked for the NSA) suggests that I was not the only one thinking about questions of data and trust.
Filed under: Health Law, Information Technology
In collaboration with the Bergen County Prosecutor’s Office; 6 NJ/NY CLE credits.
Helen Oscislawski, Privacy Risk Assessments and Privacy Challenges
Helen Oscislawski is the founder of Oscislawski, LLC in Princeton. She provides legal guidance on HIPAA, HITECH, state privacy laws, electronic health information exchanges and health information technology to HIEs, RHIOs and ACOs, and counsels other healthcare clients in various matters.
Ms. Oscislawski was appointed by Governor Jon Corzine in 2008 to the New Jersey Health Information Technology Commission (NJHITC) and was reappointed to the NJHITC by Governor Chris Christie in 2010; she also served as Chair of its Privacy and Security Committee for the NJ HIT Coordinator. She is the primary author of the Update to Privacy and Security Compliance Manual, developed for the New Jersey Hospital Association, and, most recently, she has developed and authored several editions of the HIPAA-HITECH Helpbook, a manual that combines tools and sample forms addressing HITECH changes, state law and other considerations, Meaningful Use, and Health Information Exchanges.
Before founding Oscislawski, LLC, Ms. Oscislawski was a healthcare attorney at Fox Rothschild in Princeton, New Jersey, where she counseled healthcare clients on a wide range of legal matters. She received her BA from Rutgers University, Douglass College and her JD from Rutgers School of Law.
Frank Pasquale, Professor of Law, Seton Hall Law School, The Past, Present and Future of Health Privacy
Professor Frank Pasquale is the Schering-Plough Professor in Health Care Regulation and Enforcement at Seton Hall Law School. Professor Pasquale has taught information and health law at Seton Hall since 2004. He has published over 20 scholarly articles. His research agenda focuses on challenges posed to information law by rapidly changing technology, particularly in the health care, internet, and finance industries.
Professor Pasquale is an Affiliate Fellow of Yale Law School’s Information Society Project. He has been named to the Advisory Board of the Electronic Privacy Information Center. He has served on the executive board of the Health Law Section of the American Association of Law Schools (AALS), and has served as chair of the AALS Section on Privacy and Defamation.
Professor Pasquale received his BA from Harvard University (summa cum laude), his M.Phil. from Oxford University, and his JD from Yale Law School.
Jaime S. Pego, Director, Healthcare Advisory Services, KPMG LLP, (along with Joy Pritts, Mark Swearingen, and Frank Pasquale, Moderator) Panel Discussion: The Practical Steps Necessary to Promote Privacy and Cybersecurity in Modern Healthcare Organizations
Jaime S. Pego is a Director in the Short Hills, New Jersey, office of KPMG LLP’s Healthcare Advisory Services Practice and serves as the firm’s National HIPAA Privacy Director. She has substantial experience in healthcare regulatory compliance and healthcare-related advisory services.
Ms. Pego works with a variety of healthcare clients to assist with identifying and preventing compliance risks and complying with federal and state regulations. Her work for KPMG includes serving as lead director for OCR HIPAA audits, as well as acting as Privacy Lead for the KPMG HIPAA national service line assisting covered entities and business associates with HIPAA compliance. She has conducted internal investigations concerning a variety of topics, including fraud and abuse, HIPAA violations, and other legal and regulatory matters, and has researched and developed compliance policies for institutions in the areas of gifting under the Anti-Kickback Statute and Stark Law, the DRA, HIPAA, EMTALA and others. She participates in the KPMG National HIPAA working group to develop tools and methodologies for client needs, and conducts and manages ICD-10 Impact Assessments at a variety of healthcare organizations to help identify gaps in ICD-10 readiness. She has also served as the firm’s lead manager for health care reform legislative analysis and research.
Prior to coming to KPMG, Ms. Pego was a Local Compliance Officer at a teaching hospital and outpatient center for one of New Jersey’s largest health care systems and has worked with some of the country’s leading health systems. She received her BA from American University and her JD from Seton Hall University School of Law, with a Concentration in Health Law, and is Certified in Healthcare Compliance (CHC) by the Health Care Compliance Association (HCCA).
Joy Pritts, Chief Privacy Officer, ONC, HHS, Meaningful Use Regulations: What Providers Need To Know To Comply
Joy Pritts joined the Office of the National Coordinator for Health Information Technology (ONC), Department of Health & Human Services in February 2010 as its first Chief Privacy Officer. Ms. Pritts provides critical advice to the Secretary and the National Coordinator in developing and implementing ONC’s privacy and security programs under HITECH. She works closely with the Office for Civil Rights and other operating divisions of HHS, as well as with other government agencies to help ensure a coordinated approach to key privacy and security issues.
Prior to joining ONC, Ms. Pritts held a joint appointment as a Senior Scholar with the O’Neill Institute for National and Global Health Law and as a Research Associate Professor with the Health Policy Institute, Georgetown University. She has an extensive background in confidentiality laws including the HIPAA Privacy Rule, federal alcohol and substance abuse treatment confidentiality laws, the Common Rule governing federally funded research, and state health information privacy laws.
Ms. Pritts received her BA from Oberlin College and her JD from Case Western Reserve University.
Anna Spencer, Esq., Sidley Austin, LLP, Data Breaches/Data Breach Notification Requirements and the Need for Encryption
Anna Spencer is a partner in Sidley Austin’s Washington, D.C. office whose practice focuses on health care. Ms. Spencer primarily works on matters involving the privacy and security of health information, and she is the firm’s global coordinator for health information privacy. She regularly counsels a broad range of clients on healthcare information privacy and security issues, including assisting clients with HIPAA and HITECH compliance, and has significant experience in investigating and responding to data breaches and information security incidents. She has represented clients in connection with data breach reporting obligations under the HITECH regulations for breaches of protected health information and defended health care providers in investigations initiated by the Office for Civil Rights, Department of Health and Human Services.
On behalf of covered entities and entities that qualify as HIPAA business associates, Ms. Spencer has developed multiple HIPAA privacy and security compliance and training programs. She has negotiated hundreds of Business Associate Agreements on behalf of various clients.
Ms. Spencer has spoken on privacy/security matters on behalf of numerous groups such as BNA and the American Conference Institute. She has authored a variety of articles on privacy/security issues, Medicare coverage, and fraud and abuse. She is currently authoring a book for BNA on health information privacy. Ms. Spencer received her BA from Sewanee and her JD from Vanderbilt University School of Law.
Mark Swearingen, Esq., Hall, Render, Killian, Heath & Lyman, PC, HIPAA and HITECH Trends (Enforcement and Otherwise)
Mark Swearingen coordinates the HIPAA practice and provides counsel on health information privacy and security matters such as breach response and notification and the creation, use, disclosure, retention and destruction of medical records and other health information at the Indianapolis law firm, Hall, Render, Killian, Heath & Lyman, P.C. His counsel to clients also includes a variety of health care topics related to regulatory compliance, physician and clinical services contracting, risk management and Independent Review Organization services. He has provided such services to a broad spectrum of health system, hospital, physician practice, diagnostic imaging center, ambulatory surgical center and long-term care facility clients.
Mr. Swearingen has spoken and written nationally and regionally on numerous topics, including antitrust, electronic medical records and health information privacy and confidentiality. He is an adjunct professor of a course in Law and Medicine at the Indiana University School of Informatics at IUPUI.
Mr. Swearingen received his BA from Indiana University and his JD from Seton Hall Law School.
Health information law is a very exciting field. Lawyers, doctors, and start-ups are re-thinking health care as an information industry. I’ll be speaking on privacy and fair data practices at an upcoming conference. The relationships between privacy, “big data,” and trade secrecy will bear a great deal of attention in coming years.
Software-based automation has raised living standards dramatically. It makes factories more efficient, renders vast amounts of information accessible, and daily improves quality of life in barely noticed ways. To realize these types of advances in health care, government and NGOs have begun to catalyze better data collection, retention, and analysis. Life sciences companies need to report more data on drugs and devices. Hospitals and doctors are incentivized to use electronic health records via stimulus funding and rulemaking based on the HITECH Act’s meaningful use and certification requirements.
How will traditional intellectual property laws interact with these initiatives? Will the increasing need for cooperation and sharing of information alter the landscape of trade secrecy and other IP protections that have often siloed health data? Will providers find alternative funding sources for the collection, retention, and analysis of data, as some traditional IP protections appear increasingly outdated in a world of “big data” and market-driven transparency?
Medical privacy law has focused on assuring the privacy, security, and accuracy of medical data. The post-ACA landscape will include more concern about balancing privacy, innovation, access, and cost-control. Advanced information technology has raised a number of new questions. Beyond HIPAA and HITECH regulation, consumer protection law plays an important role in these fields. (For example, the FTC recently required firms that “score” the health status of individuals based on their pharmacy records to disclose these records to scored individuals.)
Patients are opting to personalize their health records with the help of cloud computing firms; what law governs this digital migration? There is increasing concern about the role of “incidental findings” in medical research and practice; how will regulators and professional groups address them? When employers demand access to employee health records, in what ways can they use them to profile the employee?
We also need to examine the legal aspects of data portability, integrity, and accuracy. When two health records conflict, which takes priority? What is “meaningful use” of an electronic health records system, and how will regulators and vendors assure interoperability between systems? The course will also cover innovators’ efforts to protect their health data systems using contracts, technology, trade secrecy, patents, and copyright, and “improvers’” efforts to circumvent those legal and technological barriers to openness.
Finally, what are pharmaceutical companies’ past and present strategies regarding the disclosure of their research, including non-publication of adverse results and ghostwriting of positive outcomes? Will a “reproducible research” movement, popular in the hard sciences, reach pharmaceutical firms? Insurer data will also be a target of reformers, raising issues that include trade-secret protection of prices paid to hospitals, conflicts over the interpretation of disclosure requirements in the ACA, and state regulation of insurer-run doctor-rating sites. Quality improvement and pilot programs will need good provider and insurer data; how will we ensure they have them?
Filed under: Electronic Medical Records, Information Technology, Medical Malpractice
If one jumbo jet crashed in the US each day for a week, we’d expect the FAA to shut down the industry until the problem was figured out. But in our health care system, roughly 250 people die each day due to preventable error. A vice president at a health care quality company says that “If we could focus our efforts on just four key areas — failure to rescue, bed sores, postoperative sepsis, and postoperative pulmonary embolism — and reduce these incidents by just 20 percent, we could save 39,000 people from dying every year.” The aviation analogy has caught on in health care, as patient safety advocate Lucian Leape noted in his classic 1994 JAMA article, Error in Medicine. Leape notes that airlines have become far safer by adopting redundant system designs, standardized procedures, checklists, rigid and frequently reinforced certification and testing of pilots, and extensive reporting systems. Advocates like Leape and Peter Pronovost have been pressing for adoption of similar methods in health care for some time, and have scored some remarkable successes.
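A quick back-of-the-envelope sketch of those numbers may help (purely illustrative; the daily-toll figure and the four-area figure come from different harm estimates, so they need not agree):

```python
# Back-of-the-envelope arithmetic behind the jumbo-jet analogy.
DEATHS_PER_DAY = 250      # cited daily toll from preventable medical error
DAYS_PER_YEAR = 365

annual_deaths = DEATHS_PER_DAY * DAYS_PER_YEAR
print(f"At 250 deaths/day, annual toll: {annual_deaths:,}")  # 91,250

# The quoted claim: cutting incidents in four key areas by 20 percent would
# save 39,000 lives a year, implying those areas account for roughly:
implied_four_area_deaths = round(39_000 / 0.20)
print(f"Implied annual deaths in the four areas: {implied_four_area_deaths:,}")  # 195,000
```

That the implied four-area figure exceeds the annualized 250-a-day toll simply reflects that the two statistics rest on different underlying estimates of preventable harm.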
But the aviation model has its critics. The very thoughtful finance blogger Ashwin Parameswaran argues that, “by protecting system performance against single faults, redundancies allow the latent buildup of multiple faults.” While human expertise depends on an intuitive grasp, or mapping, of a situation, perhaps built up over decades of experience, technologized control systems privilege algorithms that are supposed to aggregate the best that has been thought and calculated. The technology is supposed to be the distilled essence of the insights of thousands, fixed in software. But the persons operating in the midst of it are denied the feedback that is a cornerstone of intuitive learning. Parameswaran offers several passages from James Reason’s book Human Error to document the resulting tension between our ability to accurately model systems and an intuitive understanding of them. Reason states:
[C]omplex, tightly-coupled and highly defended systems have become increasingly opaque to the people who manage, maintain and operate them. This opacity has two aspects: not knowing what is happening and not understanding what the system can do. As we have seen, automation has wrought a fundamental change in the roles people play within certain high-risk technologies. Instead of having ‘hands on’ contact with the process, people have been promoted “to higher-level supervisory tasks and to long-term maintenance and planning tasks.” In all cases, these are far removed from the immediate processing. What direct information they have is filtered through the computer-based interface. And, as many accidents have demonstrated, they often cannot find what they need to know while, at the same time, being deluged with information they do not want nor know how to interpret.
A stark choice emerges. We can either double down on redundant, tech-driven systems, or we can try to restore smaller-scale scenarios where human judgment actually stands a chance of comprehending the situation. We will need to recognize this regulatory apparatus as a “process of integrating human intelligence with artificial intelligence.” (For more on that front, the recent “We, Robot” conference at U. Miami is also of great interest.)
Another recent story emphasized the importance of filters in an era of information overload, and the need to develop better ways of processing complex information. Kerry Grens’s article “Data Diving” emphasizes that “what lies untapped beneath the surface of published clinical trial analyses could rock the world of independent review.”
[F]or the most part, [analysts] rely simply on publications in peer-reviewed journals. Such reviews are valuable to clinicians and health agencies for recommending treatment. But as several recent studies illustrate, they can be grossly limited and misleading. . . . [There is] an entire world of data that never sees the light of publication. “I have an evidence crisis,” [says Tom Jefferson of the Cochrane Collaboration]. “I’m not sure what to make of what I see in journals.” He offers an example: one publication of a Tamiflu trial was seven pages long. The corresponding clinical study report was 8,545 pages. . . .
Clinical study reports . . . are the most comprehensive descriptions of trials’ methodology and results . . . . They include details that might not make it into a published paper, such as the composition of the placebo used, the original protocol and any deviations from it, and descriptions of all the measures that were collected. But even clinical study reports include some level of synthesis. At the finest level of resolution are the raw, unabridged, patient-level data. Getting access to either set of results, outside of being trial sponsors or drug regulators, is a rarity. Robert Gibbons, the director of the Center for Health Statistics at the University of Chicago, had never seen a reanalysis of raw data by an independent team until a few years ago, when he himself was staring at the full results from Eli Lilly’s clinical trials of the blockbuster antidepressant Prozac.
There will be a growing imperative to open up all of the data as concerns about the reliability of publications continue to grow.
[U]nlike in other countries, sellers of health-care services in America have considerable power to set prices, and so they set them quite high. Two of the five most profitable industries in the United States — the pharmaceuticals industry and the medical device industry — sell health care. With margins of almost 20 percent, they beat out even the financial sector for sheer profitability. The players sitting across the table from them — the health insurers — are not so profitable. In 2009, their profit margins were a mere 2.2 percent. That’s a signal that the sellers have the upper hand over the buyers.
I don’t agree that insurers are being bullied as buyers. If we’re going to bring up the financial sector, a better analogy would compare pay differentials between revenue-generating traders (providers) and the back office clerical and IT workers (insurers), rather than assume some common baseline of industrial profitability. The health care providers actually (try to) improve health; the insurers (are supposed to) support that primary effort. But overall, the story Klein tells here is broadly consistent with many other explanations of high prices in US health care.
What will solve that problem? Probably not health care reform, though regulators will struggle mightily to impose some discipline via IPAB and other entities. Followers of Clayton Christensen think pure technological innovation may wildly succeed where an oft-captured regulatory system is failing. Farhad Manjoo provides some empirical support for their hopes:
As computers get better, we’ll need fewer humans across a range of specialties. Look at mammography: One of the main ways radiologists can improve their breast diagnoses is by “double reading.” When two radiologists independently examine a collection of mammograms, the number of cancers detected increases substantially. A study published in 2008, however, found that a radiologist who uses ImageChecker can skip the second reading: A computer and a human are just as good as two humans.
[T]he doctors who are the juiciest targets for automation might not be the ones you’d expect. They’re specialists . . ., the most highly trained, highly paid people in medicine. It’s precisely these factors that make them vulnerable to machines. By definition, specialists focus on narrow slices of medicine. They spend their days worrying over a single region of the body, and the most specialized doctors will dedicate themselves to just one or two types of procedures. Robots, too, are great specialists. They excel at doing one thing repeatedly, and when they focus, they can achieve near perfection. At some point—and probably faster than we expect—they won’t need any human supervision at all.
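The gain Manjoo describes from double reading can be sketched with a toy independence model (the sensitivity numbers below are hypothetical, not figures from the 2008 study):

```python
# Toy model of "double reading": a cancer is caught if either of two
# independent readers detects it, so the misses must coincide for the
# case to be lost. Combined sensitivity is 1 - (1 - s1)(1 - s2).
def combined_sensitivity(s1: float, s2: float) -> float:
    """Probability that at least one of two independent readers detects the cancer."""
    return 1 - (1 - s1) * (1 - s2)

single_reader = 0.85  # hypothetical sensitivity of one radiologist
double_read = combined_sensitivity(single_reader, single_reader)

print(f"single reader:  {single_reader:.2%}")  # 85.00%
print(f"double reading: {double_read:.2%}")    # 97.75%
```

In practice two human readers’ errors are correlated, so the real gain is smaller than independence suggests; the point of the study Manjoo cites is that software like ImageChecker can serve as the second, sufficiently independent look.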
Robots and automation are already taking on prominent roles in wars, factories, and political campaigns. The type of pattern recognition common to some medical specialties may be natural to them, particularly as electronic medical records and digitization take hold. Of course, an all-purpose “physician robot” would be a much harder endeavor. In the context of a discussion of rationing, one health law textbook suggests that a mapping of possible interventions “would require rigorous scientific information on each of the almost 10,000 diagnostic entries in the International Classification of Diseases (9th ed.) (known as ‘‘ICD-9’’) and for each of the 10,000 medical interventions listed in the AMA’s Common Procedural Terminology (known as ‘‘CPT’’ codes).” ICD-10 has about 7 times more codes than ICD-9. But just as chess was once considered a field impenetrable to artificial intelligence, and has now been mastered by computers, so too might medicine itself become subject to the exponential growth in information processing characteristic of mature digitized industries. It’s becoming clear that “the variety of jobs that computers can do is multiplying as programmers teach them to deal with tone and linguistic ambiguity.”
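The textbook’s scale point can be made concrete with a quick calculation (using the approximate code counts quoted above; the ICD-10 multiplier is the rough factor of seven mentioned in the text):

```python
# Scale of a full diagnosis-by-intervention evidence grid,
# using the approximate code counts quoted in the text.
ICD9_DIAGNOSES = 10_000  # ~10,000 ICD-9 diagnostic entries
CPT_PROCEDURES = 10_000  # ~10,000 CPT-listed interventions
ICD10_FACTOR = 7         # ICD-10 has roughly 7x as many codes as ICD-9

# Rigorous evidence for every (diagnosis, procedure) pair would mean:
icd9_pairs = ICD9_DIAGNOSES * CPT_PROCEDURES
icd10_pairs = ICD9_DIAGNOSES * ICD10_FACTOR * CPT_PROCEDURES

print(f"ICD-9 grid:  {icd9_pairs:,} diagnosis-procedure pairs")   # 100,000,000
print(f"ICD-10 grid: {icd10_pairs:,} diagnosis-procedure pairs")  # 700,000,000
```

A hundred million cells is hopeless as a hand-built table, which is exactly why the mechanized information processing discussed here looks attractive.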
So will technology save us from ever-increasing health care costs? I’m not optimistic, because politics and economics are a constraint on all these developments. The same patterns of patronage and tribute that make comparative effectiveness research such a hard sell in the US may well restrain technology adoption. Just as specialists dominate the RUC, they can probably find ways to slow the adoption of technological substitutes for their hard-won expertise. As Umair Haque has observed, “In a neofeudal polity, patronage replaces meritocracy. ‘Success’ for an organization, coalition, or person is to become a client of a powerful patron, pledging your services (soft and hard, informal and formal), in perpetual alignment with the patron’s interests.” We’ll see many physicians in coming years invest time and effort in technological innovation, and others devoted to deterring its spread in order to protect current income streams.
At this point, you’re probably expecting me to side decisively with the technologists as heroes. But I can’t do so. I don’t buy an economic model premised on incentivizing innovation by setting off a race among radiologists (or, more realistically, financiers) to be the first to patent the machine that can replace all the other radiologists. Rather, I think the real foundation for radically productive innovation in this and other fields is a baseline of social support and commitment to retraining for professionals who could be displaced by the technology. I’m not saying, “pay radiologists what they make now, forever.” Rather, I’m trying to articulate a variant of a “guaranteed basic income” argument for those who invest heavily in learning about science, technology, and medicine. This baseline of educated users, improvers, and evangelizers of technology is the foundation of any venturesome economy. As Amar Bhide has explained,
[T]he different forms of innovation interact in complicated ways, and it is these interconnected, multilevel advances that create economic value. . . . To state the proposition in the terminology of cyberspace, innovations that sustain modern prosperity have a variety of forms and are developed and used through a massively multiplayer, multilevel, and multiperiod game.
We may well find that in decades to come, machines can do the jobs of radiologists and pathologists much better than people can. But if that transition occurs, it’s important to recognize how much current specialists invested to attain their skills, how hard they presently work to maintain a high level of medical skill in this country, and how future innovations may well dry up if people feel that those on STEM career paths are utterly vulnerable to being “kicked to the curb” once a machine does their job slightly better. Not only is “sole inventorship” a myth; we often fail to appreciate the complex educational and service apparatus necessary for innovation to take place. As Alperovitz and Daly have shown, any system that grants 93% of its gains to 1% of the people is an ongoing instruction in the economic futility of the efforts of the vast majority of its citizens.