Filed under: Conflicts of Interest, Drugs & Medical Devices, Pharma
When we think about health care reform, we need to remember that we have been attempting reforms on many fronts, across myriad parts of our health system. The IRS revised Form 990 and Schedule H in anticipation of the ACA; critics of conflicts of interest have been working on multiple fronts simultaneously. One challenge common to all of these changes is how to measure whether they have made any positive difference.
The Accreditation Council for Continuing Medical Education (ACCME), which accredits Continuing Medical Education (CME), just released its annual report. It was much less interesting than I had hoped, as it is mostly a financial statement for the year’s CME activities. Nonetheless, it shows that industry grants to CME have declined from 50 to 30 percent of total CME income, a drop attributed to the tremendous scrutiny CME has received over the years. Importantly, industry has not lost interest in medical conferences, as ad revenue from exhibits rose 7.2 percent to $296 million. You can read a summary of the report on Pharmalot or read the whole report.
While interesting, it’s hard to know exactly what to conclude from this news about CME funding. First, the decline could be a response to the economy, but that explanation is belied by the increase in exhibit funding. And besides, pharma has historically been of the “you have to spend money to make money” mind. So perhaps industry is indeed responding to the criticism about funding CME. Not mentioned, but also possible, is that CME sponsors have been turning away industry money.
Some were concerned that, if industry funding declined, health professionals would be unable or unwilling to pay for their own CEUs. The total income for 2011 CME appears to be in line with prior years (especially given a methodology change adopted in 2011). While the number of physicians participating in CME was down slightly, that decline is consistent with a multi-year downward trend; the number of non-physician participants in CME is slightly up. Finally, as noted by Pharmalot, “other income, which includes registration fees paid by participants, rose 4.4 percent [in 2011] to nearly $1.2 billion.” While more than one year of experience will give a better picture, it seems fair to conclude so far that physicians are indeed willing to underwrite their own CME.
Most important to remember, however, is that the funding issue was merely a surrogate for the question of whether CME is biased either substantively or in subject matter coverage. I don’t think we really knew the answer yesterday, any more than the ACCME report enlightens us about what the answer is today. An annual report about CME in which I and others would really be interested would look at whether the subject matter of conferences has changed — are things being covered that weren’t before? Is comparative cost-effectiveness being addressed in presentations that address alternative treatments? Are real responses to racial health disparities being discussed? Is education being delivered to audiences composed of interdisciplinary healthcare teams rather than the homogeneous audiences found at many academy and similar meetings? Is CME delivery itself being studied to determine what learning methodologies are most effective? In short, if we can conclude that industry is listening to its critics by redirecting its funding, can we also infer that changes are occurring in response to other critiques of CME, such as those posed by the IOM report entitled “Redesigning Continuing Education in the Health Professions” and Seton Hall Law’s Whitepaper entitled “Drug and Device Promotion: Charting a Course for Reform”?
Presumably, it is a good thing to have less industry funding of CME — although we see the change only in the United States, not elsewhere. But it doesn’t get to the heart of the matter, which is the need for significant reform of CME generally. That’s the report I want to read.
The NIH’s Amended Conflict of Interest Regulations: A New, Weaker Approach to Intellectual Property Interests?
Yesterday, at long last, the National Institutes of Health released the final revisions to its regulations governing financial conflicts of interest on the part of applicants for federal research funds. And there is good news. The rule’s sunshine provisions have not, as was feared, been “gutted.” Grant recipients will have to make their investigators’ financial conflicts of interest publicly accessible. While an institution will not have to post the details of each conflict on its website, as was provided in the proposed regulations, if it does not it will instead have to provide the information in writing to anyone who asks for it. Academics, advocates, federal and state prosecutors, other regulators, and members of the news media will have the access they need. To be sure, prospective patients or research participants will be less likely to come across information about investigator conflicts, but, as Kathleen Boozang explains here, it is far from clear that they would find such information helpful.
Of potentially more significance than the weakened sunshine provisions, the final regulations diverge from the proposed regulations with regard to the treatment of intellectual property. Under the prior regulations, investigators were required to inform their institutions about relevant intellectual property rights, including copyrights, patents, and royalties in excess of $10,000. The proposed regulations modified the definition to require disclosure of copyrights, patents, and royalties (and agreements to share in royalties) regardless of amount. Under the final regulations, investigators do not need to tell their institutions about their intellectual property rights and interests unless and until they are in “receipt of income related to such rights and interests.”
The preamble to the final regulations is somewhat confusing. For example, while the final regulations define significant financial interest to exclude intellectual property rights and interests that do not produce income, the agency states in the preamble that it “would expect institutional policies to require disclosure upon the filing of a patent application or the receipt of income related to the intellectual property interest, whichever is earlier.” The preamble also contradicts itself with regard to the applicability of the rule’s $5,000 threshold, stating at one point that the threshold “applies to licensed intellectual property rights (e.g., patents, copyrights), royalties from such rights, and agreements to share in royalties related to licensed intellectual property rights,” while explaining (correctly, I think) at another point that “the $5,000 threshold would apply to equity interests and ‘payment for services,’ which would include salary but not royalties.”
The NIH’s explanation of its addition of the “receipt of income related to such rights and interests” qualifier to the definition of a significant intellectual property right or interest is especially confusing. The agency writes that its intent was to exclude from the definition “the rare cases when unlicensed intellectual property is held by the Investigator instead of flowing through the Institution,” because “it is difficult to determine the value of such interests.” The agency’s point about valuation may be true, but that is an argument in favor of disclosure, not against it. With regard to equity interests, the final regulation requires investigators to disclose any equity interest in a non-publicly traded entity; the Food and Drug Administration similarly requires disclosure of equity interests “whose value cannot be readily determined through reference to public prices[.]” The FDA also requires disclosure of any “[p]roprietary interest in the tested product,” without regard to value.
When an investigator has a proprietary interest in a product under study the potential exists for a serious conflict regardless of the interest’s current value or whether it is currently income-generating. Seton Hall Law’s Center for Health & Pharmaceutical Law & Policy and others have recommended a near-total ban on serving as an investigator in that case. Such a ban cannot, of course, be implemented unless investigators are required to tell their institutions about their proprietary interests.
In Health Choices: Regulatory Design and Processing Modes of Health Decisions, Orly Lobel and On Amir briefly summarize a fascinating series of experiments they have conducted with the support of the Robert Wood Johnson Foundation that test individuals’ ability to make decisions about their health in the face of cognitive depletion or overload. A longer version will be available in September, but the summary is well worth reading. Lobel and Amir begin by reviewing a line of research demonstrating, perhaps unsurprisingly, that “psychological depletion caused by a prior task” (in one study, eating radishes while resisting cookies) leads to a reduced ability to exercise “executive control” and “persist in demanding cognitive activities.” As the authors note, applying this research to the context of health-related decision-making is important because patients and providers alike are frequently asked to process information about relative risk and make reasoned, reasonable decisions under conditions of cognitive depletion or overload.
To test their hypothesis that “absent sufficient resources for executive functions individuals will take more risk in their [health-related] decisions,” Lobel and Amir conducted a lab experiment with approximately 700 participants and a web-based one with over 3,000 participants, including 300 medical doctors. The findings from their studies support the conclusion that depletion affects people’s ability to process risk, albeit not in entirely intuitive or predictable ways. For example, when parents are cognitively depleted, they become more risk averse regarding vaccinating their children, but when policymakers are cognitively depleted, they become less risk averse regarding population-wide vaccination. When consumers were in a state of attention and focus, a long list of potential side effects deterred them from using a new drug. When cognitively overloaded, though, they paid “less attention to warning lists the longer they were.” The implications of Lobel and Amir’s work are many, varied, and vast; I am looking forward to reading the full paper when it comes out in September.
I also highly recommend Christopher Tarver Robertson’s Biased Advice, which was published in the Emory Law Journal earlier this year. Robertson conducted a series of experiments that built on the groundbreaking 2005 study by Daylian M. Cain, George Loewenstein & Don A. Moore evaluating the effect of a conflict of interest, and of disclosure of the conflict, on the quality of advice given by advisors regarding the number of coins in a jar and on the accuracy of advisees’ estimation of the number of coins in the jar. Cain and his colleagues’ most surprising finding was that when a conflict existed, disclosing it caused the accuracy of advisees’ estimates to decline, in part because advisors gave more biased advice when their conflicts were disclosed than when they were not disclosed.
Among other questions, Robertson examined whether a more concrete disclosure about an advisor’s bias (that is, that “prior research has shown that advisors paid in this way tend to give advice that is $7.68 higher on average than the advice of advisors who are paid based on accuracy”) would aid advisees. It did not, because advisees did not do what “one would hope and expect” and simply subtract $7.68 from the advisor’s estimate. Rather, it appears that they “used the bias disclosure not as a mechanism of calibrating their reliance more precisely, but rather as a strengthened warning suggesting that the advice is altogether worthless.”
On the other hand, disclosing to advisees that an advisor was paid based on the accuracy of the advisee’s estimate (i.e., that the advisor’s financial interest was aligned with the advisee’s) led advisees to rely more heavily on the expert’s advice and, as a result, to estimate the number of coins in the jar more accurately. Disclosure of a conflict of interest is also likely to be valuable where advisees can seek out another advisor; a second, unconflicted opinion dramatically increased the accuracy of advisees’ estimates.
Robertson’s research is important and interesting. As he observes, one of the reasons that it is so difficult to rein in health care costs is that “[t]he health care industry is characterized by radically distributed decision making, with each patient deciding upon her own course of treatment within the range of treatments offered by providers and covered by public and private insurers.” Improving individual decisions may be key to bending the cost curve. Robertson’s research suggests that a disclosure mandate could help under certain circumstances, where, for example, there is “epistemic charlatanism” and a physician’s disclosure of a financial conflict of interest would lead a patient to reject the physician’s not-so-expert recommendations. Robertson emphasizes, however, that disclosure does not improve layperson decision-making nearly as much as unbiased advice does.
…After the Horse Has Already Left the Barn: FDA Continues to Postpone Conflicts Review Until Studies Are Complete
On Tuesday, the Food and Drug Administration released a draft guidance on financial disclosure by clinical investigators, targeted at the investigators themselves, at drug and device companies and others who sponsor clinical trials, and at the agency staff who review the disclosures. In the draft guidance, which updates an earlier one, the FDA briefly reviews the financial disclosure regulations, which have not changed, and then provides heavily revised and expanded answers to frequently asked questions.
The draft guidance is a response to a January 2009 report by the Department of Health and Human Services’ Office of the Inspector General (OIG) which recommended that the FDA (1) “ensure that sponsors submit complete financial information for all clinical investigators[,]” (2) “ensure that reviewers consistently review financial information and take action in response to disclosed financial interests[,]” and (3) “require that sponsors submit financial information for clinical investigators as part of the pretrial application process.” The draft guidance addresses the first two recommendations but, unfortunately, FDA has still not taken action on the third.
The draft guidance responds to the OIG’s first recommendation in a number of ways, including in its response to the question “What does the FDA mean by due diligence?” which has grown from three sentences in the earlier guidance to four paragraphs in this one. The draft guidance sets forth in detail what those applying for marketing approval must do to obtain financial information from every investigator who worked on every clinical trial submitted in support of the application. For example, when an applicant is missing an investigator’s financial information because it cannot find him or her, it must try to locate the investigator by making at least two phone calls, sending at least two certified letters, and requesting new contact information from the investigator’s previous institutions. From there, the search might progress to contacting professional associations and conducting internet searches. The draft guidance’s recommendations, if followed, should drastically reduce the number of applications that rely on the due diligence exemption to excuse missing financial information.
With regard to the OIG’s second recommendation, the draft guidance adds and answers the following question: “What will FDA’s reviewers consider when evaluating the financial disclosure information?” In its answer, the FDA explains that “outcome payments (that is, payment that is dependent on the outcome of the study) elicit the highest concern, followed by proprietary interests (such as patents, royalties, etc.); but these are rarely seen.” More typical are equity interests and significant payments of other sorts, in which case the agency takes into consideration the amount and nature of the payment as well as other factors such as the total number of investigators and subjects in the study, whether and how the study is blinded, controlled, and randomized, and whether the study endpoints were objective or subjective. While the agency elsewhere rejects the idea that the financial disclosure requirements be waived for “efficacy studies that include large numbers of investigators and multiple sites[,]” it would appear to agree that the likelihood of a single investigator biasing such a study’s results is low.
The FDA has not taken action on the OIG’s third recommendation: that investigators’ financial information be submitted to the agency as part of the investigational new drug (IND) applications and investigational device exemption (IDE) applications that are filed before studies in humans begin. The draft guidance does exhort sponsors to consult the FDA early and often to minimize potential bias, explaining that “[b]y collecting the information prior to the study start, the sponsor will be aware of any potential problems, can consult with the agency early on, and can take steps to minimize any possibility for bias.”
When sponsors do choose “to consult the FDA early,” the draft guidance provides that agency staff should “focus on the protection of research subjects and the minimization of bias from all sources.” The suggestion that agency staff play a role in protecting research subjects is interesting. It is not mentioned in the regulations or anywhere else in the draft guidance, and it is only possible where sponsors voluntarily seek the FDA’s input. By the time an applicant is required to turn over investigators’ financial information, as part of an application for marketing approval, the horse has left the barn. The clinical trials are complete, and it is too late to protect participants’ rights and interests. Bias, by contrast, can sometimes be addressed retroactively. The draft guidance notes that the FDA’s “[r]eviewers might … compare results from more than one investigator, re-analyze the data excluding the investigator’s results, analyz[e] the data in multiple ways, and/or determin[e] if results can be replicated over multiple studies.” Even bias is better dealt with prospectively, though, not least because agency staff are aware of and sensitive to the expense of conducting clinical trials and are likely to be highly reluctant to disregard a trial’s results.
Because prospective review of investigators’ financial information would allow the FDA to “focus on the protection of research subjects and the minimization of bias” across the universe of studies, not just those in which the sponsor chooses “to consult the FDA early,” the financial disclosure regulations should be revised per the OIG’s recommendations to require that financial information be submitted as part of the pretrial application process.
Comments on the draft guidance are due by July 25, 2011.
Filed under: Conflicts of Interest, Research
Seton Hall University School of Law’s Center for Health & Pharmaceutical Law & Policy has issued a White Paper, “The Limits of Disclosure as a Response to Financial Conflicts of Interest in Clinical Research,” in which the Center agrees that public policy should encourage researchers and institutions to make information about their financial relationships with industry available to the public, but, contrary to many other commentators’ recommendations, concludes that disclosure of financial information should not routinely be required as part of the informed consent process.
While reiterating the Center’s prior recommendations for direct measures to eliminate, reduce, and manage problematic financial relationships in clinical research, the Center notes that, despite “the importance of transparency as an ethical value, incorporating financial issues into the informed consent process would provide few, if any, benefits to research subjects and could in fact cause significant harms.”
The Center notes the problem of “information overload,” as clinical research informed consent documents have already become “long and complex, thereby confusing and overwhelming potential research participants,” and evidence indicates that “participants are often unable to sift through the morass of information to tease out the content they find salient or material.” In addition, qualitative studies have shown that “brief concise statements about financial interest within informed consent documents were rarely understood, and sometimes only served to confuse potential participants.”
The Center concludes that, if a conflict of interest is so serious that its disclosure would lead a reasonable person to refuse to participate in a study, the proper remedy is to eliminate the conflict. It is therefore essential to ensure that information about financial interests is made available to institutional review boards (IRBs) and conflicts of interest committees, so that they can ensure that any problematic conflicts are eliminated before a study begins.
The Center notes that its conclusion that financial conflicts of interest should not be routinely disclosed as part of the informed consent process is not inconsistent with the California Supreme Court’s decision in Moore v. Regents of the University of California.
While Moore creates the possibility that, in the right set of circumstances, a physician’s failure to disclose research-related financial interests could give rise to liability, it does not mean that any and all financial relationships with industry must necessarily be disclosed. Rather, as in any informed consent claim, liability would depend on the plaintiff’s ability to establish the element of causation, i.e., that, if the omitted information had been disclosed, a reasonable person in the plaintiff’s position would not have consented to the procedure. As explained above, under the Center’s proposed framework, any conflict serious enough to affect a reasonable person’s decision about enrollment would already have been eliminated before the research began.
“The Limits of Disclosure as a Response to Financial Conflicts of Interest in Clinical Research” may be found at http://law.shu.edu/HealthLawPublications.
Seton Hall Law School’s Center for Health & Pharmaceutical Law & Policy is a think tank that fosters dialogue, scholarship, and policy solutions to critical issues in health and pharmaceutical law. As part of its mission, it convenes policymakers, consumer advocates, the medical profession, industry, and government in the search for concrete solutions to the ethical, legal, and social questions presented in the health and pharmaceutical arenas. The Center also runs a compliance training program covering the state and federal laws governing the development and marketing of drugs and medical devices.
Filed under: Drugs & Medical Devices, Physician Compensation
The Wall Street Journal recently published a report that outlines the extensive financial benefits that surgeons are receiving for spinal fusion surgeries. The “bounty” — a term used by the WSJ — comes from Medicare reimbursement as well as royalties for the intellectual property contributed by the surgeons to the spinal fusion procedures. Presumably the surgeons also receive money from speaking and training fees.
In short, spinal fusion is a surgical procedure in which two or more vertebrae are fused together. The fusion is accomplished by creating a bridge between the vertebrae, usually constructed out of bone taken from other parts of the body. The bone is inserted between the vertebrae and secured with rods, screws, and plates. This reduces the movement of each vertebra connected to the bridge, thereby relieving stress on the injured vertebrae, disks, and nerves. Spinal fusion may be a necessary treatment in the face of trauma or debilitating diseases affecting the spine, such as scoliosis. However, the use of spinal fusion to treat certain types of back pain has been questioned.
In light of the dearth of comparative effectiveness research regarding nearly all surgical procedures, why then is spinal fusion so controversial? There appear to be two factors: the high price of the surgery, and the strong ties between the surgeons performing the spinal fusions and the medical device manufacturers that produce the hardware used in the procedure.
In particular, the royalty payments are staggering. The WSJ reports that, in the first three quarters of 2010, each of five spinal surgeons at Norton Hospital in Louisville, Kentucky received more than $1.3 million from Medtronic — the leading manufacturer of spinal fusion devices. Norton Hospital — and its surgeons — is certainly not alone in profiting from the procedures.
Though device manufacturers like Medtronic pay out large sums to physicians who develop, utilize, and promote spinal fusion treatments, the manufacturers clearly come out ahead after taking into account the price they charge hospitals for the spinal fusion devices. Not surprisingly, this money often comes from Medicare reimbursement. According to the WSJ’s analysis of Medicare claims, spinal fusion went from costing Medicare $343 million in 1997 to $2.24 billion in 2008. And as the Journal points out, the screws used in spinal fusion implants can be reimbursed at between $1,000 and $2,000 apiece but cost less than $100 to make. Spinal surgeon Charles Rosen is quoted as stating that “You can easily put in $30,000 worth of hardware in a person during a fusion surgery.” A Los Angeles Times report in 2010 found that complex spinal fusion surgeries can end up costing $80,888 in hospital charges, as compared to $23,724 for spinal decompression surgery — the latter referring to a group of procedures that can relieve painful pressure on the spine without the extensive implantation required by spinal fusion.
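The reported figures lend themselves to a quick back-of-the-envelope calculation. The sketch below (in Python, purely illustrative; every number comes from the WSJ and Los Angeles Times figures quoted above, and the variable names are my own) works out the implied growth and markups:

```python
# All dollar figures are taken from the WSJ and LA Times reports
# discussed in the text; the arithmetic is only a rough illustration.

medicare_1997 = 343_000_000      # Medicare spending on spinal fusion, 1997
medicare_2008 = 2_240_000_000    # Medicare spending on spinal fusion, 2008
growth = medicare_2008 / medicare_1997
print(f"Medicare spending on spinal fusion grew roughly {growth:.1f}x from 1997 to 2008")

screw_reimbursed_low = 1_000     # low end of reported per-screw reimbursement
screw_cost = 100                 # reported manufacturing cost is "less than $100"
markup_floor = screw_reimbursed_low / screw_cost
print(f"Each screw is reimbursed at a markup of at least {markup_floor:.0f}x its cost")

complex_fusion = 80_888          # hospital charges, complex fusion (LA Times)
decompression = 23_724           # hospital charges, spinal decompression
ratio = complex_fusion / decompression
print(f"Complex fusion charges run about {ratio:.1f}x those of decompression")
```

Even on these rough numbers, a single screw carries at least a tenfold markup over its manufacturing cost, which goes some way toward explaining how manufacturers can afford generous royalty arrangements.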
Nevertheless, it is true that there are many expensive surgical procedures whose value to the patient justifies the high price. But more than the wise allocation of resources is at issue. In the Journal of the American Medical Association’s April 2010 issue, Dr. Eugene Carragee, professor of spinal disease and orthopaedic surgery at Stanford University School of Medicine, summarized the clinical difficulties facing complex spinal fusion surgery, especially in older individuals:
…the complex reconstruction of spinal deformity in older patients remains a difficult and dangerous enterprise. Complication rates have declined but remain concerning (30%-40%) and the reoperation rates, in a population for whom there is a high risk of both medical and anesthetic complications with additional surgery, remains at 10% to 20% in the most optimistic reports. Moreover, despite these major interventions, this approach is still not effective in 30% to 40% of patients.
When asked about the need for spinal fusion surgeries, Dr. Steven Glassman — one of the Norton Hospital surgeons who has received millions to implant spinal fusion devices — stated that he and his colleagues were “leaders among spine surgeons nationally in comparative effectiveness research.” This is a troubling statement precisely because of the significant royalty agreements between Dr. Glassman and Medtronic described in the WSJ report. Dr. Glassman is incentivized — at least economically speaking — to interpret research findings in a way that maximizes the contexts in which spinal fusion surgery can be recommended.
In response to these conflict-of-interest concerns, Medtronic says that it refrains from paying royalties to collaborating surgeons on the devices they personally use in their patients. This would appear to reduce the incentive for Dr. Glassman to personally churn out spinal fusion operations in the hope of collecting royalties on the hardware he implants. It also helps guard against violations of the federal Anti-Kickback Statute, which prohibits Medtronic from inducing surgeons to purchase its devices. But the policy does little to curb the broader conflicts of interest within the spinal surgery community when it comes to recommending complex spinal fusion surgery. Even if contributing surgeons do not receive royalty payments for the specific surgeries they perform themselves, they still have an incentive to keep Medtronic “happy” by increasing demand for spinal fusion hardware. And one can certainly envision a scenario in which such a surgeon suggests the surgery but refers the procedure itself to a colleague, allowing the royalty payments to flow under a guise of propriety. What does appear certain is that demand for complex spinal fusion operations has increased. Citing a study by Deyo and colleagues in the same JAMA issue, Dr. Carragee points out that:
…the rate of spinal stenosis surgery in the Medicare population has remained more or less stable, but the rate of complex surgery for this disease has increased from negligible levels in 2002 to nearly 15% of all spinal stenosis surgeries in 2007. These more complex surgeries are also reported to be independently associated with increased perioperative mortality, major complications, rehospitalization, and cost.
The findings do not provide explanations for the increase in complex surgery that has occurred during the past 6 years. Ideally, because the complex surgical techniques are used to treat complex deformities, the data should show that patients undergoing these procedures usually have these complex deformities. The diagnoses reported, however, do not support this “ideal” explanation; 50% of these new complex fusion operations were performed in patients with spinal stenosis alone and no deformity. Spinal stenosis with scoliosis by coding, accounted for only 6% of the complex fusions performed.
In other words, there has been an increase in the rate of a complex surgical procedure prone to severe complications, with no concomitant increase in the rate of the severe conditions that would ostensibly warrant such surgery. This demand pays dividends in the royalties the contributing surgeons receive when the U.S. spinal surgery community implants the hardware they developed, and dividends to the manufacturer.
Currently, there appears to be little, if any, countervailing force militating against doctors recommending this complex and expensive procedure. By conducting the complex fusion operation, the surgeon and the hospital both make money through handsome reimbursement from Medicare and private insurance, while Medtronic profits by selling more devices. Those hurt, financially speaking, are taxpayers, through Medicare, and privately insured patients whose premiums have risen because of the increased costs of this procedure. Private insurance plans are hard pressed to push back, as any limit on spinal fusion surgery will be framed as corporate greed coming at the expense of treatment. This is precisely what occurred after Blue Cross and Blue Shield of North Carolina announced that it would place tighter restrictions on spinal fusion surgery. In response to the restrictions, Dr. John Wilson, a neurosurgeon at Wake Forest University Baptist Medical Center and president of the North Carolina Neurological Society, stated that “If this intrusion into the physician-patient relationship goes unchallenged, other insurers will follow suit…It will be a progression of ever-more restrictive policies that will handcuff us as we try to treat patients.” Dr. Wilson was one of nine physicians who wrote a letter to Blue Cross urging the insurer to alter the new rules. Interestingly, the letter repeatedly supports its position by citing the studies of Dr. Daniel Resnick, a spine surgeon who is listed by the Congress of Neurological Surgeons as receiving grant money from Medtronic.
1. ProPublica details the persistent problem that medical schools face in preventing their faculty from accepting money in exchange for speaking on behalf of pharmaceutical companies. As previously noted on this blog, these conflicts of interest are in addition to those found in spinal surgery and cardiac stenting.
2. For the New England Journal of Medicine, Michael E. Porter introduces two recently published papers that explore the concept of value in health care.
3. The Commonwealth Fund provides a summary of a briefing on the ACA’s initiatives to reform primary care. A full video of the briefing (which was co-hosted with the Alliance for Health Reform), as well as a podcast of the audio, can be found here.
4. The Health Care Blog has a nice bulleted Year in Review for Health Information Technology (HIT), including topics such as the HITECH Act, E-prescribing, EHRs, and Health Information Exchanges.
5. The New York Times discusses a new Medicare rule that will cover the costs of voluntary end-of-life treatment planning.
Filed under: Conflicts of Interest, Drugs & Medical Devices, Fraud & Abuse
A Museum of Modern Art exhibit by Michael Burton once proposed that human beings themselves would be the soil for a “future farm”:
Future Farm predicts that the emerging pharmaceutical research in harvesting adult stem cells from fat tissues and its convergence with future nanotechnologies, will bring with it scenarios that reconsider the body as income. We live in a world where industries exist to offer financial rewards for those willing to sell a kidney or produce hair to beautify others. Industries have grown to facilitate transplant tourism as a result of the success of contemporary surgery. And scientific and technological advances continue to bring new possibilities for the practice of farming the body.
This may seem like an overly dramatic or even science-fictionalized description of desperation due to poverty and larger economic trends. But the global economic race to the bottom has now so influenced medical research that Burton’s dark vision is coming closer to realization.
A recent article by Bartlett & Steele and a book by Carl Elliott describe the rise of “contract research organizations” that organize the initial phases of drug trials. Bartlett and Steele choose a provocative metaphor to describe the trend:
To have an effective regulatory system you need a clear chain of command—you need to know who is responsible to whom, all the way up and down the line. There is no effective chain of command in modern American drug testing. Around the time that drugmakers began shifting clinical trials abroad, in the 1990s, they also began to contract out all phases of development and testing, putting them in the hands of for-profit companies.
It used to be that clinical trials were done mostly by academic researchers in universities and teaching hospitals, a system that, however imperfect, generally entailed certain minimum standards. The free market has changed all that. Today it is mainly independent contractors who recruit potential patients both in the U.S. and—increasingly—overseas. They devise the rules for the clinical trials, conduct the trials themselves, prepare reports on the results, ghostwrite technical articles for medical journals, and create promotional campaigns. The people doing the work on the front lines are not independent scientists. They are wage-earning technicians who are paid to gather a certain number of human beings; sometimes sequester and feed them; administer certain chemical inputs; and collect samples of urine and blood at regular intervals. The work looks like agribusiness, not research.
Because of the deference shown to drug companies by the F.D.A.—and also by Congress, which has failed to impose any meaningful regulation—there is no mandatory public record of the results of drug trials conducted in foreign countries. Nor is there any mandatory public oversight of ongoing trials.
Therefore, it is up to journalists like Bartlett & Steele to uncover problems. And they are legion:
The Argentinean province of Santiago del Estero, with a population of nearly a million, is one of the country’s poorest. In 2008 seven babies participating in drug testing in the province suffered what the U.S. clinical-trials community refers to as “an adverse event”: they died. . . . In New Delhi, 49 babies died at the All India Institute of Medical Sciences while taking part in clinical trials over a 30-month period. . . . In 2007, residents of a homeless shelter in Grudziadz, Poland, received as little as $2 to take part in a flu-vaccine experiment. The subjects thought they were getting a regular flu shot. They were not. At least 20 of them died.
Bartlett and Steele also discuss problems in research in the US. Exploitation probably should not be a surprise in a country where unpaid prison labor appears to be a strategy to boost productivity. US companies are also driving the “initial stages of distributed human computing that can be directed at mental tasks the way that surplus remote server rackspace or Web hosting can be purchased to accommodate sudden spikes in Internet traffic.” (Such “human intelligence tasks” can be purchased for as little as a penny each on Amazon’s Mechanical Turk.) But the slow infiltration of less developed countries’ standards into US drug testing should be a concern for the FDA.
The system also appears to give drug companies wide latitude to manipulate results, leading to the rise of “rescue countries” that are particularly prone to produce positive results:
One big factor in the shift of clinical trials to foreign countries is a loophole in F.D.A. regulations: if studies in the United States suggest that a drug has no benefit, trials from abroad can often be used in their stead to secure F.D.A. approval. There’s even a term for countries that have shown themselves to be especially amenable when drug companies need positive data fast: they’re called “rescue countries.” Rescue countries came to the aid of Ketek, the first of a new generation of widely heralded antibiotics to treat respiratory-tract infections. . . . In 2004—on April Fools’ Day, as it happens—the F.D.A. certified Ketek as safe and effective. The F.D.A.’s decision was based heavily on the results of studies in Hungary, Morocco, Tunisia, and Turkey. The approval came less than one month after a researcher in the United States was sentenced to 57 months in prison for falsifying her own Ketek data.
Massive global inequalities render populations around the world vulnerable to exploitative testing conditions.
Carl Elliott’s book White Coat, Black Hat covers similar terrain, as well as the conflicts of interest and other issues we’ve addressed at Seton Hall’s health law center. His review of recent books on medical research described a “mild torture economy.” His piece “Guinea Pigging” suggests that “rescue counties” in the US may complement the “rescue countries” of Bartlett and Steele:
This unit was in a university hospital, not a corporate lab, and the staff had a casual attitude toward regulations and procedures. “The Animal House of research units” is what [one research subject] calls it. . . . Although study guidelines called for stringent dietary restrictions, the subjects got so hungry that one of them picked the lock on the food closet. “We got giant boxes of cookies and ran into the lounge and put them in the couch,” Rockwell says. “This one guy was putting them in the ceiling tiles.” Rockwell has little confidence in the data that the study produced. “The most integral part of the study was the diet restriction,” he says, “and we were just gorging ourselves at 2 A.M. on Cheez Doodles.”
Elliott’s litany of poorly controlled or ramshackle studies gives us one more item to add to Dr. John Ioannidis’s many reasons for doubting medical research:
Ioannidis [has] laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right. . . .
When a five-year study of 10,000 people finds that those who take more vitamin X are less likely to get cancer Y, you’d think you have pretty good reason to take more vitamin X . . . But these studies often sharply conflict with one another. Studies have gone back and forth on the cancer-preventing powers of vitamins A, D, and E; on the heart-health benefits of eating fat and carbs; and even on the question of whether being overweight is more likely to extend or shorten your life. How should we choose among these dueling, high-profile nutritional findings? Ioannidis suggests a simple approach: ignore them all.
For starters, he explains, the odds are that in any large database of many nutritional and health factors, there will be a few apparent connections that are in fact merely flukes, not real health effects—it’s a bit like combing through long, random strings of letters and claiming there’s an important message in any words that happen to turn up. But even if a study managed to highlight a genuine health connection to some nutrient, you’re unlikely to benefit much from taking more of it, because we consume thousands of nutrients that act together as a sort of network, and changing intake of just one of them is bound to cause ripples throughout the network that are far too complex for these studies to detect, and that may be as likely to harm you as help you. . . .[S]tudies rarely go on long enough to track the decades-long course of disease and ultimately death. Instead, they track easily measurable health “markers” such as cholesterol levels, blood pressure, and blood-sugar levels, and meta-experts have shown that changes in these markers often don’t correlate as well with long-term health as we have been led to believe. . . .
And these problems are aside from ubiquitous measurement errors (for example, people habitually misreport their diets in studies), routine misanalysis (researchers rely on complex software capable of juggling results in ways they don’t always understand), and the less common, but serious, problem of outright fraud (which has been revealed, in confidential surveys, to be much more widespread than scientists like to acknowledge). . . .If a study somehow avoids every one of these problems and finds a real connection to long-term changes in health, you’re still not guaranteed to benefit, because studies report average results that typically represent a vast range of individual outcomes. Should you be among the lucky minority that stands to benefit, don’t expect a noticeable improvement in your health, because studies usually detect only modest effects that merely tend to whittle your chances of succumbing to a particular disease from small to somewhat smaller. “The odds that anything useful will survive from any of these studies are poor,” says Ioannidis—dismissing in a breath a good chunk of the research into which we sink about $100 billion a year in the United States alone.
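Ioannidis’s point about chance “connections” in large databases is the classic multiple-comparisons problem, and a quick simulation makes it concrete. The sketch below is purely illustrative (the subject and factor counts are assumptions, not figures from the studies discussed): it builds a “database” in which hundreds of factors are, by construction, unrelated to the health outcome, then counts how many of them nonetheless clear the conventional p < 0.05 bar.

```python
import random

# Illustrative sketch only: none of these "nutrients" has any real
# relationship to the outcome, yet some will look "significant."
random.seed(0)

N_SUBJECTS = 1000
N_FACTORS = 200  # hypothetical factors measured per subject

outcome = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# With n = 1000, |r| > ~1.96/sqrt(n) ~= 0.062 corresponds roughly to a
# two-sided p < 0.05 under the null hypothesis of no association.
THRESHOLD = 1.96 / N_SUBJECTS ** 0.5

flukes = 0
for _ in range(N_FACTORS):
    factor = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]
    if abs(correlation(factor, outcome)) > THRESHOLD:
        flukes += 1

# We expect roughly 5% of the 200 unrelated factors -- about 10 --
# to cross the significance threshold purely by chance.
print(flukes)
```

Run enough uncorrected tests and "discoveries" are guaranteed; that is the letters-in-random-strings point above, before bias, measurement error, or fraud even enter the picture.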
To summarize: Ioannidis casts some doubt on even the best of studies, and Elliott, Bartlett, and Steele show that bad studies may be far more common than we suspect. It’s a troubling set of observations for all concerned. We should at the very least insist on much more systematic monitoring of global drug trials.
A Well Placed Question by Professor Mirkay: “Should Medical-Related Charities Increase Disclosure of Their Donors?”
Filed under: 501(c)(3), Health Reform, Transparency
We’ve written a great deal here at HRW about the need for transparency in industry/profession interactions and the elimination of conflicts of interest–the Center for Health & Pharmaceutical Law & Policy here at Seton Hall Law has, in fact, over the course of the last two years, issued two White Papers on the subject–with another on the way. In the most recent, “Conflicts of Interest in Clinical Trial Recruitment & Enrollment: A Call for Increased Oversight,” the Center proposed legal and policy changes to address conflicts of interest in the relationships between industry and doctors that can create unwarranted risks to trial participants and to the scientific integrity of research. In the prior Paper, “Drug and Device Promotion: Charting a Course for Policy Reform,” the Center recommended: (1) making payments by drug and device companies to doctors transparent, with public disclosure by industry and physicians of their financial relationships; (2) adopting federal legislation to ban gifts, meals and other benefits provided to doctors as part of the current marketing model; (3) setting new policies to give FDA the authority to require studies of safety and efficacy of drugs and devices used off-label; and (4) undertaking a fundamental change in funding for continuing medical education to end industry support.
But over at Nonprofit Law Prof Blog, Professor Nicholas A. Mirkay of Widener University School of Law, has a post–and an additional question–well worth considering:
Professor Mirkay points to a recent Chronicle of Philanthropy article which raises the issue as the National Alliance on Mental Illness (NAMI) has begun disclosing the names of corporations and foundations who (does Citizens United make that “who” correct? Never mind appropriate.) donate more than $5,000. NAMI is said to have done so on the heels of an investigation by Senator Chuck Grassley into its financial relationship with the pharmaceutical industry. Mirkay writes:
NAMI’s actions have given Grassley further impetus to force 33 other nonprofit medical associations to follow NAMI’s lead. In a related article, the Chronicle reports that Grassley’s inquiry into these other groups represents a “broader effort by the senator and others to expose and curtail corporate influence on the medical field.” Grassley commented that “[t]hese organizations have a lot of influence over public policy, and people rely on their leadership. There’s a strong case for disclosure and the accountability that results.”
Professor Mirkay also writes:
In December 2009, Grassley sent a letter to 33 such nonprofit associations requesting information on the amount of funds received from pharmaceutical, medical-device and insurance companies from 2006 to 2009, the identity of the donors and how their money was spent by the medical group, and additional information on the outside income earned by the groups’ top executives and board members.
The (partial) results of those queries are not particularly heartening, but are certainly worth considering. Mirkay writes:
The Chronicle acquired more than half of the solicited groups’ responses to Grassley’s letter, finding that such groups receive aggregately more than $100 million annually from medical-related companies via “donations, advertising revenues, exhibit fees, corporate memberships, and support for continuing medical education.” For some groups, this can represent as much as 78% of their revenue, while for others it only represents a small percentage of their total receipts.
Despite the longings of Elvis Costello, it’s hard to bite the hand that feeds you–and 78% of revenue pretty much constitutes (in)visible means of support. In pushing further with our (or more accurately, the Supreme Court’s) Citizens United “who” conceit, one might think 78% sufficient in some sense to constitute dependent status under the tax code–at least for purposes of context. Having said that, in addition to not biting, it’s not hard to imagine the dependent regularly fed doing that which it may to help assure the continued regularity of that feeding. Especially if the feedings are invisible.
Mirkay also rightly points out that “This effort is further evidence of Grassley’s commitment to increased transparency of tax-exempt nonprofits.” And given that Senator Grassley follows HRW on Twitter, and that I have at times been critical of some of his positions on other issues, it is worth noting that the Senator should be roundly applauded for his efforts here.
[And if you haven't been over to the Nonprofit Law Prof Blog, you should. It's in our blog roll for good reason-- their work is informative, brief and well written.]
Shine On: Lessons for Disclosure of Industry Payments to Physicians from New Empirical Research on Sarbanes-Oxley’s Conflict of Interest Disclosure Requirement
In Placebo Ethics: A Study in Securities Disclosure Arbitrage, an interesting (and very readable) article published earlier this year in the Virginia Law Review, Usha Rodrigues and Mike Stegemoller present the results of their empirical research into Section 406 of the Sarbanes-Oxley Act of 2002, which “requires companies to disclose their codes of ethics (or explain why they do not have them), and then to disclose any waivers from that code granted to top corporate officers.” Briefly, Rodrigues and Stegemoller found that “at least for related-party transactions, firms regularly engage in a kind of ‘disclosure arbitrage,’ neglecting to disclose ethics waivers at the time when transactions occur (in violation of Section 406 of Sarbanes-Oxley),” but complying with a regulation requiring disclosure of such transactions in their year-end proxy statements. Rodrigues and Stegemoller’s observations about Section 406 have relevance beyond securities law, including (of particular interest to readers of this blog) for recent efforts to regulate conflicts of interest in medicine through disclosure of relationships between physicians and the pharmaceutical and medical device industries.
For several years, a small number of states have required drug and device companies to report their relationships with physicians practicing in those states. Section 6002 of the Patient Protection and Affordable Care Act takes it to the next level, requiring “drug, device, biological, or medical supply” companies to report all of the payments they make to physicians and teaching hospitals in all 50 states. The Secretary of Health and Human Services is required to make the payment information public “through an Internet website,” in a form that is clear, understandable, and searchable, and in a format that is easily aggregated and downloaded. While drug and device companies do not need to submit their first reports under PPACA until March 31, 2013, those reports are to include all payments made to physicians and teaching hospitals in 2012. As a result, drug and device companies are hard at work right now putting systems in place to accomplish the information gathering and organizing that nationwide reporting will require.
In a number of important ways, the disclosure regime established by PPACA comports with Rodrigues and Stegemoller’s findings and recommendations. First, having found that disclosure via company websites (as is allowed under Section 406 of Sarbanes-Oxley) has a number of downsides, they recommend that disclosure be made through EDGAR, the SEC’s consolidated, easy-to-use, indefinitely accessible database. As Duff Wilson reports here, several companies have begun disclosing the payments they make to physicians on their own websites and downsides similar to those pointed out by Rodrigues and Stegemoller have been noted. For one, a patient interested in learning more about a given doctor’s relationships with industry would have to search each company’s website individually and then compile the results. The companies have not made this easy to do; most use formats that make it very difficult to aggregate or analyze the data they report. PPACA’s HHS-run website will solve these problems.
Relatedly, Rodrigues and Stegemoller found that, given the chance, companies will choose to bury “unsavory related-party transactions” “in the rubble of sundry disclosures.” This pitfall, too, should be avoided under PPACA’s disclosure regime. (If anything, the statute is too lean-and-mean, providing that payments be labeled with bare descriptors like “consulting fees” and “gift.”)
Finally, Rodrigues and Stegemoller suggest that one of the problems with Section 406 of Sarbanes-Oxley is that it sets forth a “soft” disclosure requirement; a company is permitted to determine for itself what its code of ethics permits or does not permit and, a fortiori, under what circumstances a (disclosable) waiver of that code will be required. Predictably, this leads to “companies evad[ing] illegality by watering down their codes to such a degree that they no longer forbid the very Enron-style conflicts of interest that led to the adoption of Section 406.” Section 6002 of PPACA, by contrast, sets forth a “hard” disclosure requirement. Companies have to disclose all payments, not just those that they have determined create a conflict of interest.
There is one concern raised by Rodrigues and Stegemoller with regard to Section 406 of Sarbanes-Oxley that may apply to Section 6002 of PPACA — the problem of enforcement (or lack thereof). They note that “the basic consequence of underenforcement is the imposition of disclosure requirements on paper that are ignored in real life.” Overlapping disclosure requirements (such as those that Rodrigues and Stegemoller exploited to conduct their research) are one way to determine whether required disclosures are made. With regard to physician payments, a valuable cross check would be provided by the draft Public Health Service conflict regulations’ requirement that any significant financial interest that (1) is still held by a principal investigator or senior/key person, (2) is related to government-funded research, and (3) is a financial conflict of interest must be disclosed to the public via the world wide web; the disclosures that physician-investigators must make to medical journals will also serve this function.
Filed under: Conflicts of Interest, Continuing Medical Education
Last week’s Journal of the American Medical Association (JAMA) reported on the challenges that certain medical schools and medical centers across the country are facing as they decrease or eliminate industry funding of their continuing medical education (CME) programs. These institutions have shown concern over the potential conflicts of interest when pharmaceutical and medical device companies fund educational programs which could bias future prescribers/customers toward their targeted products. The adoption of industry-free CMEs could help filter out any potential marketing messages, and leave behind a balanced and evidence-based perspective. After all, as the New York Times has reported, there are over 700 accredited CME providers in the United States and CME spending hovers around $2.5 billion per year, nearly half of which is paid by pharmaceutical and medical device companies.
Last year, three medical schools declined industry support for their CMEs: the University of Missouri-Kansas City School of Medicine, Nova Southeastern University College of Osteopathic Medicine, and Touro University Nevada College of Osteopathic Medicine. This past June, the University of Michigan Medical School (UMMS) announced an actual policy change, effective January 1, 2011, whereby the school will no longer accept industry funds, which presently comprise almost 45% of its total CME funding. UMMS believes contributions from various departments will help offset this sizable loss, as will higher CME registration fees and “less glamorous venues.” UMMS is the first medical school to introduce such a policy and does so, noting “we should take pride in our position as a national leader on this issue.”
This is all well and good, but in light of the enormous industry contributions, the $64,000 question really becomes: “Is Industry-free CME a Sustainable Model?” And that’s exactly what speakers and attendees asked at the June 25, 2010, PharmedOut Conference entitled “Prescription for Conflict: Should Industry Fund Continuing Medical Education?” The conference was held at Georgetown University (PharmedOut is a Georgetown University Medical Center project funded by the Attorney General Consumer and Prescriber Education grant program. Its team of physicians and academics lecture on the physician-pharma relationship, and provide access to online and industry-free CMEs). Dr. Robert Wittes, Physician-in-Chief at Sloan-Kettering’s Memorial Hospital for Cancer and Allied Disease (“Memorial Hospital”), told his colleagues during the Conference that:
[t]here is life in CME after you do something like this. But you have to be willing to prioritize the activity, such as putting institutional funds toward the balance [previously covered by commercial funds] and/or charge registration fees for CME activities that involve outside physicians.
Memorial Hospital stopped accepting industry money for its CMEs in 2007. Dr. Wittes acknowledged that “[w]e don’t have these things in hotels in mid-Manhattan anymore; we have them on our own premises.” Yet he cautioned institutions against completely severing ties with industry, because there are positive interactions that can result in improved products and commercial science.
For further reading, check out Seton Hall Law’s Center for Health & Pharmaceutical Law & Policy‘s whitepaper on “Drug and Device Promotion: Charting a Course for Policy Reform.” The Center makes several recommendations for overhauling the CME funding mechanism. It also points out that accountants, lawyers, and other professions pay for their continuing education programs. Be sure to check out Michael Ricciardelli’s post on industry funding of CMEs for nurse practitioners and Kate Greenwood’s post on ACCME Standards for Commercial Support too.
Center for Health & Pharmaceutical Law & Policy Submits Comments on Conflicts of Interest in Research to the National Institutes of Health
Filed under: Conflicts of Interest, Health Reform
On August 19, 2010, on behalf of Seton Hall Law’s Center for Health & Pharmaceutical Law & Policy, Seton Hall Law Professors Kathleen Boozang and Carl Coleman, along with Research Fellow Kate Greenwood, submitted comments on the National Institutes of Health’s proposed revisions to its regulations governing conflicts of interest in federally-funded research. While the Center’s November 2009 White Paper Conflicts of Interest in Clinical Trial Recruitment & Enrollment: A Call for Increased Oversight endorsed limits on conflicts of interest beyond those that the NIH has proposed, the revised regulations are a step in the right direction and in its comments the Center commends the NIH for its decisive action on this issue.
Briefly, the Center:
- Supports the NIH’s proposal that researchers disclose to their institutions any significant financial interest that “reasonably appears to be related to the Investigator’s institutional responsibilities,” with “institutional responsibilities” defined to include “activities such as research, research consultation, teaching, professional practice, institutional committee memberships, and service on panels such as Institutional Review Boards or Data and Safety Monitoring Boards.” This comports with the Center’s recommendation in the White Paper that investigators not be charged with determining for themselves whether one or more of their financial interests could be affected by a specific research project.
- Supports the NIH’s decision to significantly lower the monetary threshold at which a researcher’s financial interest becomes “significant” to $5,000, but argues that a lower threshold would be better. Collection of data about all of a researcher’s relationships with industry, even those that fall below the proposed $5,000 threshold, would facilitate better conflict of interest assessment and management and make possible research into the effects of conflicts on research integrity and human subject welfare.
- Supports the NIH’s decision not to exclude income from non-profit entities for lectures and similar engagements from the definition of significant financial interest and its conclusion that any equity interest in a non-publicly traded entity is significant, as are any and all intellectual property rights, but encourages the agency to revisit its decision to shield from disclosure (1) equity interests held by investigators in commercial or for-profit institutions and (2) royalties and other remuneration other than salary paid to an investigator by an institution that appoints or employs him or her.
- Notes that the draft revised regulations do not address the White Paper’s criticisms that the conflict of interest regulations place no “substantive limits on the kinds of conflicts that may exist” and fail to put forth “a required minimum response for conflicts that pose the greatest risks to participants and the integrity of the research” and encourages the NIH to consider again the benefits of setting forth required minimum responses to those conflicts that are the most problematic.
- Supports the NIH’s decision to require that grantees provide “sufficient information to enable the [agency] to understand the nature and extent of the financial conflict, and to assess the appropriateness of the Institution’s management plan.”
- Supports the requirement in the draft revised regulations that any significant financial interest that (1) is still held by a principal investigator or senior/key person, (2) is related to PHS-funded research, and (3) is a financial conflict of interest must be disclosed to the public via the world wide web.
- Supports the draft revised regulations’ requirement that investigators complete training on “the Institution’s policy on financial conflicts of interest, the Investigator’s responsibilities regarding disclosure of significant financial interests, and of these regulations” before the commencement of research and then at least once every two years. As recommended in the Center’s White Paper, it would be beneficial for the training to include as well a discussion of the nature of conflicts of interest and their potential for harm.
- Recommends that the agency adopt its own suggestion that institutions be required to “maintain up-to-date, written, enforced policies” on institutional conflicts of interest, as they are for investigator conflicts, and that these policies be made publicly available via the world wide web. The nudge this requirement would provide is necessary because institutions have been slow to develop and adopt policies on institutional conflicts.
- Recommends that the section of the regulations devoted to remedies be revised to include a non-exclusive list of potential enforcement actions such as temporary withholding of cash payments pending correction of the deficiency, suspension or termination of the contract or grant in whole or in part, monetary assessments and penalties, and suspension or debarment from eligibility for future contracts or grants.
The Center’s comments in their entirety are available here.
Seton Hall Law School’s Center for Health & Pharmaceutical Law & Policy is a think tank that fosters dialogue, scholarship, and policy solutions to critical issues in health and pharmaceutical law. As part of its mission, it convenes policymakers, consumer advocates, the medical profession, industry, and government in the search for concrete solutions to the ethical, legal, and social questions presented in the health and pharmaceutical arenas. The Center also runs a compliance training program covering the state and federal laws governing the development and marketing of drugs and medical devices.
Recommended Reading, “Regulating Conflicts of Interest in Research: The Paper Tiger Needs Real Teeth”
Jesse Goldner’s Regulating Conflicts of Interest in Research: The Paper Tiger Needs Real Teeth, 53 St. Louis U. L.J. 1211 (2009), is a must-read for anyone who has anything to do with oversight of researchers’ conflicts of interest. The article reflects an insider’s understanding of academic physicians’ perspectives on this still-contentious topic, provides a terrific survey of the literature, and proposes federal regulatory fixes that HHS will, one hopes, seriously consider. The article’s timing is perfect, given that HHS is receiving comments until August 19, 2010 on proposed changes to its conflict of interest regulations. See http://grants.nih.gov/grants/policy/coi/.

Even in the short time since the publication of Goldner’s article, HHS OIG has issued yet another report on conflict of interest management, entitled “How Grantees Manage Financial Conflicts of Interest in Research Funded by the National Institutes of Health” (Nov. 2009), available at http://oig.hhs.gov/oei/reports/oei-03-07-00700.pdf. Based upon an in-depth audit of 41 grantee institutions that reported conflicts in FY 2006, the OIG found that equity interests represent the most pervasive form of financial conflict of interest. The tool most often employed by entities managing conflicts is disclosure in publications or at academic presentations; entities only rarely required the reduction or elimination of conflicts. Equally important, and unsurprising in light of AAMC surveys, is the unreliability of the conflict reporting mechanisms used by most academic institutions.
The OIG report underscores the need for increased oversight of conflicts of interest. Academic medical centers have had plenty of time and forewarning to address the issue but, as demonstrated by a vignette Goldner recounts about his own efforts to accomplish this through the IRB he chaired, faculty resistance is significant. Consequently, Goldner is exactly right in calling upon HHS to issue aggressive regulations that accomplish the necessary reforms. He would require the establishment of conflict of interest committees at every research institution, composed primarily of independent members, to which faculty would report all financial relationships that create conflicts of interest. Resolution of such conflicts would be a condition precedent to proceeding with proposed research, and violations would result in significant penalties, including debarment from research.
As will be discussed in a forthcoming Seton Hall White Paper entitled The Limits of Disclosure as a Response to Conflicts of Interest in Clinical Research, I am not confident that much benefit accrues from requiring disclosure of conflicts to research participants in consent forms, although research participants do have a right to know of such conflicts. But this is a minor quibble. Goldner’s article is a great contribution to the literature.
[Ed. Note: We are pleased to welcome Professor Gaia Bernstein to Health Reform Watch. Articles about her recent scholarship, “Over-parenting,” may be found at the ABA Journal and The New York Times Magazine.]
Genetic testing for adult-onset diseases used to be mainly a medical service. In most cases, a person with a family history of a particular genetic disease would be tested to see whether she carried the associated mutation. For example, a woman with several cases of breast cancer in her family might test for a mutation in the breast cancer genes BRCA1/BRCA2 to learn whether she carries a mutation and thus has a high probability of developing the disease. But the proliferation of direct-to-consumer genetic testing has transformed this into a consumer service. Companies like 23andme and Pathway Genomics (which was planning to start selling its kits in Walgreens) offer consumers packages of tests covering anywhere from 25 to over 100 conditions. Consumers often buy the tests to satisfy their curiosity, or may even receive them as gifts. People purchasing the testing packages usually do not consult a medical professional when deciding to undergo the tests, and they receive the results alone by accessing a website.
Yesterday I spoke before the FDA, which is considering regulating direct-to-consumer genetic testing. My presentation was based on a symposium piece I am working on. I argued that a medical professional should guide people throughout the process and advise them not just on the interpretation of the results but also earlier in the process, to determine what genetic information they actually want to have.
Interpreting the results of genetic tests is not easy. Unlike other over-the-counter tests, such as a pregnancy test, which gives a clear positive or negative result, genetic tests are about probabilities. Even a person who tests positive for a certain mutation may never develop the disease, depending on other, non-genetic factors. People have a hard time understanding the results of genetic tests, and for that reason there have been many calls to require the guidance of a medical professional in the delivery of the results.
But I believe focusing on the interpretation of the results is only half the issue. It is equally important to have professional guidance at the outset, to determine which tests to undergo. A medical professional should tailor the panel of tests to the individual who wants to be tested. Why? First, some people, given the chance to think it through, may prefer not to know all of their genetic information. A person may prefer not to know, for example, that he is likely to develop Alzheimer’s at a young age. Second, not all genetic information is created equal. Some genetic tests convey little useful information: a positive result may demonstrate only a slightly higher likelihood of getting the disease than the probability in the general population. Eliminating such tests at the outset simplifies the interpretation of the results, making it possible to focus on the truly important positive results at the end of the process.
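The gap between weakly and strongly informative tests comes down to simple arithmetic. The sketch below uses entirely hypothetical numbers (the baseline risk and relative risks are invented for illustration, not drawn from any clinical source) to show why some positive results barely move a person’s risk while others change it dramatically:

```python
# Toy illustration with hypothetical numbers: what a "positive" genetic
# test means depends on how strong the marker is.

def absolute_risk(baseline_risk: float, relative_risk: float) -> float:
    """Absolute risk for a carrier, given the population baseline risk
    and the marker's relative risk (hypothetical inputs throughout)."""
    return baseline_risk * relative_risk

BASELINE = 0.02  # assume a 2% risk of the disease in the general population

# A weak marker: relative risk of 1.2 -- a carrier's risk is 2.4%,
# barely above the 2% everyone already faces.
weak = absolute_risk(BASELINE, 1.2)

# A strong, high-penetrance marker: relative risk of 30 -- a carrier's
# risk jumps to 60%, information that genuinely warrants attention.
strong = absolute_risk(BASELINE, 30.0)

print(f"weak marker:   {weak:.1%} vs. {BASELINE:.1%} baseline")
print(f"strong marker: {strong:.1%} vs. {BASELINE:.1%} baseline")
```

Guidance at the outset might drop weak-marker tests like the first from the panel entirely, so that any positive result a consumer eventually confronts is one worth acting on.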
To achieve all this, it is important for the law to require the guidance of a medical professional who is not a representative of the genetic testing company. A medical professional working for the testing company may know the tests well, but has an interest in having the consumer purchase as many tests as possible. That interest conflicts with the interest of the consumer, who may be best served by a more limited panel of tests tailored specifically to him.
Filed under: Ethics, Health Care Economics, Health Policy Community, Health Reform, Insurance Companies, Prescription Drugs, Research
One of the most robust “memes” in contemporary law is the power of disclosure. In health law, disclosure comes up again and again: patients need to give “informed” consent, insurers are supposed to explain their policies clearly, and conflicts of interest, when not proscribed, should at the very least be exposed. But there are growing challenges to the disclosure meme, both within health law and without.
George Loewenstein and Peter Ubel note some problems with disclosure approaches in this article on the weaknesses of behavioral economics generally:
It seems that every week a new book or major newspaper article appears showing that irrational decision-making helped cause the housing bubble or the rise in health care costs. Such insights draw on behavioral economics, an increasingly popular field that incorporates elements from psychology to explain why people make seemingly irrational decisions, at least according to traditional economic theory and its emphasis on rational choice. . . . But the field has its limits. As policymakers use it to devise programs, it’s becoming clear that behavioral economics is being asked to solve problems it wasn’t meant to address.
[T]ake conflicts of interest in medicine. Despite volumes of research showing that pharmaceutical industry gifts distort decisions by doctors, the medical establishment has not mustered the will to bar such thinly disguised bribes, and the health care reform act fails to outlaw them. Instead, much like food labeling, the act includes “sunshine” provisions that will simply make information about these gifts available to the public. We have shifted the burden from industry, which has the power to change the way it does business, to the relatively uninformed and powerless consumer.
The same pattern can be seen in health care reform itself. The act promises to achieve the admirable goal of insuring most Americans, yet it fails to address the more fundamental problem of health care costs. . . . [T]he act tries to lower costs by promoting incentive programs that reward healthy behaviors. . . . [But s]tudies show that preventive medicine, even when it works, rarely saves money.
At its worst, disclosure can become merely pro forma; as Kafka (via Trudo Lemmens) puts it, “Leopards break into the temple and drink to the dregs what is in the sacrificial pitchers; this is repeated over and over again; finally it can be calculated in advance, and it becomes part of the ceremony.” Omri Ben-Shahar has argued that disclosure is one of many aspects of consumer protection law with little real impact on individual welfare. As Amelia Flood reports,
Ben-Shahar, who spent last summer studying all the mandated disclosure statutes in Illinois, Michigan and California, argues that consumer protection advocates have gotten it wrong when it comes to mandating information access for consumers. He says consumers get lost in a sea of technical language, unread disclaimers and long-shot lawsuits. . . . According to Ben-Shahar, disclosures are of more use to consumer ratings groups like Zagat and Consumer’s Digest than they are to most consumers.
So perhaps there is some hope here: third-party aggregators and raters might use disclosures as part of an overall effort to rate various hospitals or doctors. The question then becomes: who shall pay (and rate) the raters? One irony here is that doctor rating sites have themselves been accused of being insufficiently transparent about the ways in which they evaluate physicians. New York Attorney General Cuomo even pursued the matter. His office eventually settled with insurers who ran rating sites; they pledged to “fully disclose to consumers and physicians all aspects of their ranking system.”
What’s the lesson here? First, that consumers are, by and large, too busy to process piecemeal disclosures by professionals like physicians and other health care providers. Second, third party raters can fill some of this information gap by aggregating information. Third, this process of aggregation and rating itself will likely need to be closely supervised by a good-faith regulator, lest it fail to take into account the full range of interests (and quality of information) proper for the task.