Industry Funding of CME Down, According to ACCME

When we think about health care reform, we need to remember that reforms have been attempted through many avenues and in myriad parts of our health system. The IRS revised Form 990 and Schedule H in anticipation of the ACA; critics of conflicts of interest have been working on multiple fronts simultaneously. One of the challenges posed by all of these changes is measuring whether they have made any positive difference.

The Accreditation Council for Continuing Medical Education (ACCME), the accrediting entity for Continuing Medical Education (CME), just released its annual report, which was much less interesting than I had hoped, as it is mostly a financial statement for the year’s CME activities. Nonetheless, it shows that industry grants to CME have declined from 50 to 30 percent of total CME income, a drop attributed to the tremendous scrutiny CME has received over the years. Importantly, industry has not lost interest in medical conferences: ad revenue from exhibits rose 7.2 percent to $296 million. You can read a summary of the report on Pharmalot or read the whole report.

While interesting, this news about CME funding is hard to interpret. The decline could be a response to the economy, but that explanation is belied by the increase in exhibit funding. And besides, pharma has historically been of the “you have to spend money to make money” mind. So perhaps industry is indeed responding to the criticism about funding CME. The report does not address whether CME providers have been turning away industry money, but that too is a possibility.

Some were concerned that, if industry funding declined, health professionals would be unable or unwilling to pay for their own continuing education units (CEUs). The total income for 2011 CME appears to be in line with prior years (especially given a methodology change adopted in 2011). While the number of physicians participating in CME was down slightly, that decline is consistent with a multi-year downward trend; the number of non-physician participants in CME is slightly up. Finally, as noted by Pharmalot, “other income, which includes registration fees paid by participants, rose 4.4 percent [in 2011] to nearly $1.2 billion.” While more than one year of experience will give a better picture, it seems fair to conclude so far that physicians are indeed willing to underwrite their own CME.

Most important to remember, however, is that the funding issue was merely a surrogate for the question of whether CME is biased, either substantively or in subject matter coverage. I don’t think we really knew the answer yesterday, any more than the ACCME report enlightens us about what the answer is today. An annual report about CME in which I and others would really be interested would look at whether the subject matter of conferences has changed: are topics being covered that weren’t before? Is comparative cost-effectiveness being addressed in presentations that cover alternative treatments? Are real responses to racial health disparities being discussed? Is education being delivered to audiences composed of interdisciplinary healthcare teams rather than the homogeneous audiences found at many academy and similar meetings? Is CME delivery itself being studied to determine which learning methodologies are most effective? In short, if we can conclude that industry is listening to its critics by redirecting its funding, can we also infer that changes are occurring in response to other critiques of CME, such as those posed by the IOM report entitled “Redesigning Continuing Education in the Health Professions” and Seton Hall Law’s Whitepaper entitled “Drug and Device Promotion: Charting a Course for Reform?”

Presumably, it is a good thing to have less industry funding of CME, although we only see the change in the United States, not elsewhere. But it doesn’t get to the heart of the matter, which is the need for significant reform of CME generally. That’s the report I want to read.


The NIH’s Amended Conflict of Interest Regulations: A New, Weaker Approach to Intellectual Property Interests?

August 24, 2011

Yesterday, at long last, the National Institutes of Health released the final revisions to its regulations governing financial conflicts of interest on the part of applicants for federal research funds. And there is good news. The rule’s sunshine provisions have not, as was feared, been “gutted.” Grant recipients will have to make their investigators’ financial conflicts of interest publicly accessible. While an institution will not have to post the details of each conflict on its website, as the proposed regulations would have required, an institution that declines to do so must instead provide the information in writing to anyone who asks for it. Academics, advocates, federal and state prosecutors, other regulators, and members of the news media will have the access they need. To be sure, prospective patients or research participants will be less likely to come across information about investigator conflicts, but, as Kathleen Boozang explains here, it is far from clear that they would find such information helpful.

Potentially more significant than the weakened sunshine provisions is the final regulations’ divergence from the proposed regulations in the treatment of intellectual property. Under the prior regulations, investigators were required to inform their institutions about relevant intellectual property rights, including copyrights, patents, and royalties in excess of $10,000. The proposed regulations modified the definition to require disclosure of copyrights, patents, and royalties (and agreements to share in royalties) regardless of amount. Under the final regulations, investigators do not need to tell their institutions about their intellectual property rights and interests unless and until they are in “receipt of income related to such rights and interests.”

The preamble to the final regulations is somewhat confusing.  For example, while the final regulations define significant financial interest to exclude intellectual property rights and interests that do not produce income, the agency states in the preamble that it “would expect institutional policies to require disclosure upon the filing of a patent application or the receipt of income related to the intellectual property interest, whichever is earlier.”  The preamble also contradicts itself with regard to the applicability of the rule’s $5,000 threshold, stating at one point that the threshold “applies to licensed intellectual property rights (e.g., patents, copyrights), royalties from such rights, and agreements to share in royalties related to licensed intellectual property rights,” while explaining (correctly, I think) at another point that “the $5,000 threshold would apply to equity interests and ‘payment for services,’ which would include salary but not royalties.”

The NIH’s explanation of its addition of the “receipt of income related to such rights and interests” qualifier to the definition of a significant intellectual property right or interest is especially confusing. The agency writes that its intent was to exclude from the definition “the rare cases when unlicensed intellectual property is held by the Investigator instead of flowing through the Institution,” because “it is difficult to determine the value of such interests.” The agency’s point about valuation may be true, but that is an argument in favor of disclosure, not against it. With regard to equity interests, the final regulation requires investigators to disclose any equity interest in a non-publicly traded entity; the Food and Drug Administration similarly requires disclosure of equity interests “whose value cannot be readily determined through reference to public prices[.]” The FDA also requires disclosure of any “[p]roprietary interest in the tested product,” without regard to value.

When an investigator has a proprietary interest in a product under study, the potential exists for a serious conflict regardless of the interest’s current value or whether it is currently generating income. Seton Hall Law’s Center for Health & Pharmaceutical Law & Policy and others have recommended a near-total ban on serving as an investigator in such cases. Such a ban cannot, of course, be implemented unless investigators are required to tell their institutions about their proprietary interests.


…After the Horse Has Already Left the Barn: FDA Continues to Postpone Conflicts Review Until Studies Are Complete

On Tuesday, the Food and Drug Administration released a draft guidance on financial disclosure by clinical investigators, targeted at the investigators themselves, at drug and device companies and others who sponsor clinical trials, and at the agency staff who review the disclosures.  In the draft guidance, which updates an earlier one, the FDA briefly reviews the financial disclosure regulations, which have not changed, and then provides heavily revised and expanded answers to frequently asked questions.

The draft guidance is a response to a January 2009 report by the Department of Health and Human Services’ Office of Inspector General (OIG), which recommended that the FDA (1) “ensure that sponsors submit complete financial information for all clinical investigators[,]” (2) “ensure that reviewers consistently review financial information and take action in response to disclosed financial interests[,]” and (3) “require that sponsors submit financial information for clinical investigators as part of the pretrial application process.” The draft guidance addresses the first two recommendations but, unfortunately, the FDA has still not taken action on the third.

The draft guidance responds to the OIG’s first recommendation in a number of ways, including in its response to the question “What does the FDA mean by due diligence?” which has grown from three sentences in the earlier guidance to four paragraphs in this one.  The draft guidance sets forth in detail what those applying for marketing approval must do to obtain financial information from every investigator who worked on every clinical trial submitted in support of the application.  For example, when an applicant is missing an investigator’s financial information because it cannot find him or her, it must try to locate the investigator by making at least two phone calls, sending at least two certified letters, and requesting new contact information from the investigator’s previous institutions.  From there, the search might progress to contacting professional associations and conducting internet searches.  The draft guidance’s recommendations, if followed, should drastically reduce the number of applications that rely on the due diligence exemption to excuse missing financial information.

With regard to the OIG’s second recommendation, the draft guidance adds and answers the following question: “What will FDA’s reviewers consider when evaluating the financial disclosure information?”  In its answer, the FDA explains that “outcome payments (that is, payment that is dependent on the outcome of the study) elicit the highest concern, followed by proprietary interests (such as patents, royalties, etc.); but these are rarely seen.”  More typical are equity interests and significant payments of other sorts, in which case the agency takes into consideration the amount and nature of the payment as well as other factors such as the total number of investigators and subjects in the study, whether and how the study is blinded, controlled, and randomized, and whether the study endpoints were objective or subjective.  While the agency elsewhere rejects the idea that the financial disclosure requirements be waived for “efficacy studies that include large numbers of investigators and multiple sites[,]” it would appear to agree that the likelihood of a single investigator biasing such a study’s results is low.

The FDA has not taken action on the OIG’s third recommendation: that investigators’ financial information be submitted to the agency as part of the investigational new drug (IND) applications and investigational device exemption (IDE) applications that are filed before studies in humans are initiated. The draft guidance does, however, exhort sponsors to consult the FDA early and often to minimize potential bias, explaining that “[b]y collecting the information prior to the study start, the sponsor will be aware of any potential problems, can consult with the agency early on, and can take steps to minimize any possibility for bias.”

When sponsors do choose “to consult the FDA early,” the draft guidance provides that agency staff should “focus on the protection of research subjects and the minimization of bias from all sources.” The suggestion that agency staff play a role in protecting research subjects is interesting. It is not mentioned in the regulations or anywhere else in the draft guidance, and it is only possible where sponsors voluntarily seek the FDA’s input. By the time an applicant is required to turn over investigators’ financial information, as part of an application for marketing approval, the horse has left the barn. The clinical trials are complete and it is too late to protect participants’ rights and interests. Bias, by contrast, can sometimes be addressed retroactively. The draft guidance notes that the FDA’s “[r]eviewers might … compare results from more than one investigator, re-analyze the data excluding the investigator’s results, analyz[e] the data in multiple ways, and/or determin[e] if results can be replicated over multiple studies.” Even bias is better dealt with prospectively, though, not least because agency staff are aware of and sensitive to the expense associated with conducting clinical trials and are likely to be highly reluctant to disregard a trial’s results.

Because prospective review of investigators’ financial information would allow the FDA to “focus on the protection of research subjects and the minimization of bias” across the universe of studies, not just those in which the sponsor chooses “to consult the FDA early,” the financial disclosure regulations should be revised per the OIG’s recommendations to require that financial information be submitted as part of the pretrial application process.

Comments on the draft guidance are due by July 25, 2011.


The Limits of Disclosure as a Response to Financial Conflicts of Interest in Clinical Research

February 10, 2011

Seton Hall University School of Law’s Center for Health & Pharmaceutical Law & Policy has issued a White Paper, “The Limits of Disclosure as a Response to Financial Conflicts of Interest in Clinical Research,” in which the Center agrees that public policy should encourage researchers and institutions to make information about their financial relationships with industry available to the public but, contrary to many other commentators’ recommendations, concludes that disclosure of financial information should not routinely be required as part of the informed consent process.

While reiterating the Center’s prior recommendations for direct measures to eliminate, reduce, and manage problematic financial relationships in clinical research, the Center notes that, despite “the importance of transparency as an ethical value, incorporating financial issues into the informed consent process would provide few, if any benefits to research subjects and could in fact cause significant harms.”

The Center notes the problem of “information overload,” as clinical research informed consent documents have already become “long and complex, thereby confusing and overwhelming potential research participants,” and evidence indicates that “participants are often unable to sift through the morass of information to tease out the content they find salient or material.” In addition, qualitative studies have shown that “brief concise statements about financial interest within informed consent documents were rarely understood, and sometimes only served to confuse potential participants.”

The Center concludes that, if a conflict of interest is so serious that its disclosure would lead a reasonable person to refuse to participate in a study, the proper remedy is to eliminate the conflict. It is therefore essential to ensure that information about financial interests is made available to institutional review boards (IRBs) and conflicts of interest committees, so that they can ensure that any problematic conflicts are eliminated before a study begins.

The Center notes that its conclusion that financial conflicts of interest should not be routinely disclosed as part of the informed consent process is not inconsistent with the California Supreme Court’s decision in Moore v. Regents of the University of California.

While Moore creates the possibility that, in the right set of circumstances, a physician’s failure to disclose research-related financial interests could give rise to liability, it does not mean that any and all financial relationships with industry must necessarily be disclosed. Rather, as in any informed consent claim, liability would depend on the plaintiff’s ability to establish the element of causation, i.e., that, if the omitted information had been disclosed, a reasonable person in the plaintiff’s position would not have consented to the procedure. As explained above, under the Center’s proposed framework, any conflict serious enough to affect a reasonable person’s decision about enrollment would already have been eliminated before the research began.

“The Limits of Disclosure as a Response to Financial Conflicts of Interest in Clinical Research” may be found at http://law.shu.edu/HealthLawPublications.

Seton Hall Law School’s Center for Health & Pharmaceutical Law & Policy is a think tank that fosters dialogue, scholarship, and policy solutions to critical issues in health and pharmaceutical law. As part of its mission, it convenes policymakers, consumer advocates, the medical profession, industry, and government in the search for concrete solutions to the ethical, legal, and social questions presented in the health and pharmaceutical arenas. The Center also runs a compliance training program covering the state and federal laws governing the development and marketing of drugs and medical devices.


Human Farming & the Limits of Medical Research

A Museum of Modern Art exhibit by Michael Burton once proposed that human beings themselves would be the soil for a “future farm”:

Future Farm predicts that the emerging pharmaceutical research in harvesting adult stem cells from fat tissues and its convergence with future nanotechnologies, will bring with it scenarios that reconsider the body as income. We live in a world where industries exist to offer financial rewards for those willing to sell a kidney or produce hair to beautify others. Industries have grown to facilitate transplant tourism as a result of the success of contemporary surgery. And scientific and technological advances continue to bring new possibilities for the practice of farming the body.

This may seem like an overly dramatic or even science-fictionalized description of desperation due to poverty and larger economic trends. But the global economic race to the bottom has now so influenced medical research that Burton’s dark vision is coming closer to realization.

A recent article by Bartlett & Steele and a book by Carl Elliott describe the rise of “contract research organizations” that organize the initial phases of drug trials. Bartlett and Steele choose a provocative metaphor to describe the trend:

To have an effective regulatory system you need a clear chain of command—you need to know who is responsible to whom, all the way up and down the line. There is no effective chain of command in modern American drug testing. Around the time that drugmakers began shifting clinical trials abroad, in the 1990s, they also began to contract out all phases of development and testing, putting them in the hands of for-profit companies.

It used to be that clinical trials were done mostly by academic researchers in universities and teaching hospitals, a system that, however imperfect, generally entailed certain minimum standards. The free market has changed all that. Today it is mainly independent contractors who recruit potential patients both in the U.S. and—increasingly—overseas. They devise the rules for the clinical trials, conduct the trials themselves, prepare reports on the results, ghostwrite technical articles for medical journals, and create promotional campaigns. The people doing the work on the front lines are not independent scientists. They are wage-earning technicians who are paid to gather a certain number of human beings; sometimes sequester and feed them; administer certain chemical inputs; and collect samples of urine and blood at regular intervals. The work looks like agribusiness, not research.

Because of the deference shown to drug companies by the F.D.A.—and also by Congress, which has failed to impose any meaningful regulation—there is no mandatory public record of the results of drug trials conducted in foreign countries. Nor is there any mandatory public oversight of ongoing trials.

Therefore, it is up to journalists like Bartlett & Steele to uncover problems. And they are legion:

The Argentinean province of Santiago del Estero, with a population of nearly a million, is one of the country’s poorest. In 2008 seven babies participating in drug testing in the province suffered what the U.S. clinical-trials community refers to as “an adverse event”: they died. . . . In New Delhi, 49 babies died at the All India Institute of Medical Sciences while taking part in clinical trials over a 30-month period. . . . In 2007, residents of a homeless shelter in Grudziadz, Poland, received as little as $2 to take part in a flu-vaccine experiment. The subjects thought they were getting a regular flu shot. They were not. At least 20 of them died.

Bartlett and Steele also discuss problems in research in the US. Exploitation probably should not be a surprise in a country where unpaid prison labor appears to be a strategy to boost productivity. US companies are also driving the “initial stages of distributed human computing that can be directed at mental tasks the way that surplus remote server rackspace or Web hosting can be purchased to accommodate sudden spikes in Internet traffic.” (Such “human intelligence tasks” can be purchased for as little as a penny each on Amazon’s Mechanical Turk.) But the slow infiltration of less developed countries’ standards into US drug testing should be a concern for the FDA.

The system also appears to give drug companies wide latitude to manipulate results, leading to the rise of “rescue countries” that are particularly prone to produce positive results:

One big factor in the shift of clinical trials to foreign countries is a loophole in F.D.A. regulations: if studies in the United States suggest that a drug has no benefit, trials from abroad can often be used in their stead to secure F.D.A. approval. There’s even a term for countries that have shown themselves to be especially amenable when drug companies need positive data fast: they’re called “rescue countries.” Rescue countries came to the aid of Ketek, the first of a new generation of widely heralded antibiotics to treat respiratory-tract infections. . . In 2004—on April Fools’ Day, as it happens—the F.D.A. certified Ketek as safe and effective. The F.D.A.’s decision was based heavily on the results of studies in Hungary, Morocco, Tunisia, and Turkey. The approval came less than one month after a researcher in the United States was sentenced to 57 months in prison for falsifying her own Ketek data.

Massive global inequalities render populations around the world vulnerable to exploitative testing conditions.

Carl Elliott’s book White Coat, Black Hat covers similar terrain, as well as the conflicts of interest and other issues we’ve addressed at Seton Hall’s health law center. His review of recent books on medical research described a “mild torture economy.” His piece “Guinea Pigging” suggests that “rescue counties” in the US may complement the “rescue countries” of Bartlett and Steele:

This unit was in a university hospital, not a corporate lab, and the staff had a casual attitude toward regulations and procedures. “The Animal House of research units” is what [one research subject] calls it. . . . Although study guidelines called for stringent dietary restrictions, the subjects got so hungry that one of them picked the lock on the food closet. “We got giant boxes of cookies and ran into the lounge and put them in the couch,” Rockwell says. “This one guy was putting them in the ceiling tiles.” Rockwell has little confidence in the data that the study produced. “The most integral part of the study was the diet restriction,” he says, “and we were just gorging ourselves at 2 A.M. on Cheez Doodles.”

Elliott’s litany of poorly controlled or ramshackle studies gives us one more item to add to Dr. John Ioannidis’s many reasons for doubting medical research:

Ioannidis [has] laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right. . . .

When a five-year study of 10,000 people finds that those who take more vitamin X are less likely to get cancer Y, you’d think you have pretty good reason to take more vitamin X . . . But these studies often sharply conflict with one another. Studies have gone back and forth on the cancer-preventing powers of vitamins A, D, and E; on the heart-health benefits of eating fat and carbs; and even on the question of whether being overweight is more likely to extend or shorten your life. How should we choose among these dueling, high-profile nutritional findings? Ioannidis suggests a simple approach: ignore them all.

For starters, he explains, the odds are that in any large database of many nutritional and health factors, there will be a few apparent connections that are in fact merely flukes, not real health effects—it’s a bit like combing through long, random strings of letters and claiming there’s an important message in any words that happen to turn up. But even if a study managed to highlight a genuine health connection to some nutrient, you’re unlikely to benefit much from taking more of it, because we consume thousands of nutrients that act together as a sort of network, and changing intake of just one of them is bound to cause ripples throughout the network that are far too complex for these studies to detect, and that may be as likely to harm you as help you. . . .[S]tudies rarely go on long enough to track the decades-long course of disease and ultimately death. Instead, they track easily measurable health “markers” such as cholesterol levels, blood pressure, and blood-sugar levels, and meta-experts have shown that changes in these markers often don’t correlate as well with long-term health as we have been led to believe. . . .

And these problems are aside from ubiquitous measurement errors (for example, people habitually misreport their diets in studies), routine misanalysis (researchers rely on complex software capable of juggling results in ways they don’t always understand), and the less common, but serious, problem of outright fraud (which has been revealed, in confidential surveys, to be much more widespread than scientists like to acknowledge). . . .If a study somehow avoids every one of these problems and finds a real connection to long-term changes in health, you’re still not guaranteed to benefit, because studies report average results that typically represent a vast range of individual outcomes. Should you be among the lucky minority that stands to benefit, don’t expect a noticeable improvement in your health, because studies usually detect only modest effects that merely tend to whittle your chances of succumbing to a particular disease from small to somewhat smaller. “The odds that anything useful will survive from any of these studies are poor,” says Ioannidis—dismissing in a breath a good chunk of the research into which we sink about $100 billion a year in the United States alone.
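
The quoted argument can be made a bit more concrete with a back-of-the-envelope calculation. The sketch below follows the positive-predictive-value framework from Ioannidis’s 2005 PLoS Medicine paper, “Why Most Published Research Findings Are False”; it is an illustration only, and the parameter values are assumptions chosen for the example, not figures drawn from the article or the studies discussed above.

```python
# Illustrative sketch of Ioannidis's positive-predictive-value (PPV) argument.
# All parameter values below are assumptions chosen for illustration.

def ppv(prior_odds: float, alpha: float = 0.05, power: float = 0.8,
        bias: float = 0.0) -> float:
    """Probability that a statistically significant finding is actually true.

    prior_odds: R, the ratio of true to false relationships among those tested
    alpha:      type I error rate (chance of a false positive)
    power:      1 - beta, the chance of detecting a true relationship
    bias:       fraction of analyses reported as "findings" only because of
                bias (flexible analysis, selective reporting, and the like)
    """
    r, u, beta = prior_odds, bias, 1.0 - power
    true_positives = (1.0 - beta) * r + u * beta * r
    false_positives = alpha + u * (1.0 - alpha)
    return true_positives / (true_positives + false_positives)

if __name__ == "__main__":
    # Exploratory research: 1 true relationship per 10 tested, modest bias.
    print(f"Exploratory field, 20% bias: PPV = {ppv(0.1, bias=0.2):.2f}")
    # Even a fairly plausible hypothesis suffers when bias is heavy.
    print(f"Plausible hypothesis, 50% bias: PPV = {ppv(0.5, bias=0.5):.2f}")
```

With these assumed numbers, the first scenario yields a PPV of roughly 0.26; that is, most “significant” findings would be wrong, which is precisely the point the quoted passage makes about modest bias and imperfect methods.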

To summarize: Ioannidis casts some doubt on even the best of studies, and Elliott, Bartlett, and Steele show that bad studies may be far more common than we suspect. It’s a troubling set of observations for all concerned. We should at the very least insist on much more systematic monitoring of global drug trials.

