Most clinical trials are too small, often underpowered
The past decade has seen an explosion in the number of clinical trials; more than 10,000 new trials are now registered each year. Although the quality of clinical trials is improving somewhat, most are still small and single-center, and a large proportion do not adhere to reporting requirements, raising serious questions about what we are really accomplishing with all the effort and expense, say authors in JAMA.
ClinicalTrials.gov is the central Internet database of clinical trials registered in the U.S. Its origins date back to the HIV/AIDS epidemic of the late 1980s, when activists called for more efficient mechanisms to support clinical trials of HIV treatments. ClinicalTrials.gov has expanded and gained greater authority over the years, especially after the FDA Amendments Act of 2007 made registration mandatory for many trials. One of the registry's key benefits is helping to identify and track negative studies that would otherwise go unreported by their sponsors -- allowing physicians and policymakers to gain a more complete picture of the body of research surrounding a new drug or other therapy.
Because it includes almost all U.S. trials, ClinicalTrials.gov also provides an opportunity to analyze and evaluate the current state of clinical research in the U.S., which is what Robert Califf and friends from Duke, NIH, and the FDA did for the May 2 JAMA.
What They Did
The authors analyzed a data set of 96,000 clinical trials registered on ClinicalTrials.gov between 2007 and 2010.
What They Found
The number of trials is rising dramatically: 29,000 were registered between 2004 and 2007, and 41,000 between 2007 and 2010.
Most clinical trials are small and single-center: ~60% enrolled 100 or fewer subjects; 66% were single-center.
50% of those testing interventions (of any kind) enrolled fewer than 70 subjects.
Almost half were funded by non-industry, non-NIH sources (47%).
Industry was the lead sponsor in only ~35% of trials, but these were larger, accounting for ~60% of subjects enrolled in trials.
Quality is improving overall, with fewer missing data elements in registry entries.
What It Means
Smaller, single-center studies are, in a word, inferior, and taken individually, each is highly unlikely to produce meaningful conclusions. So why are "we" (meaning you all) investing most of our time and effort on them?
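A back-of-envelope power calculation illustrates the point. The sketch below (in Python, using the standard normal approximation for a two-arm comparison; the effect size and sample sizes are hypothetical, chosen only to mirror the trial sizes reported above) shows how little chance a 70-subject trial has of detecting a modest treatment effect:

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sample(effect_size, n_per_arm):
    """Approximate power of a two-sided, two-sample z-test (alpha = 0.05)
    for a standardized mean difference (Cohen's d), equal-size arms."""
    z_crit = 1.959964  # two-sided critical value for alpha = 0.05
    noncentrality = effect_size * sqrt(n_per_arm / 2.0)
    return normal_cdf(noncentrality - z_crit)

# A 70-subject trial (35 per arm) chasing a modest effect (d = 0.3):
print(f"n=70 total:  power = {power_two_sample(0.3, 35):.2f}")   # ~0.24
# The same question asked with 400 subjects (200 per arm):
print(f"n=400 total: power = {power_two_sample(0.3, 200):.2f}")  # ~0.85
```

In other words, under these illustrative assumptions a trial of the median reported size would miss a real, modest effect roughly three times out of four, which is exactly the "underpowered and therefore arguably unethical" problem the editorialists raise.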
Two theories, not mutually exclusive, from your friendly neighborhood gadfly:
Some are pilot studies, or hypothesis-generating, that could lead to larger, more conclusive trials in the future.
Academics' brains and creativity are constrained by the outdated, pre-Internet, feudal system of competing university fiefdoms, which discourages meaningful collaboration on a scale large enough to consistently, convincingly produce findings that might improve people's health or medical care. Organizing and conducting large multi-center studies also takes a very long time, poorly calibrated to the milestones set by promotion committees. Daunted by these factors, wanting to make an impact, and needing to publish regularly, academics take the logical approach of conducting and publishing small studies at the one center where they have influence: their own. Since publications (and the funding they help engender) are the currency of the academic realm, the fact that little actual knowledge is being produced is beside the point, from the standpoint of "success" as defined within those institutions. The result is what Califf et al. have discovered here.
In an accompanying editorial, Dickersin and Rennie lament the current state of affairs (of small, puny trials being the standard), and especially criticize those investigators not complying with the reporting requirements for ClinicalTrials.gov:
[M]ost trials registered with ClinicalTrials.gov are small (<100 participants), single-center trials, and almost half of them were initially designed to be larger ... [H]ow often were they stopped early and why? ... What proportion of these trials were underpowered to begin with; in other words, were participants enrolled in methodologically flawed and therefore unethical studies?
ClinicalTrials.gov is coming up short ... not enough information is being required and collected, and even when investigators are asked for information, it is not necessarily provided. As a consequence, users of trial registries do not know whether the information provided through ClinicalTrials.gov is valid or up-to-date.
Talk amongst yourselves.
Califf RM et al. Characteristics of Clinical Trials Registered in ClinicalTrials.gov, 2007-2010. JAMA 2012;307:1861-1864.