When you get down to it, science, particularly in the clinical realm, is something of a numbers game. An experiment or study’s weight depends greatly on its size (how many patients took part, how many times the experiment was repeated, etc.). For any number of reasons, though, researchers may only be able to bring a few people into a study and collect limited data, restricting both the answers it can provide and the impact of those answers on the field. Such has been the case with autism, for example, where studies tend to be small and patient populations haven’t always been well defined.
But what if one could compare apples to oranges – or, at least, Golden Delicious to Cortlands – by creating one large “uberstudy,” merging the results of many small studies in ways that would allow comparisons among them to generate some level of consensus about a treatment or discovery?
This is where meta-analyses come in, like the three recently published in Pediatrics on different treatments for autism spectrum disorders (ASDs). “Meta-analysis lets us synthesize research that’s already been done and make it stronger by allowing separate studies to reinforce each other,” says Al Ozonoff, the director of the Biostatistics Core in Children’s Clinical Research Program. “If you look across the spectrum of trials on a particular subject or treatment, you can sometimes see overlaps and opportunities to get the benefits and strengths of running large studies by putting many smaller studies together as if they were one big study. The big study itself never happens, but we can imagine as if it took place at all of the different sites where the original small studies occurred.”
However, it’s not as simple as collecting a bunch of studies together and pooling their results. “If you think about all the different parameters that go into a clinical trial, for instance, it’s virtually impossible to match two trials on every single aspect,” says Ozonoff. “The question becomes: Can we in any reasonable way combine the results from these studies, under what circumstances does it make sense to put them together, and when should we say, ‘No, we need to look at these separately?’”
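The pooling Ozonoff describes can be sketched with a toy example. One common method, fixed-effect inverse-variance weighting (used here purely for illustration, not necessarily the method in the Pediatrics analyses), weights each study's effect estimate by the inverse of its variance, so larger, more precise studies count for more in the combined result. All numbers below are invented.

```python
import math

def pooled_effect(effects, std_errors):
    """Fixed-effect (inverse-variance) pooling of study estimates.

    Each study is weighted by 1/SE^2, so bigger, more precise
    studies pull the combined estimate toward their result.
    """
    weights = [1.0 / se**2 for se in std_errors]
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, effects)) / total
    combined_se = math.sqrt(1.0 / total)  # SE of the pooled estimate
    return combined, combined_se

# Three hypothetical small trials of the same treatment:
# effect sizes (say, mean differences) and their standard errors.
effects = [0.30, 0.10, 0.25]
std_errors = [0.15, 0.20, 0.10]

est, se = pooled_effect(effects, std_errors)
print(f"combined effect = {est:.3f}, SE = {se:.3f}")
```

Note that the pooled standard error is smaller than any single study's, which is the whole appeal: the imagined "big study" is more precise than its parts. This simple version also assumes the studies are estimating the same underlying effect; when they plausibly differ (Ozonoff's "should we look at these separately?" question), analysts typically switch to random-effects models that build that between-study variation into the uncertainty.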
When done properly, meta-analyses can be valuable tools for clinicians. “They help me to interpret research better by pointing out things like, ‘This study was randomized, this one was not, this one was big and this one was small,’ and to determine whether, on balance, a particular treatment is a good idea or not,” says Claire McCarthy, a primary care physician at Children’s and also the hospital’s medical communication editor. “They also introduce me to studies I hadn’t heard about, while putting them into context.”
“And very often what I find out from a meta-analysis is that no one knows the answer to a question,” she notes, “which can be valuable in and of itself.”
Some meta-analyses, like a 2008 one on artificial blood products, prove to be better than others, such as a 2007 meta-study on the safety of rosiglitazone (Avandia®, GlaxoSmithKline); the results and methodology of the latter were later called into question by a subsequent meta-analysis of the same data. “Researchers have to pay close attention to detail in grouping and presenting studies appropriately in their meta-analyses,” McCarthy cautions. “Some are truly just synthesizing and providing the data, while others can come off as having an agenda.”
Ozonoff feels particular care is needed when designing and reading meta-studies that touch on polarizing topics, such as the three ASDs analyses or a 2009 meta-study by the U.S. Preventive Services Task Force on mammograms.
“With such topics, some readers will always feel that the authors cherry-picked the studies and question why some studies were included in the analysis and others weren’t,” he says. “It takes some judgment and expertise to decide how similar trials have to be in order to include them, and how to account for the differences when talking about your results and quantifying the uncertainty in those results.
“Which makes it important for the authors to make it clear why they structured their analysis the way they did,” he concludes. “In addition, both authors and readers need to remember that meta-analyses aren’t a panacea or a way to answer all of our unanswered questions. There is always uncertainty that needs to be acknowledged.”