
Why single medical articles should not be trusted – Nov 2010

Lies, Damned Lies, and Medical Science – Nov 2010

The Atlantic, Nov 2010 – a long and very interesting article
Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong. So why are doctors—to a striking extent—still drawing upon misinformation in their everyday practice? Dr. John Ioannidis has spent his career challenging his peers by exposing their bad science.

These clips are only about 10% of the article. They are followed by two recent abstracts, and three of Ioannidis's papers are attached at the bottom of this page.


Clip – – It didn’t turn out that way. In poring over medical journals, he was struck by how many findings of all types were refuted by later findings. Of course, medical-science “never minds” are hardly secret.

Clip – – Peer-reviewed studies have come to opposite conclusions on whether using cell phones can cause brain cancer, whether sleeping more than eight hours a night is healthful or dangerous, whether taking aspirin every day is more likely to save your life or cut it short, and whether routine angioplasty works better than pills to unclog heart arteries.

Clip – – And before long he discovered that the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals.

Clip – – Perhaps only a minority of researchers were succumbing to this bias, but their distorted findings were having an outsize effect on published research. To get funding and tenured positions, and often merely to stay afloat, researchers have to get their work published in well-regarded journals, where rejection rates can climb above 90 percent. Not surprisingly, the studies that tend to make the grade are those with eye-catching findings.

Clip – – Imagine, though, that five different research teams test an interesting theory that’s making the rounds, and four of the groups correctly prove the idea false, while the one less cautious group incorrectly “proves” it true through some combination of error, fluke, and clever selection of data. Guess whose findings your doctor ends up reading about in the journal, and you end up hearing about on the evening news? Researchers can sometimes win attention by refuting a prominent finding, which can help to at least raise doubts about results, but in general it is far more rewarding to add a new insight or exciting-sounding twist to existing research than to retest its basic premises—after all, simply re-proving someone else’s results is unlikely to get you published, and attempting to undermine the work of respected colleagues can have ugly professional repercussions.

Clip – – He chose to publish one paper, fittingly, in the online journal PLoS Medicine, which is committed to running any methodologically sound article without regard to how “interesting” the results may be. In the paper, Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right. His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted:
80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do
25 percent of supposedly gold-standard randomized trials, . . .
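The arithmetic behind those two numbers can be sketched with the positive-predictive-value formula from the 2005 PLoS Medicine paper the clip describes ("Why Most Published Research Findings Are False"). This is a minimal sketch; the parameter values below are illustrative assumptions, not figures taken from the paper:

# Positive predictive value of a claimed research finding, following
# the formula in Ioannidis (2005). Parameter values are assumptions
# chosen to mimic the two scenarios quoted above.

def ppv(R, alpha=0.05, beta=0.20, u=0.0):
    """Probability that a claimed positive finding is actually true.

    R     -- odds that a tested relationship is true (true:false ratio)
    alpha -- type I error rate (significance threshold)
    beta  -- type II error rate (1 - statistical power)
    u     -- bias: fraction of would-be negative analyses that get
             reported as positive anyway
    """
    true_positives = (1 - beta) * R + u * beta * R
    false_positives = alpha + u * (1 - alpha)
    return true_positives / (true_positives + false_positives)

# Exploratory, non-randomized setting: long-shot hypotheses (1 true per
# 10 tested), modest power, noticeable bias.
print(round(ppv(R=0.1, beta=0.4, u=0.2), 2))   # ~0.22, i.e. ~78% of claims wrong

# Well-powered randomized trial of a plausible hypothesis, little bias.
print(round(ppv(R=0.5, beta=0.2, u=0.05), 2))  # ~0.81, i.e. ~19% of claims wrong

Under these assumed inputs the formula roughly reproduces the wrongness rates quoted above; small shifts in prior plausibility (R) and bias (u) swing the result dramatically.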

Clip – – He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to control blood pressure and prevent heart attacks and strokes. Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. That article was published in the Journal of the American Medical Association.

That JAMA article is attached to this page.


Clip – – “Even when the evidence shows that a particular research idea is wrong, if you have thousands of scientists who have invested their careers in it, they’ll continue to publish papers on it,” he says. “It’s like an epidemic, in the sense that they’re infected with these wrong ideas, and they’re spreading it to other researchers through journals.”

Clip – – But even for medicine’s most influential studies, the evidence sometimes remains surprisingly narrow. Of those 45 super-cited studies that Ioannidis focused on, 11 had never been retested. Perhaps worse, Ioannidis found that even when a research error is outed, it typically persists for years or even decades. He looked at three prominent health studies from the 1980s and 1990s that were each later soundly refuted, and discovered that researchers continued to cite the original results as correct more often than as flawed—in one case for at least 12 years after the results were discredited.

Clip – – But as long as careers remain contingent on producing a stream of research that’s dressed up to seem more right than it is, scientists will keep delivering exactly that.
David H. Freedman is the author of Wrong: Why Experts Keep Failing Us—And How to Know When Not to Trust Them. He has been an Atlantic contributor since 1998.
– – – – – – – – – – – – – – – –

Science mapping analysis characterizes 235 biases in biomedical research.

J Clin Epidemiol. 2010 Nov;63(11):1205-15. Epub 2010 Apr 18.
Chavalarias D, Ioannidis JP.

Centre de Recherche en Epistémologie Appliquée, École Polytechnique-CNRS, 32 Boulevard Victor, Paris, France.

OBJECTIVE: Many different types of bias have been described. Some biases may tend to coexist or be associated with specific research settings, fields, and types of studies. We aimed to map systematically the terminology of bias across biomedical research.

STUDY DESIGN AND SETTING: We used advanced text-mining and clustering techniques to evaluate 17,265,924 items from PubMed (1958-2008). We considered 235 bias terms and 103 other terms that appear commonly in articles dealing with bias.

RESULTS: Forty bias terms were used in the title or abstract of more than 100 articles each. Pseudo-inclusion clustering identified 252 clusters of terms. The clusters were organized into macroscopic maps that cover a continuum of research fields. The resulting maps highlight which types of biases tend to co-occur and may need to be considered together and what biases are commonly encountered and discussed in specific fields. Most of the common bias terms have had continuous use over time since their introduction, and some (in particular confounding, selection bias, response bias, and publication bias) show increased usage through time.

CONCLUSION: This systematic mapping offers a dynamic classification of biases in biomedical investigation and related fields and can offer insights into the multifaceted aspects of bias. PMID: 20400265
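The paper's pseudo-inclusion clustering is beyond the scope of the abstract, but the underlying idea of mapping which bias terms tend to co-occur can be illustrated with a toy co-occurrence count. This sketch is not the authors' algorithm, and the "abstracts" below are invented stand-ins for PubMed records:

from collections import Counter
from itertools import combinations

# A few of the 235 bias terms the paper considers.
BIAS_TERMS = ["selection bias", "publication bias", "confounding", "response bias"]

# Hypothetical stand-ins for PubMed titles/abstracts.
abstracts = [
    "We adjusted for confounding and assessed selection bias in the cohort.",
    "Funnel plots suggested publication bias; confounding was also examined.",
    "Response bias and selection bias limited this survey's generalizability.",
]

# Count how often each pair of bias terms appears in the same record;
# pairs that co-occur frequently would land in the same cluster on a map.
co_occurrence = Counter()
for text in abstracts:
    present = [term for term in BIAS_TERMS if term in text.lower()]
    for pair in combinations(sorted(present), 2):
        co_occurrence[pair] += 1

for pair, count in co_occurrence.most_common():
    print(pair, count)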

The use of older studies in meta-analyses of medical interventions: a survey.

Open Med. 2009 May 26;3(2):e62-8.
Patsopoulos NA, Ioannidis JP.

BACKGROUND: Evidence for medical interventions sometimes derives from data that are no longer up to date. These data can influence the outcomes of meta-analyses, yet do not always reflect current clinical practice. We examined the age of the data used in meta-analyses contained within systematic reviews of medical interventions, and investigated whether authors consider the age of these data in their interpretations.

METHODS: From Issue 4, 2005, of the Cochrane Database of Systematic Reviews we randomly selected 10% of systematic reviews containing at least 1 meta-analysis. From this sample we extracted 1 meta-analysis per primary outcome. For each included study, we calculated the number of years between its publication and 2005 (the year the systematic review was published), as well as the number of years between its publication and the year of the last literature search conducted for the review. We assessed whether authors discussed the implications of including less recent data, and, for systematic reviews containing meta-analyses of studies published before 1996, we calculated whether excluding the findings of those studies changed the significance of the outcomes. We repeated these calculations and assessments for 22 systematic reviews containing meta-analyses published in 6 high-impact general medical journals in 2005.

RESULTS: For 157 meta-analyses (n = 1149 trials) published in 2005, the median year of the most recent literature search was 2003 (interquartile range [IQR] 2002-04). Two-thirds of these meta-analyses (103/157, 66%) involved no trials published in the preceding 5 years (2001-05). Forty-seven meta-analyses (30%) included no trials published in the preceding 10 years (1996-2005). In another 16 (10%), the statistical significance of the outcomes would have been different had the studies been limited to those published between 1996 and 2005, although in some cases this change in significance would have been due to loss of power. Only 12 (8%) of the meta-analyses discussed the potential implications of including older studies. Among the 22 meta-analyses published in high-impact general medical journals, 2 included no studies published in the 5 years prior to the reference year (2005), and 18 included at least 1 study published before 1996. Only 4 meta-analyses discussed the implications of including older studies.

INTERPRETATION: In most systematic reviews containing meta-analyses of evidence for health care interventions, very recent studies are rare. Researchers who conduct systematic reviews with meta-analyses, and clinicians who read the outcomes of these studies, should be made aware of the potential implications of including less recent data. PMID: 19946395. Full text available online.
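A minimal sketch of the two age calculations the survey describes, using invented trial years for one hypothetical meta-analysis (the real analysis extracted these from Cochrane reviews):

# Years between each included trial's publication and (a) the review
# year, (b) the year of the last literature search. Trial years below
# are invented for illustration.

review_year = 2005
search_year = 2003  # the median search year reported in the survey

trial_years = [1987, 1992, 1995, 1999, 2001]  # hypothetical included trials

years_before_review = [review_year - y for y in trial_years]
years_before_search = [search_year - y for y in trial_years]

print("years before review:", years_before_review)   # [18, 13, 10, 6, 4]
print("years before search:", years_before_search)   # [16, 11, 8, 4, 2]

# The survey's recency questions: does the meta-analysis include any
# trial from the preceding 5 years, or any from the preceding 10?
print("trial from 2001-2005:", any(y >= 2001 for y in trial_years))  # True
print("trial from 1996-2005:", any(y >= 1996 for y in trial_years))  # True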

Attached files

ID  | Name                                                                          | Uploaded by | Date               | Size      | Downloads
236 | Ioannidis JAMA.pdf                                                            | admin       | 14 Oct, 2010 02:09 | 213.91 Kb | 534
235 | Ioannidis - why current publication practices may distort science - 2008.PDF | admin       | 14 Oct, 2010 02:09 | 71.46 Kb  | 598
234 | Ioannidis 50 year fate - 2010.pdf                                             | admin       | 14 Oct, 2010 02:08 | 89.33 Kb  | 423