Yes, you read that right. This so-called shill for the pharmaceutical industry is calling them as I see them – most published medical research is a failure – but stay tuned for the full story.
Now by failure, I mean that claims supported by only one or two published articles rarely lead to a clinically significant product (such as a pharmaceutical). Of course, I don’t mean that the research is fraudulent, although some of it is, especially in low-level journals frequented by pseudoscience pushers.
And I don’t mean it’s bad science, although there’s evidence of that, which I’ll discuss below.
And I don’t mean that there’s some grand conspiracy between Big Pharma and everyone else (again, no evidence to support that nonsense), although there is some evidence that research sponsored by Big Pharma is poorly done.
So what do I mean? Results from a lot of medical research that get splashed across the news rarely, and I mean rarely, end up having any clinical utility. Rarely, but not never.
This does not mean that medical procedures, pharmaceuticals, and devices that have been vetted through extensive research, studies that repeat and confirm the original data and form the basis of a scientific consensus, are built on bad research. Almost everything that passes review by the FDA and comparable regulatory agencies in other countries meets high standards for risk-benefit analysis.
Finally, arriving at a scientific consensus is a brutal, time-consuming process. It means that the theory or idea has been tested many times, and the analysis and data are solid. So even though “most” research ends in failure, that’s because science is harsh to research that can’t be repeated or was badly designed.
The best research isn’t a failure, even if it finds negative results. And the best ideas in medicine, let’s say vaccines, have been so thoroughly vetted that the consensus is nearly unassailable. Though people try with their poorly designed, unrepeatable research.
Medical research to clinical applicability
In 2003, researchers reviewed 101 studies published in respected, high-impact-factor scientific and medical journals between 1979 and 1983 that claimed a new therapy or medical technology was clinically promising. What was shocking was that they found only five of those 101 made it to market within a decade of publication.
Why did they choose a decade? That’s a good period of time for further research, clinical trials, and finally FDA approval.
So that’s just 5%. But what’s even more striking is that only one, ACE inhibitors, which are used to treat hypertension, was still in use at the time the review was published (2003). Of course, it’s still used today as one of the basic tools in treating high blood pressure.
Think about that. There were 101 breathless claims of some miraculous medical breakthrough (if the PR departments of the various universities and startup companies did their jobs), and only one is on the market today.
I often make a pointed joke about research into cancer: “We have cured cancer in mice hundreds of times over the past decade.” And this sets aside the fact that there will never be “one cure to cure them all,” given that there are over 200 cancers, each so different that a single drug for all of them is impossible to imagine.
But when you read an article claiming that eating blueberries will prevent breast cancer based on one mouse study, you need to consider it very skeptically. The gold standard of medical research is a double-blind clinical trial, repeated several times, and then rolled up into a systematic review, the basis for a scientific consensus. At that point, if the evidence shows a significant clinical benefit, I’ll be eating blueberries 24/7.
The marijuana activists love to tell you that cannabis can treat cancer. Yet the only evidence of its ability to treat cancer comes from one poorly designed clinical trial for one specific cancer, and a few animal studies. Moreover, the blood level of THC that might attack some cancers would be toxic to the patient, and no one could smoke the hundreds of joints per day necessary to sustain that level until the cancer was “cured.”
Matthew Herper, writing for Forbes, spoke with a pharmaceutical industry consultant who estimated that there were over 200 failures of cancer treatment drugs over the past few years. Failure in cancer treatment is much more common than success. And this is why accepting the hype of a treatment that is successful in mice and rats, or in just a handful of real patients, is going to lead you down the wrong path of cancer treatment.
There are so many drugs that appear to work in one person, but that could just be random chance. Sometimes people get better from cancers for unknown reasons (and don’t take that to mean it was obviously the blueberries; sometimes the body gets it right, for reasons we will understand eventually). So if you base your hype on ONE study with ONE patient, you will be misled.
But as Herper’s article says,
This is truly a wonderful technological advance in treating cancers. But we’ve heard this before. Remember, there were 101 medical advances that were hyped just like this, and only one ended up having any clinical relevance today.
One more point. Over 90% of cancer drugs that enter Phase 1 trials never get FDA approval to be marketed (pdf). That more or less debunks the myth that the FDA is in the pocket of Big Pharma. Or someone needs to up the payments, because a 10% success rate indicates that the FDA is pretty tough on these applications.
I’ve dealt with this subject in much more detail. The most important thing to note is that the further up the hierarchy of scientific evidence a claim sits, the more likely the evidence will support (or refute) a clinical claim.
At the bottom of the hierarchy are animal and cell culture studies. At the top are systematic reviews or, even better, meta-analyses. These large studies roll up data from dozens or hundreds of other studies to determine what all of the research says.
Too often, university public relations departments or alternative “medicine” websites will tout an animal study. But it may be 10-20 years before that study is shown to have any clinical usefulness. If you’re trying to convince someone that vitamin D reverses diabetic neuropathy, then you need to bring clinical trials and meta-analyses. And right now, you don’t have that.
There’s that old saying popularized by Carl Sagan: “extraordinary claims demand extraordinary evidence.” Extraordinary evidence could be meta-analyses, and until that is available, an extraordinary claim is just a claim. Or a belief. Or just a myth.
What does this all mean, the TL;DR version?
- Be skeptical of any medical research until it meets the standard of double-blind clinical trials, is repeated by other researchers, and is supported by a meta-analysis of all the results.
- If the whole basis of your medical claim is a poorly designed study, relies on mice, or is published in a bad journal, it has little validity for claims about its clinical utility.
- The FDA and Big Pharma are not in any kind of conspiracy, given that most drugs fail to get approved. This is one of the most laughable conspiracy theories ever.
- Cherry picking research to support your a priori beliefs? That’s really not going to work.
Editor’s note: This article was originally published in March 2015. It has been revised and updated to include more comprehensive information, to improve readability and to add current research.
- Contopoulos-Ioannidis DG, Ntzani E, Ioannidis JP. Translation of highly promising basic science research into clinical applications. Am J Med. 2003 Apr 15;114(6):477-84. PubMed PMID: 12731504.
- Kanaya N, Adams L, Takasaki A, Chen S. Whole blueberry powder inhibits metastasis of triple negative breast cancer in a xenograft mouse model through modulation of inflammatory cytokines. Nutr Cancer. 2014;66(2):242-8. doi: 10.1080/01635581.2014.863366. Epub 2013 Dec 23. PubMed PMID: 24364759.
- Schwartz LM, Woloshin S, Baczek L. Media coverage of scientific meetings: too much, too soon? JAMA. 2002 Jun 5;287(21):2859-63. PubMed PMID: 12038934.