Several weeks ago, I wrote an article on how to decipher the science (or pseudoscience) in popular news articles. It discusses how we should be critical, if not skeptical, of what is written in these articles to ascertain what is or is not factually scientific. We also need to judge the quality of the science, from the strongest to the weakest, so that we can gauge its level of authority before we pass it along to others. With social media like Facebook and Twitter, which give us posts that may not exceed a few words, it’s even more imperative that we separate the absurd (bananas kill cancer) from the merely misinterpreted (egg yolks are just as bad as smoking).
Wikipedia is one place that can either be an outstanding resource for science or medicine, or just a horrible mess with citations to pseudoscience purveyors. For example, Wikipedia’s article on Alzheimer’s disease is probably one of the best medical articles on the “encyclopedia”. It is laid out in a logical manner, with an excellent summary, a discussion of causes, pathophysiology, mechanisms, treatments, and other issues. It may not be at the level of a medical review meant for a medical student or researcher, but it would be a very good start for a scientifically inclined college researcher or someone whose family member is afflicted with the disease.
Nearly everything in the article is supported by a recent peer-reviewed journal article. Furthermore, the article does its best to avoid primary sources (ones in which the authors directly participated in the research or documented their personal experiences) in favor of secondary sources (which summarize one or more primary or secondary sources, usually to provide an overview of the current understanding of a medical topic, to make recommendations, or to combine the results of several studies). The reason secondary sources are so valuable is that they combine the works of several authors (and presumably several locations), reducing the biases of any one laboratory or one study. As I’ve said many times, trust your secondary sources over just about anything, and, of course, Cochrane Reviews are nearly the best of the secondary sources.
So when a Wikipedia or news article makes a medical claim based on primary research (or worse yet, primary research on animals or cell cultures), it is a long way from a conclusion about humans. A very long way.
But when you’re reading Wikipedia or science articles, it’s also important to make sure you don’t get fooled by what may appear to be valid science but really isn’t. Many journals and research institutions send out press releases when new articles are published. No one should dispute that these institutions have the right to “market” a new advance in science, but press releases don’t qualify as a reliable source of information.
David Gorski at Science Based Medicine wrote about this problem with press releases in “Related by coincidence only? University and medical journal press releases versus journal articles.” Gorski made a couple of interesting observations:
Specifically, the results support the hypothesis that university press offices are prone to exaggeration, particularly with respect to animal studies and their relevance to human health and disease, although press releases about human studies exaggerated 18% of the time compared to 41% of the time for animal studies. Again, this seems to make intuitive sense, because in order to “sell” animal research results it is necessary to sell its relevance to human disease. Most lay people aren’t that interested in novel and fascinating biological findings in basic science that can’t be readily translated into humans; so it’s not surprising that university press offices might stretch a bit to draw relevance where there is little or none.
This is very important, and something I see over and over again. There is a tendency to over-dramatize results from animal studies, even though only a small percentage of compounds tested in animals or cell cultures ever make it into human clinical trials (let alone win FDA approval). The National Cancer Institute has screened over 400,000 compounds for treating cancer; maybe 20,000-30,000 have even made it to early clinical trials, and of those, just a handful are used in modern medicine. You have to be extremely skeptical of any article that sources a press release, which might overstate the results (or even one that refers directly to such a primary study).
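To put that attrition in perspective, here’s a back-of-the-envelope sketch using the NCI figures above; the final “handful” is my illustrative assumption (I’ve used 10), not a figure from any source:

```python
# Rough drug-development attrition, using the NCI screening numbers
# quoted above. The "approved" count is an illustrative assumption.

screened = 400_000
early_trials = 25_000   # midpoint of the 20,000-30,000 quoted above
approved = 10           # assumed "handful" actually used in medicine

print(f"reach early trials: {early_trials / screened:.1%}")   # ~6%
print(f"end up in medicine: {approved / screened:.4%}")       # ~0.0025%
```

In other words, even under generous assumptions, well under one in ten thousand screened compounds ever becomes a treatment, which is why a breathless press release about a mouse study deserves so little weight.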
Gorski continues with a more important issue:
In human studies, the problem appears to be different. There’s another saying in medicine that statistical significance doesn’t necessarily mean that a finding will be clinically significant. In other words, we find small differences in treatment effect or associations between various biomarkers and various diseases that are statistically significant all the time. However, they are often too small to be clinically significant. Is, for example, an allele whose presence means a risk of a certain condition that is increased by 5% clinically significant? It might be if the risk in the population is less than 5%, but if the risk in the population is 50%, much less so. We ask this question all the time in oncology when considering whether or not a “positive” finding in a clinical trial of adjuvant chemotherapy is clinically relevant. For example, if chemotherapy increases the five year survival by 2% in a tumor that has a high likelihood of survival after surgery, [is that] clinically relevant? Or is an elevated lab value that is associated with a 5% increase in the risk of a condition clinically relevant? Yes, it’s a bit of a value judgment, but small benefits that are statistically significant aren’t always clinically relevant.
Now, as Gorski states, this involves a bit of guesswork and instinct, but probably none of us should accept results as meaningful if they are tiny, even if statistically significant. I see this a lot, especially with alternative medicine studies that try to “prove” some benefit beyond placebo. But if a new therapy, or “eat XYZ and it will reduce ABC by 5%”, produces results that small, they may be no different from random noise.
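To make the baseline-risk point concrete, here is a minimal sketch of the arithmetic Gorski describes; the 4% and 50% baselines are illustrative assumptions, not figures from any study:

```python
# A minimal sketch of why the same absolute risk increase matters more
# against a small baseline risk than a large one. The baselines below
# are illustrative assumptions, not data from a study.

def relative_risk(baseline: float, absolute_increase: float) -> float:
    """Relative risk after adding an absolute increase to the baseline."""
    return (baseline + absolute_increase) / baseline

for baseline in (0.04, 0.50):  # 4% vs. 50% baseline population risk
    rr = relative_risk(baseline, 0.05)  # the same 5-point increase
    print(f"baseline {baseline:.0%}: risk rises to {baseline + 0.05:.0%}, "
          f"relative risk {rr:.2f}")
```

Against a 4% baseline, a 5-point increase more than doubles the risk (relative risk 2.25); against a 50% baseline, the same increase is a modest bump (relative risk 1.10). By the same arithmetic, a 2% absolute improvement in five-year survival means treating roughly 1/0.02 = 50 patients for one additional survivor, which is exactly the kind of value judgment Gorski is talking about.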
Just as bad as press releases are meeting abstracts (which, of course, are promoted by research institutions’ press releases). Abstracts present incomplete, unpublished data and undergo varying levels of review; they are often effectively unreviewed, self-published sources, and their initial conclusions may change dramatically if and when the data are finally ready for publication. According to a 2002 paper in JAMA, a total of 252 news stories reported on 147 research abstracts. In the 3 years after the meetings, 50% of the abstracts were published in high-impact journals, 25% in low-impact journals, and 25% remained unpublished. Interestingly, the 39 abstracts that received front-page coverage in newspapers had a publication rate almost identical to the overall rate. The authors concluded that “abstracts at scientific meetings receive substantial attention in the high-profile media. A substantial number of the studies remain unpublished, precluding evaluation in the scientific community.”
It’s really not that hard to determine what’s good science and what’s bad. Not all science is equal. If you want to peer into the future, sure, an abstract or a primary animal or cell-culture study may tell you where clinical research is heading, but that’s 10-20 years down the road. If you want to confirm that vaccines do not cause autism, it’s easy: there are secondary, peer-reviewed studies published in high-impact journals for your review.