Scientific skepticism is the noble pursuit and accumulation of evidence, based on the scientific method, used to question and doubt claims and assertions. A scientific skeptic holds the accumulation of evidence as fundamentally critical to the examination of claims. Moreover, a true skeptic does not accept all evidence as equal in quality; in fact, a skeptic gives more weight to evidence derived from the scientific method and less weight to evidence that was poorly obtained and poorly scrutinized.
In the world of real scientific skepticism, evidence published in a peer-reviewed, high impact factor journal far outweighs evidence taken from other sources. Peer review is the evaluation of a scientific work by one or more people of similar competence (usually in the same field) to the producers of the work. Peer review is usually anonymous, in that the authors generally don’t know who the reviewers are (and, in double-blind review, the reviewers don’t know the authors either, although identities may not be difficult to guess in an esoteric field of science). Peer review constitutes a form of self-policing of science by qualified members of a profession within the field of research. This system of criticism and review is what makes many journals, and the articles published within them, powerful pieces of evidence in science.
In addition to peer review, there are other ways to ascertain the quality of research in a particular journal. Articles in high quality journals are cited more often, because high quality journals attract the best scientific articles, and because those journals employ a more meticulous and exhaustive peer review.
Although somewhat controversial, journals are ranked using a metric called “impact factor” that expresses numerically how many times an average article in a particular journal is cited by other articles in an index of journals in the same general field. Technically, a journal’s impact factor for a given year is the number of citations received that year by articles the journal published in the previous two years, divided by the number of articles it published in those two years. The impact factor can range from 0 (no one ever cites the journal) upward, but only a handful of journals get above the 50-70 range. One of the highest impact factor journals is the Annual Review of Immunology, which traditionally has an impact factor in the 50s; this indicates that an average article published in that journal is cited by other medical articles about 50 times (an outstanding number).
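To make the definition concrete, here is a minimal sketch in Python of how a two-year impact factor is calculated. The citation and article counts are entirely made-up numbers for a hypothetical journal, not real data:

```python
# Hypothetical 2013 impact factor for an imaginary journal.
# Two-year impact factor = citations received in year Y to items published
# in years Y-1 and Y-2, divided by the number of items published in those years.

citations_in_2013_to_2011_items = 5200   # made-up number
citations_in_2013_to_2012_items = 4800   # made-up number
items_published_2011 = 100               # made-up number
items_published_2012 = 100               # made-up number

impact_factor_2013 = (
    (citations_in_2013_to_2011_items + citations_in_2013_to_2012_items)
    / (items_published_2011 + items_published_2012)
)
print(f"2013 impact factor: {impact_factor_2013:.1f}")  # 50.0
```

An impact factor of 50 in this sketch means the average article this journal published in 2011-2012 was cited 50 times during 2013, which is the same ballpark as the Annual Review of Immunology figure mentioned above.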
Anyone who publishes within the scientific community probably knows which journals are the highest rated within their own discipline and within general science. To get tenure at an academic institution, the quality of the journals in which one’s papers are published is weighed in determining an applicant’s overall publishing success. Even those of us who don’t publish scientific articles are well aware of which journals are better. A good scientific skeptic, even one without expertise in the areas covered by more focused journals, such as the aforementioned Annual Review of Immunology, can quickly and easily determine the quality of the journal before analyzing the article itself.
Over the past few years, there has been a proliferation of open-access journals, which are scholarly journals available online to the reader “without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself.” Some of these publishers also produce dead-tree versions of their open access scientific journals. The open access venue for disseminating scholarly work in the sciences does not imply lower standards for peer review. For example, one publisher of open access journals, BMC, publishes many journals that are at the top of their disciplines in terms of quality. The relatively new PLOS ONE is growing in both the quality of articles published and respect within the scientific field. Both publishers have relatively high impact factors for many, if not most, of their individual journals.
The problem is that there is a continuum of journal quality, from impressive all the way down to abysmal, almost to the point of fraudulent. John Bohannon, in the 4 October 2013 issue of Science, wrote a semi-humorous article about the insufficient and laughable level of peer review at some of these journals. Essentially, Bohannon invented a researcher, gave him an appointment at an imaginary institution, and created a fake paper with contrived data to submit to over 300 open-access journals:
…good news arrived in the inbox of Ocorrafoo Cobange, a biologist at the Wassee Institute of Medicine in Asmara. It was the official letter of acceptance for a paper he had submitted 2 months earlier to the Journal of Natural Pharmaceuticals, describing the anticancer properties of a chemical that Cobange had extracted from a lichen.
In fact, it should have been promptly rejected. Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper’s shortcomings immediately. Its experiments are so hopelessly flawed that the results are meaningless.
I know because I wrote the paper. Ocorrafoo Cobange does not exist, nor does the Wassee Institute of Medicine. Over the past 10 months, I have submitted 304 versions of the wonder drug paper to open-access journals. More than half of the journals accepted the paper, failing to notice its fatal flaws. Beyond that headline result, the data from this sting operation reveal the contours of an emerging Wild West in academic publishing.
What Bohannon uncovered makes me wonder about the whole “industry” of open-access journals (outside of the established, high impact factor ones). The editorial team of the Journal of Natural Pharmaceuticals requested only minor and superficial changes to the paper before accepting it less than two months later.
Bohannon stated that he expected a “credible peer review” at the Journal of Natural Pharmaceuticals. The journal claims that it is “a peer reviewed journal aiming to communicate high quality research articles, short communications, and reviews in the field of natural products with desired pharmacological activities.” Apparently, the editors are pharmacology or pharmaceutical science professors at universities around the world. The journal is just one of 270 journals owned by Medknow, a Mumbai (India) company that is probably the largest open-access publisher. Its business model is based on charging authors to publish. Medknow was acquired by Wolters Kluwer, a large Netherlands-based publisher.
Worse yet, Bohannon discovered that acceptance of articles was the rule, not the exception. A similar bogus paper was accepted by open-access journals run by big names of the science publishing industry, Sage and Elsevier. Furthermore, that same paper was accepted by journals published by prestigious academic institutions, such as Kobe University in Japan. Interestingly, one of the best open access journals, PLOS ONE, did question the fictitious author about some ethical issues, and it eventually rejected the paper.
However, I would be concerned that the journal didn’t determine that the author was a fake. People who do online dating always Google each other as a preliminary background check; I would expect nothing less from an open access journal.
Here are the actual data for the “sting” that Bohannon and Science perpetrated on these journals (a quick arithmetic tally follows the lists):
- They sent out various versions of the paper to 304 journals.
- Of those, 157 of the journals accepted the paper.
- 98 rejected it.
- Of the remaining 49 journals, 29 appeared to have folded or abandoned their websites, and 20 stated that they were still reviewing it. These two groups were excluded from the analysis.
Of the 255 submissions that underwent the full editorial process, to either acceptance or rejection:
- 60% of the decisions happened without any sign of peer review. If an article was rejected without review, it just means that the journal’s editorial staff spotted the paper’s poor quality before sending it out for peer review; that’s a good thing. The problem is that papers were also accepted without even a cursory level of peer review.
- Of the 106 journals that did perform a review, 70% accepted the paper for publication.
- Only 36 of the 304 submissions generated review comments pointing out any of the wide variety of the fake paper’s scientific problems. Even so, 16 of those 36 papers were accepted despite the bad reviews.
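To keep the numbers straight, here is a minimal tally of the sting’s outcomes as reported above, written as a small Python sketch. Every figure comes directly from the article; nothing here is new data:

```python
# Tally of the Bohannon sting, using only the figures quoted above.

submissions = 304        # versions of the fake paper sent out

accepted = 157           # journals that accepted the paper
rejected = 98            # journals that rejected it
defunct = 29             # journals that folded or abandoned their websites
still_reviewing = 20     # journals still reviewing at press time

# Defunct and still-reviewing journals were excluded from the analysis.
completed = accepted + rejected
assert completed == 255
assert submissions == completed + defunct + still_reviewing

print(f"Acceptance rate among completed decisions: {accepted / completed:.0%}")  # 62%

# Per the article: 106 journals performed some review, and 70% of those
# still accepted the paper; only 36 reviews flagged the scientific problems,
# and 16 of those 36 papers were accepted anyway.
reviewed = 106
print(f"Accepted after review: about {round(0.70 * reviewed)}")  # ~74
```

The arithmetic makes the headline result plain: even counting only journals that finished their editorial process, more than 60% accepted a paper that any competent reviewer should have rejected.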
So what can we conclude from all of this information? That some open access journals are just awful. And it goes back to my point that even though metrics like the number of times cited or the impact factor may be imperfect, they can be indicative of a high (or low) quality journal. A lot of the articles cherry picked, usually by antivaccine or anti-GMO activists, are published in these low quality journals, because the better journals reject them over the quality of the data, the experiments, or the analysis.
Not every article published in a high quality journal is worthy of praise, and the peer-review process at the highest levels of journals is not perfect (and it can be slow, one of the reasons that open-access journals have flourished). But we have clear evidence here that the standards of peer review at many open access journals are disgraceful. They demand money for publishing, and obviously their business model is to get as many papers published as possible, thereby maximizing their profits, irrespective of whether those papers deserve to be published.
I always check the quality of the journal for any article I use to make my case. If I’m writing about vaccines, I know that there are hundreds of articles that will provide evidence for any point I’m making, almost all published in highly respected journals. Even if one were published in a low quality journal, I know I can easily replace it with a better one.
The cherry-pickers in the anti-science world don’t understand this. They lust after anything that supports their point of view, regardless of the quality of that evidence. That’s why they are not really skeptics; they are pseudoskeptics chasing the illusion of pseudoscience, cherry picking articles that fit their beliefs. This is the opposite of real science, which demands that the evidence lead to the conclusion.
Key citation:
- Bohannon J. Who’s Afraid of Peer Review? Science. 2013 Oct 4;342(6154):60-65. PubMed PMID: 24092725. Impact factor: 31.027.