Anti-vaxxers love to use the Vaccine Adverse Event Reporting System (VAERS) to claim a causal link between vaccines and some (or all) adverse events. They have doubled down on dumpster diving into VAERS during the COVID-19 pandemic, producing outright falsehoods and misinformation about the COVID-19 vaccines.
But VAERS is not the way to determine causation. In fact, it’s not even a good way to determine correlation. At the very best, VAERS contains observational information that functions as a safety signal for the FDA and CDC, which have the resources and the epidemiologists to use other methods to determine whether there is a correlation, and possibly causation, between a vaccine and an adverse event.
It’s ironic that most of the so-called “VAERS data” pushed by anti-vaccine activists are analyzed by amateurs who have never taken an epidemiology or statistics course. Good research into vaccine adverse effects requires much better data than is found in VAERS.
Let’s take a look at VAERS, correlation, and causation. It’s much harder than you think.
What is VAERS?
The Vaccine Adverse Event Reporting System (VAERS) is one of the systems employed by the US Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA) to monitor vaccine safety. VAERS is a post-marketing surveillance program, collecting information about adverse events (including death) that occur after the administration of vaccines to ascertain whether the benefits of any particular vaccine continue to outweigh its risks.
VAERS, the Vaccine Safety Datalink (VSD), and the Clinical Immunization Safety Assessment Network (CISA) are the major tools used by the CDC and FDA to monitor vaccine safety. Unlike VAERS, the VSD and CISA provide full clinical information about each patient, so correlation and causation can be assessed through powerful case-control or cohort analyses of the data.
No analysis of VAERS reports alone, however, can establish any type of causation between a vaccination and a claimed adverse event. Frankly, the system can be gamed by those with nefarious intentions, which further limits the value of the VAERS data.
To be honest, VAERS is a feel-good system for those who think that there’s a link between vaccines and something terrible, but without an active investigation, the data is just above the level of totally meaningless. Most epidemiologists know it is nearly valueless as a database for determining correlation and/or causation. Even the VAERS system itself says that the data cannot be used to distinguish coincidence from true causality.
According to the CDC:
Established in 1990, the Vaccine Adverse Event Reporting System (VAERS) is a national early warning system to detect possible safety problems in U.S.-licensed vaccines. VAERS is co-managed by the Centers for Disease Control and Prevention (CDC) and the U.S. Food and Drug Administration (FDA). VAERS accepts and analyzes reports of adverse events (possible side effects) after a person has received a vaccination. Anyone can report an adverse event to VAERS. Healthcare professionals are required to report certain adverse events and vaccine manufacturers are required to report all adverse events that come to their attention.
VAERS is a passive reporting system, meaning it relies on individuals to send in reports of their experiences to CDC and FDA. VAERS is not designed to determine if a vaccine caused a health problem, but is especially useful for detecting unusual or unexpected patterns of adverse event reporting that might indicate a possible safety problem with a vaccine. This way, VAERS can provide CDC and FDA with valuable information that additional work and evaluation is necessary to further assess a possible safety concern.
The VAERS website adds the following information about the database:
VAERS accepts reports of adverse events and reactions that occur following vaccination. Healthcare providers, vaccine manufacturers, and the public can submit reports to the system. While very important in monitoring vaccine safety, VAERS reports alone cannot be used to determine if a vaccine caused or contributed to an adverse event or illness. The reports may contain information that is incomplete, inaccurate, coincidental, or unverifiable. In large part, reports to VAERS are voluntary, which means they are subject to biases. This creates specific limitations on how the data can be used scientifically. Data from VAERS reports should always be interpreted with these limitations in mind.
The strengths of VAERS are that it is national in scope and can quickly provide an early warning of a safety problem with a vaccine. As part of CDC and FDA’s multi-system approach to post-licensure vaccine safety monitoring, VAERS is designed to rapidly detect unusual or unexpected patterns of adverse events, also known as “safety signals.”
If a safety signal is found in VAERS, further studies can be done in safety systems such as the CDC’s Vaccine Safety Datalink (VSD) or the Clinical Immunization Safety Assessment (CISA) project. These systems do not have the same scientific limitations as VAERS, and can better assess health risks and possible connections between adverse events and a vaccine.
In essence, VAERS is a collection of anecdotes with limited value, so it cannot really show correlation or causation. But it is not worthless. It can be used as a safety signal: if researchers observe a large number of reports of something, say myocarditis, then the scientific method should be employed to determine whether the signal is more than coincidence.
Here’s how real researchers use VAERS:
- Observation. Note that an unusual number of reports of a particular adverse event appear after a particular vaccine.
- Hypothesis. Ask the question, “Is the vaccine linked to this particular adverse event?”
- Test the hypothesis. Researchers could use the VSD, which contains full medical histories of patients who received and did not receive a particular vaccine. In other words, it’s real-world data that includes a “control group.” So, researchers could search for all cases of a particular adverse event in the database, and then look at how many were vaccinated or unvaccinated. Or they could take all patients in the database, split them into vaccinated and unvaccinated groups, and see if there’s a difference in the risk of a particular adverse event.
- Publish the results in a peer-reviewed journal. Then we can see the actual data and statistical analyses.
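To make the cohort version of that test concrete, here is a minimal sketch in Python. All counts are invented for illustration, and the function name is mine; these are not real VSD numbers:

```python
# Hypothetical sketch of a VSD-style cohort comparison: risk of an adverse
# event among vaccinated people vs. unvaccinated people.

def relative_risk(events_vax, n_vax, events_unvax, n_unvax):
    """Risk of the adverse event in the vaccinated cohort divided by
    the risk in the unvaccinated cohort."""
    risk_vax = events_vax / n_vax
    risk_unvax = events_unvax / n_unvax
    return risk_vax / risk_unvax

# Invented example: 30 cases among 1,000,000 vaccinated people vs.
# 10 cases among 500,000 unvaccinated people.
rr = relative_risk(30, 1_000_000, 10, 500_000)
print(f"relative risk = {rr:.2f}")  # prints "relative risk = 1.50"
```

A relative risk above 1 in a study like this would then need statistical testing and adjustment for confounders before anyone could claim a real association.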
In fact, this has been done for myocarditis and the COVID-19 vaccines: VAERS data provided the observational safety signal, and the VSD was then used for a powerful, large epidemiological study.
Correlation and causation – Bradford Hill
I’ve written about correlation, causality, and plausibility before, but I’ve never felt that I made the case appropriately. So I started to investigate more about how we determine when a correlation is equivalent to causation, and I saw that some researchers use something called the Bradford Hill criteria.
English statistician Sir Austin Bradford Hill was interested in developing a set of objective criteria that could be used to provide epidemiological evidence of whether a correlation amounts to causation. The criteria serve as a sort of checklist that scientists can use to take data establishing correlation and logically determine whether it supports causality.
He used his criteria to establish that smoking was linked to lung cancer (and other diseases). He essentially went through each point of his criteria to show how smoking and cancer were linked.
The Bradford Hill criteria include the following aspects:
- Strength (effect size) – this is one of the most important of the criteria – the larger the effect from the cause, the higher the probability of a causal link. This doesn’t mean small effects aren’t important; it’s just that fields like science-based medicine favor larger effects. For example, if we say drug A cures the common cold, but the course of the disease is only reduced by ½ day, then it’s hard to tell whether that is a result of randomness in the data, bias in the results, or an actual clinical effect.
- Consistency (reproducibility) – proposed causality needs to be observed in more than just one location. Consistent data published by different researchers in different locations with different population samples strengthen the possibility that there is a link between cause and effect.
- Specificity – causation requires a very specific population with a very specific disease and no other likely explanation. Again, the more specific the association between cause and effect, the greater the possibility of a causal link.
- Temporality – the proposed effect must occur after the cause, and within a plausible time period for a link between cause and effect.
- Biological gradient – there must be some sort of dose-response effect; that is, greater exposure to the cause should generally lead to a higher incidence of the effect. (In some cases, lower exposure leads to a higher incidence, so we would observe an inverse effect.)
- Biological plausibility – as we will discuss next, there must be a biologically plausible mechanism between cause and effect. Of course, it is possible that we lack knowledge of all possible mechanisms, but inventing one out of thin air is not going to help the “cause.” Even then, the potential new mechanism must fall within the basic principles of biology, chemistry, and physics. Biological plausibility is probably the most important factor in this list.
- Coherence – does the proposed cause and effect fit with what we know about the possible adverse effect? If you want to claim that the HPV vaccine causes autonomic dysfunction, yet our knowledge of what causes it has nothing to do with the HPV vaccine, then it’s going to take a stack of evidence to establish why this might be.
- Experiment – does a group that lacks exposure to the cause exhibit a different outcome? For example, large case-control studies of vaccines examine the risk of a particular adverse event compared to the risk in an unvaccinated group. Without this kind of comparison, it’s hard to establish correlation, let alone causation.
Bradford Hill developed this checklist over 50 years ago, so you might expect that the list has evolved. Some researchers have added one or two items, like examining confounding factors and experimental bias. Those are usually evaluated in the original epidemiological research that establishes correlation, but if not, they become an unofficial part of the checklist.
These criteria should be used as a checklist. The more points you can check off, the closer you come to supporting a claim of a causal link between a cause and an observed effect.
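As a toy illustration of the checklist idea, the criteria above can be tallied in a few lines of Python. The True/False judgments below are invented placeholders, not a real evaluation of any vaccine or adverse event:

```python
# Toy sketch of using the Bradford Hill criteria as a checklist.
# The criterion names come from the list above; the example judgments
# are invented placeholders, not a real causal analysis.

BRADFORD_HILL = [
    "strength", "consistency", "specificity", "temporality",
    "biological gradient", "plausibility", "coherence", "experiment",
]

def checklist_score(evaluations):
    """Count how many criteria a proposed causal link satisfies."""
    return sum(1 for criterion in BRADFORD_HILL if evaluations.get(criterion, False))

# Hypothetical claim that satisfies temporality but nothing else:
example = {"temporality": True, "plausibility": False, "strength": False}
print(f"{checklist_score(example)} of {len(BRADFORD_HILL)} criteria satisfied")
```

A score of 1 out of 8 is roughly where most VAERS-only claims land: the adverse event happened after the vaccine, and nothing else checks out.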
VAERS and causation — not the way to do it
So, it’s clear that showing causation between a vaccine and an adverse effect is much more than looking up the number of adverse events in the VAERS database and dividing that by the total number of vaccines given. That kind of analysis tells us nearly nothing. The data are suspect, and there are no medical records to verify that the reported adverse event even occurred as described. There’s nothing.
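For what it’s worth, the naive arithmetic takes one line. The sketch below uses invented numbers purely to show what the calculation lacks:

```python
# The naive "VAERS dumpster dive": raw reports divided by doses given.
# Both numbers are invented; the point is what the calculation is missing.
reports_of_event = 500          # raw VAERS reports, unverified
doses_given = 10_000_000
naive_rate = reports_of_event / doses_given
print(f"naive reporting rate: {naive_rate:.6f}")  # prints 0.000050

# Missing: the background rate of the event WITHOUT vaccination, any
# verification of the reports, and any accounting for who chose to report.
# Without those, the number above supports no conclusion at all.
```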
This is why I keep stating that doing real science is hard work. If it were as easy as plowing into the VAERS database, finding a number, and dividing it by another number, everyone would do it. Oh, wait, everyone thinks they can do it.
The proper way to determine whether there is causality between an adverse event and a vaccine is to use the VSD database. But to use it, you must get approval, so that you don’t abuse it by, for example, contacting patients (which happened when an anti-vaxxer gained access to the database). Once you have access, you can do cohort or case-control studies followed by powerful statistical analyses to determine whether there is an increased risk of a particular adverse event after vaccination. But this takes a thorough education and real experience, not one of those two-hour Google University degrees.
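Here is a minimal sketch of the case-control arithmetic such a study rests on, assuming invented counts and the standard Woolf approximation for the confidence interval (the function is mine, not VSD code):

```python
import math

# Case-control sketch: a 2x2 table of cases and controls by vaccination
# status. All counts are invented for illustration.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI (Woolf method) for the table:
                vaccinated  unvaccinated
       cases        a            b
       controls     c            d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts: 40 vaccinated cases, 60 unvaccinated cases,
# 400 vaccinated controls, 600 unvaccinated controls.
or_, lo, hi = odds_ratio_ci(40, 60, 400, 600)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

In this invented example the odds ratio is exactly 1.00 and the confidence interval straddles 1, which is what “no detectable increased risk” looks like in the actual statistics, as opposed to in a VAERS screenshot.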
No matter what you want to believe about any medical quackery, no matter how hard you want to convince yourself it is real, and no matter how much you want everyone to believe your anecdotes, finding a potential correlation, let alone causation, is very difficult. And it requires a logical process, not a claim that it must be so because of anecdotes or belief.
There is a logical process required to get from correlation to causality. Those who attempt to shortcut that process to reach a pre-ordained conclusion have established neither correlation nor causality.
- Fedak KM, Bernal A, Capshaw ZA, Gross S. Applying the Bradford Hill criteria in the 21st century: how data integration has changed causal inference in molecular epidemiology. Emerg Themes Epidemiol. 2015 Sep 30;12:14. doi: 10.1186/s12982-015-0037-4. eCollection 2015. PubMed PMID: 26425136; PubMed Central PMCID: PMC4589117.