One of the central tenets of the pro-science world is that “correlation does not imply causation” – but the phrase is frequently misused and abused by many writers. Those pushing a position, like anti-vaccine activism, will often assume that if they can show correlation, they can automatically leap to causation. Thus, in their world, correlation implies causation. Real science, however, demands both evidence of correlation and a separate line of evidence for causation – and only when we have both can we conclude that the correlation reflects causation. It all depends on the quality of your evidence.
Conflating causation and correlation is somewhat different from the logical fallacy of post hoc ergo propter hoc, in which one assumes that because one event followed another, the first event must have caused the second. I’m sure all good luck charms and superstitions, like avoiding walking under a ladder, are rooted in the post hoc fallacy. But if I walk under a ladder, then trip over a black cat, and then crash into a mirror, I don’t immediately blame the initial act of walking under the ladder. I just assume I’m clumsy.
Correlation and causation are a critical part of scientific research. Basically, correlation is the statistical relationship between two sets of data – the closer that relationship, the stronger the correlation. However, without further evidence, correlation may not imply causation, that is, that one set of data has some influence over the other.
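To make “statistical relationship” concrete, here is a minimal sketch of the Pearson correlation coefficient, the most common measure of linear correlation. This is plain Python with no statistics library assumed, and the function name `pearson_r` is my own label, not anything from the studies discussed here.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: +1 is a perfect positive linear relationship,
    0 is no linear relationship, -1 is a perfect negative one."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# A perfectly linear relationship gives r = 1.0 ...
print(pearson_r([1, 2, 3, 4], [3, 5, 7, 9]))  # 1.0
# ... but a high r says nothing about WHY the two series move together.
```

Note that the coefficient only measures how tightly two variables track each other; it is silent on whether either one influences the other.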
Correlation implies causation – an example
Let’s invent a massive study to investigate car accidents after vaccinations. In our imaginary study, we find that the rate of automobile accidents with a child in the back seat after a child is vaccinated is higher than the background rate of automobile accidents with children in the back seat who aren’t vaccinated. Does the vaccination itself cause the higher rate of accidents? Well, I suppose you could make an argument that a post-vaccinated child is still screaming or something, distracting the driver, but that variable could happen with unvaccinated children just screaming because they didn’t get their GMO-free, organic, free-range ice cream cone.
But did the vaccine itself cause the accident? Or is it some other factor? Like the driver being stressed because of going to the pediatrician for the vaccine because she read all that misinformation from the anti-vaccine groupies? Or because her child is a bit fussy after vaccination, because that happens? In other words, we have data, but it really has no meaning without establishing a reasonable level of causality.
So when you read an article on one of the anti-vaccine websites claiming that X number of girls died because of the HPV vaccine, or that the rise in vaccination caused the rise in the autism rate simply because both increased at the same time, one of us (you know, the pro-science skeptics) will immediately proclaim that correlation does not imply causation.
The problem with that proclamation is that it’s too simple. Like everything in science, there is more to understanding the relationship between correlation and causation than simply dismissing it. In fact, we need evidence of both correlation and causation to make any claim either way.
For example, the whole science behind vaccines shows strong correlations throughout the history of vaccines. We know that the smallpox vaccine eradicated smallpox not because correlation alone proved it, but because we had overwhelming correlation along with other types of direct evidence that established causality. And it is this other evidence that ultimately determines whether a correlation reflects causation or, alternatively, has no relationship to it.
When does correlation imply causation?
So how do we know when correlation does not imply causation – and, alternatively, when it does? There are seven additional tests of the correlation data that can be used to determine whether there is also causality.
- The data must be strong. If one observes a correlation between X and Y, causation can only be established if the increase in Y in response to X is statistically substantial. In the early research on the link between smoking and lung cancer, the risk of cancer was found to be 5-10X higher in smokers, a clinically significant increase. If we’re looking at vaccinations, causation can only be shown if there is a substantial increase in a risk factor compared to the general population. The larger the increase in risk relative to the background (unvaccinated) population, the better the data supporting causation.
- The data must be consistent. If one shows data that could imply causation, it must be consistent across a number of studies with different populations (gender, ethnicity, income, age). Again, going back to the early research on smoking and lung cancer, the first two studies examining the link were done separately on two different continents (and, this being the 1940s and 50s, information sharing was limited at best), yet they showed nearly the same results.
- The data must be specific. The data must predict causality very precisely. Again, back to lung cancer and smoking: the data showed that smoking was linked to one type of cancer, in the lungs, at the precise location where smoke enters the body. One cannot show causality with general data, simply because such data is too imprecise.
- The data must be temporal. To show causality, one needs to show that the association grows stronger over time. For example, the longer one smoked, the greater the risk of lung cancer. Alternatively, if the risk of lung cancer declines over time after cessation of smoking, that is also an indicator of causality.
- The data must possess a dose response effect. That is, one can show that as you consume or receive more of X, the specific Y response increases. Back to smoking – the more cigarettes smoked, the higher the risk of lung cancer. The temporal effect, mentioned above, and the dose response effect are often interrelated. So, if we were to examine a specific adverse event attributed to vaccines, the rate of that specific risk would have to increase in a linear fashion with the number of vaccines received.
- The causal effect must be plausible. If we look at smoking again, the early researchers could show a plausible mechanistic link between an inhaled carcinogen and malignant change in the lung’s cells. This is an important facet of determining causality in biological systems. When one argues that GMO foods may cause cancer, how plausible is that? Is there some biologically plausible mechanism between the GMO food and one of the 200 or so different cancers? In fact, there are precious few environmental factors that cause cancers, and those that are known have plausible mechanisms. When someone says that vaccines cause autism, is it even plausible? Is there some mechanism between stimulating the immune system through immunization and causing autism? Plausibility doesn’t mean we take the easy way out and say, “well, just because we don’t know of a mechanism doesn’t mean there isn’t one.” Actually, no, you can’t say that – it’s a logical fallacy. We know a lot about human physiology; it’s not a giant mystery wrapped in an enigma. Human physiology is complex and detailed, so we can judge what is plausible and what isn’t. And what doesn’t exist is a plausible mechanism between vaccination and autism.
- The data must be coherent. Other types of evidence, like experimental results in other models, ought to support the causality. Going back to smoking, the tar from cigarettes was painted on the backs of mice, which induced tumors. Moreover, other evidence emerging in the 1950s and 1960s showed that smoking was associated with increases in cancers of the lip, throat, tongue, and esophagus. So there are separate lines of evidence that point back to the same causal factor – cigarette smoking.
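As a toy illustration of why these extra tests matter, here is a hypothetical simulation – the variable names and noise levels are entirely my own invention – in which a hidden confounder Z drives two variables X and Y that never influence each other. The two series still come out strongly correlated, which is exactly the trap the seven tests are designed to catch.

```python
import random

random.seed(42)  # reproducible toy data

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var_x = sum((a - mx) ** 2 for a in xs)
    var_y = sum((b - my) ** 2 for b in ys)
    return cov / (var_x * var_y) ** 0.5

# Hidden common cause Z drives both X and Y; X and Y never touch each other.
z = [random.gauss(0, 1) for _ in range(10_000)]
x = [zi + random.gauss(0, 0.3) for zi in z]  # X depends only on Z
y = [zi + random.gauss(0, 0.3) for zi in z]  # Y depends only on Z

# X and Y come out strongly correlated (r around 0.9 with these noise
# levels) even though neither has any causal effect on the other.
print(round(pearson_r(x, y), 2))
```

A naive reading of that strong correlation would "prove" X causes Y; only the surrounding evidence – mechanism, dose response, consistency – reveals the confounder.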
Thus, correlation by itself does not imply causation. But when one gathers other evidence – which requires separate studies and analyses – correlation becomes one of the fundamental and robust pieces of evidence establishing causality. So if someone says “correlation implies causation,” or, alternatively, “correlation does not imply causation,” your response should be that it depends on what other evidence supports or dismisses causality. It’s all about the evidence.
As I wrote previously, research isn’t easy. One can’t simply note an observed correlation and then immediately declare that one thing causes the other. One can’t see an increase in the autism rate alongside an increase in the number of vaccinations and then state, after looking at those numbers for an hour, that vaccines cause autism. Not without further, more complex data that supports the hypothesis.
Does one need each of those seven additional data points to show causality? Yes, if you want to make a bold assertion backed by robust evidence. Again, those who try to oversimplify the process are the ones with an agenda. Those who try to make it easy are the ones trying to find data that proves their dogma and beliefs, rather than trying to determine what the data actually shows. The data should drive the conclusion, as opposed to taking the easy course: searching for data to support a preconceived conclusion.
Research is hard work. And if a researcher, or some random person on the internet, wants to claim that correlation implies causation, then they need to provide a lot more high-quality evidence of both. It’s not easy, but it can be done.