I am a scientific skeptic. That means I pursue published scientific evidence to support or refute a scientific or medical principle. I am not a cynic, though cynicism is often conflated with skepticism. I don't start with an opinion about these ideas. Scientific skepticism depends on the quality and quantity of evidence that supports a scientific idea, and examining the hierarchy of scientific evidence can help in deciding what is good data and what is bad — what can be used to form a conclusion, and what is useless.
That's how science is done. I use the hierarchy of scientific evidence to weigh the quality, along with the quantity, of evidence in reaching a conclusion. I am generally offended by those who push pseudoscience — they tend to hunt for evidence that supports their predetermined beliefs. That's not science; that's the opposite of good science.
Unfortunately, today's world of instant news, with memes and 140-character analyses flying across social media, can be overwhelming. Sometimes we create an internal false balance, assuming that clickbait headlines on one side are somehow equivalent to those on another. As a result, we think there's a scientific debate when there isn't one.
I attempt to write detailed, thoughtful, and nuanced articles about scientific ideas. I know they can be complex and long-winded, but science is hard. It's difficult. Sorry about that — if it were easy, everyone on the internet would be doing science. Unfortunately, too many people writing on the internet think they are talking science but fail to differentiate between good and bad evidence.
But there is a way to make this easier. Not easy, just easier. This is my guide for amateurs (and, if I do a good job, professionals) to evaluating the quality of scientific research across the internet.