One published paper never a scientific truth makes [Or How to Stop Being a Science Fanboy.]

Mike Rightmire
5 min read · Mar 8, 2022


I get asked many, many scientific questions on Quora. Many of these questions come from legitimate science enthusiasts trying to understand what they’re reading. Others come from science deniers trying to peddle their particular brand of personal grievance. One of the biggest challenges, for both groups, is grasping what a scientific paper actually means. So let’s take a look at that.

The biggest challenge for non-professionals seeking information directly from scientific studies is not only in understanding the specifics of the studies, but in interpreting the value of the study itself. Here, I want to offer some advice about understanding studies in general, and determining which studies have value.

TL;DR Remember, science is ALWAYS done by consensus, by its very definition. Meaning, a scientific fact is not a scientific fact until a majority of the scientific community has been convinced. If you can’t convince your professional peers…it’s probably nonsense.

1. First, remember that a single study never (EVER) a scientific fact makes. Many people have the misunderstanding that if a study is published, well — then it’s a fact. But of course, this isn’t true.

Science is done by consensus. Part of this consensus is that no scientific hypothesis, or even observation, becomes “an established fact” until the study has been both repeated by others and its findings put to use in the field (i.e., in a clinical environment, or in other studies that depend on the first study being correct).

Think of it this way: scientific facts are not “yes it is” versus “no it isn’t.” Scientific facts live on a scale from “really unlikely” to “really strongly likely.”

When a single study is first peer reviewed and published, the information in it becomes an “interesting possibility” at best. Then, after a few more studies agree with it, it becomes a “strong probably.” After a few more years, and more studies, and observation in the field, it finally becomes a “very likely to be true.”

2. Not all studies are created equal.

Again, many people hold the misunderstanding that if a study is “peer reviewed” and published, then it “must be a good study.” Of course, this is never true anywhere in the real world. It doesn’t matter whether we’re discussing scientific studies, certified plumbers, or licensed drivers: as we all know, the quality of anything “certified” ranges from genius to barely competent.

3. Understanding what a paper actually says requires an in-depth, intimate knowledge of the subject, even amongst scientists. Sometimes gauging whether a paper’s conclusions are valid depends on understanding details that would only be obvious to someone working in the field, and even to them may not be that obvious. I don’t mean knowledge like, “What is affinity between molecules?” I mean things like, “What is the effect of pseudouridine on mRNA stability, and how will this affect the dissociation constant?”

4. One also has to recognize the difference between “statistical significance” and “clinical significance.” Simply put…

Statistical significance asks: what are the odds this <observed event> happened by random chance, versus actually because of the hypothesis being tested? This is usually measured by a p-value or a confidence interval. For example, a p-value of 0.05 means there’s a 5% chance that the observation was just random chance.

Clinical significance asks: does it matter? As an example (from the link above), consider a trial to determine whether a drug for advanced pancreatic cancer increased life expectancy. The test showed that it did (P = 0.038, meaning there is only a 3.8% chance that the observed increase occurred by chance). However, the increase in life expectancy was only 10 days out of roughly 6 months. So while the drug trial was statistically significant (the increased life expectancy was not random chance), the increase was so small as to be practically (clinically) meaningless.

The point is that many, many people (including science bloggers) don’t understand this difference, which is why so many assume that if a result is “statistically significant,” it must be important. This is not a safe assumption.
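The arithmetic behind this distinction is easy to demonstrate. Below is a minimal Python sketch; the survival numbers are hypothetical, loosely echoing the pancreatic-cancer example above. With large enough groups, even a 10-day gain in mean survival comes out highly statistically significant, while the standardized effect size stays small.

```python
import math

def two_sample_z_p(mean_a, mean_b, sd, n):
    """Approximate two-sided p-value for a difference in means,
    assuming equal SDs and equal group sizes; the normal approximation
    is fine for large n."""
    se = sd * math.sqrt(2.0 / n)          # standard error of the difference
    z = (mean_b - mean_a) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p

# Hypothetical trial: control survives ~180 days on average,
# treated ~190 days, SD 60 days, 1000 patients per arm.
z, p = two_sample_z_p(180.0, 190.0, 60.0, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p is far below 0.05: "significant"

# Cohen's d: the gain is about a sixth of a standard deviation,
# i.e. statistically real but clinically modest.
d = (190.0 - 180.0) / 60.0
print(f"effect size (Cohen's d) = {d:.2f}")
```

Shrink `n` to 50 per arm and the same 10-day gain stops being statistically significant, which is exactly why sample size, p-value, and effect size have to be read together.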

5. Not all journals are created equal. This may be the most important factor…especially for the non-professional who doesn’t work and live in an environment where they enjoy the subtle eye-rolls when “Journal X” is mentioned. Simply stated, some journals are highly respected. Others, not so much.

If you want to gauge the quality of a journal, here are two good sources.

First, check whether the journal appears on a “predatory journal” list. There are many, many journals out there that will publish anything for cash.

Second, if you don’t want to trust the list, there is something called the Journal Impact Factor (see also What is Journal Impact Factor?). This is a measure of how often articles from the journal are cited by professionals. Basically, it is a proxy for how reliable and valuable the worldwide research community considers the journal’s articles to be.

For example, “Nature” — one of the most respected journals in the world — has a 5 year impact of 54. “Current Issues in Molecular Biology” has a 5 year impact of 2.5.

(Impact Factor: 2.081 (2020); 5-Year Impact Factor: 2.561 (2020).)
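The impact factor itself is just a ratio: citations received to a journal’s articles from a recent window, divided by the number of citable articles it published in that window. A minimal sketch, where all the counts are hypothetical and chosen only to land in the ranges mentioned above:

```python
def impact_factor(citations_in_window, articles_in_window):
    """Impact factor: citations this year to a journal's articles from
    the preceding window, divided by citable articles in that window."""
    return citations_in_window / articles_in_window

# Hypothetical counts for illustration only:
print(impact_factor(10800, 200))  # -> 54.0  (a Nature-like journal)
print(impact_factor(500, 200))    # -> 2.5   (a lower-impact journal)
```

Note that because it is an average, a handful of blockbuster papers can drag a journal’s figure up while most of its articles are rarely cited, which is one of the limitations discussed below.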

Certainly, there is industry manipulation here. Journal editors can see the value of artificially driving up this impact by, for example, selecting specifically controversial papers that will be cited a lot. However, and without any real data to back this up (shame on me), one can argue that if all the journals are doing it, then the effect (at least when comparing journal A to journal B) washes out. I.e., even if a journal’s impact is 50% artificially inflated, since they are all artificially inflated, an impact of 1 versus 10 still carries a lot of meaning.

Reference: Grzybowski, A. (2015). “Impact factor — benefits and limitations.” Acta Ophthalmologica (Wiley Online Library).

Written by Mike Rightmire

Computational and molecular biologist. Observative speculator. Generally pointless non-stop thinker.