Last week I mentioned an odd term, p-value, which is commonly used in deciding whether your results are worth mentioning to your colleagues and the public. Of course it has a strict and narrow meaning, and of course that meaning is abused and misinterpreted in discussions about science.
Let's say you're performing an experiment: a pregnancy test. The box claims 95% accuracy, and the fine print reveals that this claim refers to a p-value of 5%. You take the test and... it's positive! So are you really pregnant, or not?
Unfortunately your urgent question hasn't yet been answered. A p-value compares the hypothesis you're testing ("I'm pregnant") to what's called a null hypothesis (in this case, "no baby"), and a p-value of 5% says that if you were not pregnant, there would only be a 5% chance that the test would return a positive result.
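To make that definition concrete, here's a quick simulation. This is only a sketch: the 5% false-positive rate is the one number taken from the test's fine print, while the sample size and random seed are arbitrary.

```python
import random

random.seed(42)

# The test's p-value: P(positive result | not pregnant)
FALSE_POSITIVE_RATE = 0.05

# Simulate 100,000 people who are definitely NOT pregnant taking the test.
trials = 100_000
positives = sum(random.random() < FALSE_POSITIVE_RATE for _ in range(trials))

print(f"{positives / trials:.1%} of non-pregnant testers got a positive result")
```

Roughly 5% of the decidedly-not-pregnant testers come back positive anyway. That, and only that, is what the p-value promises.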
You might be tempted to flip this around and state that there's a 95% chance you're actually pregnant, but you would be committing an egregious statistical sin, the same sin committed, wittingly or unwittingly, by science communicators and sometimes by scientists themselves.
Here's the problem: what if you're male? The test can still come back positive, because it isn't answering the question "Am I pregnant?" but rather "If I'm not pregnant, what are the chances of the test returning a positive result?" That chance is low - 5% - but not zero. Thus males can still get a positive result despite never being pregnant.
The p-value by itself was only ever intended as a "let's keep digging" guidepost, not a threshold for believability. To answer the question you actually want answered, you have to fold in prior knowledge. A healthy female of reproductive age, armed with a low p-value, can begin to conclude that there might be a baby on the way. A male... not so much. In either case, the p-value alone wasn't enough, and announcements based solely on that number should be viewed with suspicion.
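That "fold in prior knowledge" step is exactly Bayes' theorem. Here's a minimal sketch of the arithmetic: the 5% false-positive rate is the test's number, but the 90% sensitivity and the prior probabilities are made-up illustrative values, not anything from a real test box.

```python
def posterior_pregnant(prior, sensitivity=0.90, false_positive_rate=0.05):
    """P(pregnant | positive test) via Bayes' theorem.

    sensitivity         = P(positive | pregnant)     -- assumed for illustration
    false_positive_rate = P(positive | not pregnant) -- the test's 5% p-value
    """
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A woman with real reason to suspect pregnancy (say, a 30% prior):
print(f"{posterior_pregnant(0.30):.0%}")  # about 89% -- worth acting on

# A male: the prior is (essentially) zero.
print(f"{posterior_pregnant(0.0):.0%}")   # 0% -- the positive test changes nothing
```

The same positive result and the same p-value yield wildly different conclusions depending on the prior, which is the whole point: the p-value alone can't tell you what you want to know.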