If you come across reports of a new advance in science, your first task is to determine if it's based on theory or experiment. If it's theory - say, some interesting computational result or an ingenious extension of known mathematical models - then you can rest easy, knowing that it's probably wrong because that's just the way the game of science is played. If we knew the correct answer ahead of time we wouldn't call it research.
Likewise, if the news story is based purely on observations or measurements, and the same team that did the work is providing an explanation for their own results, then you can rest easy too. A naked observation without any context is just that - a statement about some random thing that nature decided to do today. Each lone observation could have a range of interpretations from mundane to game-changing, and folks naturally lean towards the more interesting possibility, because that's exciting and fun and points to Nobel-prize-land. That interpretation also almost always ends up being less than revolutionary.
The most interesting stories are when theory connects to observations, when there's a strong attempt to refute or bolster some piece of (un)known science. And here the name of the game is error bars. In this game, what you know (the raw value you get) is much less important than how well you know it (the estimate of your uncertainty). It's here that you'll see quotes like "4.1 sigma detection" or "0.005% chance this was a coincidence".
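Quotes like these are usually two sides of the same coin: assuming the measurement errors are Gaussian, a sigma level converts directly into the probability of a chance fluctuation at least that large. A minimal sketch of that conversion (the function name is mine, and the Gaussian assumption is exactly the kind of thing that can quietly fail in real experiments):

```python
from math import erfc, sqrt

def sigma_to_p(sigma, two_tailed=True):
    """Chance of a fluctuation at least `sigma` standard deviations out,
    assuming the errors really are Gaussian."""
    p = erfc(sigma / sqrt(2))  # two-tailed tail probability of a normal distribution
    return p if two_tailed else p / 2

# A "4.1 sigma detection" corresponds to roughly a 0.004% chance of coincidence
print(f"4.1 sigma -> p = {sigma_to_p(4.1):.1e}")
# Physics' conventional "discovery" threshold is far stricter
print(f"5.0 sigma -> p = {sigma_to_p(5.0):.1e}")
```

Note that the whole calculation hinges on the error estimate feeding into it - misjudge your uncertainty and the impressive-sounding sigma level is meaningless.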
Those statements are nice, and also almost always wrong. It takes multiple independent teams replicating the same result, each using their own unique blend of methodology, analysis, and error estimation, before a result is generally accepted. This is an achingly slow and fastidious process, but absolutely crucial to ensuring that advances in understanding are actually advances.
In short: if you see a news article about science, especially if it's on the sensationalist side, keep your guard up and your hopes down.