
What happens when someone publishes a breakthrough that other scientists can't reproduce?

Faux-science press releases hyping the next could-be breakthrough litter the Internet. Every other day, it seems, there's another entrepreneur blowing bubbles of venture capital out of the techno-optimist fog blanketing Silicon Valley. There's an endless stream of claims, and so much of it turns out to be vaporware, based on experimental results that got way overblown once they entered the media echo chamber. All too often, the idea is great, but the science just isn't there to back it up, so the product never appears.

Nature News did a survey of its readers, including more than 1,500 researchers, to find out what they thought about the problem of replicating others' work and getting discordant results. When it comes to scientific reproducibility, the scientists agree: Houston, we have a problem.

The survey reveals sometimes-contradictory attitudes among scientists. More than half of those surveyed agree that there is a significant "crisis" of reproducibility, less than a third think that failure to reproduce published results means that the published result is false, and most say that they still trust the published literature. But there's a wide range of problems that contribute to irreproducible research, from unclear or undisclosed methods to cherry-picking data, bad luck, or outright fraud.

And the problems vary by field. The laws of physics appear to vary the least, since respondent physicists consider the corpus in their field to be very reliable. In squishier fields like medicine, though, literally not a single respondent agreed with the idea that the whole body of published medical research is trustworthy. The upshot is that doctors don't believe the crap you see on Dr. Oz, and neither should you.

Sorting out actual discoveries from false positives can be really hard. When an experiment can't be reproduced, why can't it be reproduced? How much of the difference boils down to a hypothesis actually being false, as opposed to different humans in different labs doing their slightly different interpretations of a procedure on different equipment?

Perhaps surprisingly, the overwhelming majority of respondents to the Nature survey cited a better understanding of statistics as the number one thing that would enable better reproducibility in experiments. What this means is that even the scientists reporting the data don't always have a very deep understanding of the math they're using to analyze that data. It's easy to intentionally mislead with statistics. It's even easier to accidentally mislead with statistics when you're trying to explain something you don't understand all that well yourself.
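To make that concrete, here's a minimal simulation (our illustration, not anything from the survey) of one of the most common statistical traps: measuring many outcomes and reporting only the most flattering p-value. Both groups are drawn from the same distribution, so every "significant" result below is a false positive; the group sizes and the 20-outcomes-per-study figure are arbitrary assumptions chosen for the sketch.

```python
# Sketch of accidental p-hacking: both groups come from the SAME
# distribution, so any "significant" difference is a false positive.
# Testing many outcomes and keeping only the best p-value makes one
# nearly certain to appear.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments = 1000   # simulated studies
n_outcomes = 20        # outcomes "peeked at" per study (assumption)
n_subjects = 30        # subjects per group (assumption)

false_positives = 0
for _ in range(n_experiments):
    # Run n_outcomes independent t-tests on pure noise, keep the best one.
    best_p = min(
        stats.ttest_ind(rng.normal(size=n_subjects),
                        rng.normal(size=n_subjects)).pvalue
        for _ in range(n_outcomes)
    )
    if best_p < 0.05:  # report only the most flattering outcome
        false_positives += 1

print(f"'Significant' findings with no real effect: "
      f"{false_positives / n_experiments:.0%}")
```

A naive reading of p < 0.05 promises a 5% false-positive rate; picking the best of 20 tests pushes it past 60% (1 - 0.95^20 is about 0.64). That is exactly the kind of accidental misleading a researcher with a shaky grasp of statistics can commit in good faith, and it's one of the things pre-registering an analysis plan, discussed below, is meant to prevent.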

One of the other major problems is that we just haven't really been controlling for this. It seems obvious in hindsight: If you do things in a rigorous, scientific manner, others will be able to reproduce your results. But between pressure to publish, financial constraints, and too few eyes on a given body of work, it's very easy to give in to selective reporting of data. When funding is at stake, data tends to nucleate around points that confirm the desired thesis. Part of doing science is running up against the horrifying truth programmers already know: that no matter how terrible your lab is, everything else is exactly this hacked together, and the people who did it knew exactly as little as you, probably on a budget just as tight as yours. There is no huge conspiracy. Reproducibility just dies by a thousand cuts.

Forewarned is forearmed

It might seem simplistic, but the one thing scientists agreed on in chorus was that it's time to start building reproducibility steps into experiments during the planning stage. Trying to verify your own results is hard; if you've already made an error, chances are you'll also overlook it when sanity-checking your work. But there's a way around this. Pre-registration is a strategy where scientists submit hypotheses and plans for data analysis to some independent third party, getting an outsider's eyes on the game plan before they ever do the experiments. This is intended to tighten up experimental design, and to prevent cherry-picking data afterward.

At the heart of the problem, though, is human nature.

Wishful thinking, combined with the pressure to perform and produce, leads us to indulge belief in what we hope is true. Do you remember cringe-laughing when The Onion joked about adding the "seek funding" step to the scientific method? The fact that scientists have to beg and compete for funding, introducing marketing into research, is the reason we get debacles like Theranos, a Silicon Valley medical startup whose disruptive claims gathered huge amounts of venture capital but seem to be vaporware. It's easy to focus on what we want to see, and this is just as true for laymen as for veteran STEM researchers. In the end, it looks like Reagan had it right: Trust, but verify.