Science is an activity performed by humans, so it's inevitable that some of the scientific papers we cover will end up being wrong. As we noted yesterday, the cause can range from factors completely outside a researcher's control, like OS implementation oddities, to honest mistakes or even intentional fraud. In some cases, the problems are minor or peripheral to a study's main conclusions and can be handled with a correction.
In others, the issues are fatal to the paper's conclusion. In these cases, the only option is to retract the paper.
When Ars discovers that a paper we've covered has been retracted, we make an effort to go back and provide a notice of it in our article. But until recently, we didn't have a formal policy regarding what that notice should look like, and we typically didn't publish anything new to indicate a retraction had occurred.
Having given it some thought, we've concluded that this practice is insufficient. A failure to prominently correct the record makes it easier for people to hang on to a mistaken impression about our state of understanding. Perhaps more importantly, not reporting a retraction leaves people unaware of a key aspect of science's self-correcting nature, and of how retractions can sometimes actually advance our scientific understanding. That contrast is apparent in the two retractions we'll revisit today.
The first retraction dates back to 2016, when researchers published data indicating that fish could develop an appetite for microplastics. But shortly after the paper's release, accusations surfaced that it was based in part on fraudulent data that one of the researchers had simply made up rather than obtained through study. Those accusations set off an investigation at the researchers' home institution, Uppsala University.
We continued to track the situation until the investigation concluded that there had been misconduct and there was an indication that the paper would be retracted. At that point, we added an editor's note to the top of our original piece. We took no further action when the paper was formally retracted. With our new policy regarding retractions in place, we've gone back to alter the title and insert some introductory text as well.
What's the value of this sort of retraction? Beyond making it obvious that the people involved didn't follow our suggestions to budding fraudsters, this kind of retraction isn't especially useful from a scientific standpoint.
But it still offers a valuable lesson about science as a whole. Fraud like this highlights how science and peer review are largely built on a fundamental trust in the honesty of other researchers. While some forms of data manipulation can be detected without access to the original experiments, that tends to be the exception rather than the rule. Fortunately, in this case, science also had some trustworthy people: the ones who triggered the university investigation in the first place.
The second retraction we're noting is much more current, having been published on Monday. We covered the original paper back in June. The reported results were scientifically interesting: researchers found that people who carry mutations that protect them from HIV infection suffer from reduced life expectancy due to the mutation's other consequences.
But it was the social context that made the news so important: the mutation in question was the same one that a researcher had engineered into the world's first gene-edited babies.
Other researchers who saw the report, however, got in touch with the authors to flag a potential problem. The research was based on data from a resource called the UK Biobank, which relies on software to infer the genetic variants its participants carry from the data produced by DNA chip studies. It turns out that the software has a slight bias in how it interprets that data, something that was a poorly kept secret among human geneticists. Other researchers had noticed the bias when analyzing their own data, but the problem wasn't pronounced enough to justify a dedicated research publication.
When informed of this issue, the two researchers behind the analysis (Berkeley's Xinzhu Wei and Rasmus Nielsen) went back and reanalyzed their data, and they found that the effect they had seen was a result of this technical glitch. As the retraction notice states, "Because the main conclusion of the paper is invalid, both authors, Xinzhu Wei and Rasmus Nielsen, wish and agree to retract the Brief Communication in its entirety."
Not only is this an example of ethical behavior for other scientists to follow, but the retraction itself plays a valuable role in making researchers aware of the problems with UK Biobank data. So a high-profile retraction like this can play an important role in advancing science.
Those of you who follow the steady flow of scientific foibles immortalized at Retraction Watch will know that it's often not possible to find out why some papers are retracted, much less draw larger lessons from the retraction. But we'll do our best to continue to keep you notified when the research we cover doesn't hold up to critical review by the rest of the field.
Of course, we have to know about the retraction for that to happen. If you see that we've missed a case, please get in touch and let us know, either by emailing me directly or by using our contact form.