Time for a Retraction Penalty?

What’s behind paper retractions? (24)
by Adam Marcus and Ivan Oransky, Labtimes 03/2014




Journals are happy to boast about their impact factors. What if citations to retracted papers were excluded from those calculations?

As we write this in mid-August, Nature has already retracted seven papers in 2014. That’s not yet a record – for that, you’d have to go back to 2003’s ten retractions, in the midst of the Jan Hendrik Schön fiasco – but if you add up all of the citations to those seven papers, the figure is in excess of 500.

That’s an average of more than 70 citations per paper. What effect would removing those citations have on Nature’s impact factor, currently 42?

Science would lose 197 citations based on this year’s two retractions. And Cell would lose 315 citations to two now-retracted papers.

In other words, what if journals were penalised for retractions – forced to put their money where their mouths are when they boast about the quality of their peer review? Clearly, if a paper is retracted, then no matter what excuses journals make, peer review didn’t work as well as it could have.

Levelling the playing field

There’s evidence, in fact, that Cell, Nature and Science would suffer the most from such penalties, since journals with high impact factors tend to have higher rates of retraction, as Arturo Casadevall and Ferric Fang showed in a 2011 paper in Infection and Immunity. (The New England Journal of Medicine had the highest rate of all and also the highest impact factor.)

Perhaps a retraction penalty could even start to level the impact factor playing field. This is, to be fair, a bit of a thought experiment. Given how impact factors are calculated – citations in a given year to articles published in the previous two years, divided by the number of citable items from those years – it’s unclear just how such a penalty would affect the metric in real time. More likely, given the age of some of the papers being retracted today, it would mean adjustments to older impact factors rather than to the most recent ones that journals usually trumpet.
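For concreteness, here is the arithmetic of that thought experiment as a minimal Python sketch. Everything in it is hypothetical – the item counts, the citation total and the number of retracted-paper citations are made up purely to show how subtracting such citations would feed into the standard two-year calculation:

    # A sketch of the retraction-penalty thought experiment. The two-year
    # impact factor for year Y is: citations received in Y by items published
    # in Y-1 and Y-2, divided by the number of citable items from Y-1 and Y-2.
    # All figures below are hypothetical, chosen only to make the arithmetic
    # visible.

    def impact_factor(citations, citable_items):
        """Two-year impact factor: counted citations / citable items."""
        return citations / citable_items

    # Hypothetical journal: 1,600 citable items over two years; an impact
    # factor of 42 then implies 42 * 1600 = 67,200 counted citations.
    citable_items = 1600
    total_citations = 42 * citable_items

    # Suppose 500 of those counted citations point to since-retracted papers.
    retracted_citations = 500

    unadjusted = impact_factor(total_citations, citable_items)
    penalised = impact_factor(total_citations - retracted_citations,
                              citable_items)

    print(f"Unadjusted impact factor: {unadjusted:.2f}")  # 42.00
    print(f"With retraction penalty:  {penalised:.2f}")   # 41.69

On these made-up numbers, the penalty shaves only about a third of a point off the impact factor – one reason the reputation-point ideas below might bite harder than a straight citation deduction.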

The impact factor, say many scientists, is overused anyway. And just as any metric can be gamed, a retraction penalty could easily discourage retractions, which are already too difficult to obtain from some stubborn journals.

But that opens the door to thinking about ways to encourage transparent corrections of the scientific record, including retractions. Think of the incentives as reputation points rather than impact factor adjustments. What if journals earned reputation points for clear retraction notices? What if they earned similar points for responding quickly to questions about papers they’d published, instead of dragging their feet?

Building reputation

Journals might also earn points for raising awareness of their retractions, in the hope that authors would stop citing such papers as if they had never been withdrawn – an alarming phenomenon that John Budd and colleagues have quantified, and one reminiscent of 1930s U.S. Works Progress Administration crews paid to build something that another crew was paid to tear down. After all, if those citations no longer counted toward the impact factor, journals would have no incentive to let them slide.

Scientists already seem, at least unconsciously, to reward coming clean about honest errors. A study in Scientific Reports last year, with a title similar to this column’s, hinted at that: “These broad citation penalties for an author’s body of work come in those cases, the large majority, where authors do not self-report the problem leading to the retraction. By contrast, self-reporting mistakes is associated with no citation penalty and possibly positive citation benefits among prior work.”

Maybe scientists don’t need metrics to do the right thing, after all.



The authors run the blog Retraction Watch: http://retractionwatch.com


