Handling Corrections and a New R-Index
(March 28th, 2017) While scanning through the latest manuscripts on the BioRxiv preprint server, we noticed some fresh ideas for, and observations from, the academic publishing sector.
Whether they result from an inadvertent miscalculation, a contaminated cell line or downright scientific fraud, retractions of publications stigmatise researchers all the same. Often, the retraction notice doesn't even state why a study had to be withdrawn, and suspicion falls on all researchers involved. That's why Virginia Barbour (COPE), Theodora Bloom (executive editor of The BMJ), Jennifer Lin (Crossref) and Elizabeth Moylan (senior editor at BioMed Central) came up with a better way to handle retractions and corrections. “We propose a different model that publishers of research can apply to the content they publish, ensuring that any post-publication amendments are seamless, transparent and propagated to all the countless places online where descriptions of research appear. At the center, the neutral term 'amendment' describes all forms of post-publication change to an article,” they say in their recently published preprint.
According to their proposal, these amendments can further be categorised into three different types: insubstantial (for instance, typographical errors or an author-order switch), substantial (for instance, a change of authorship, the correction of a figure or method, or the addition of a small amount of data or discussion) and complete (large parts of the article are considered unreliable). Each amendment should include the following details: who is issuing the notice (authors, editors, publishers, institutions), which type it is, links to the publication, the date of issue and, in cases of complete amendment, additional information about the amendment's background. Thus, researchers can “publish updates along the way, sharing out the latest findings, analysis, and conclusion. For each version of a paper, an amendment can be issued. In each case, the publisher assigns a new DOI to each of the pieces and deposits the metadata with Crossref so that researchers can cite with clarity and specificity”, the authors say.
“The idea of the journal article as a monolithic object that will stand for all time unless formally retracted has gone. Rather we are seeing calls for articles to be viewed as organic publications or 'living articles',” the authors point out. With their proposal, they hope to “support the dynamic nature of the research process itself as researchers continue to refine or extend the work.”
In a second, recently published preprint, four neuroscientists from Karolinska Institutet in Stockholm carefully read scientific literature from the last 100 years and came to the conclusion that scientific texts have become harder and harder to understand, even for experts and peers. “Reporting science clearly and accurately is a fundamental part of the scientific process, facilitating both the dissemination of knowledge and reproducibility of results,” the authors say. There are trends towards simpler language, say Plavén-Sigray et al., for instance, in US presidential speeches (Donald, we're looking at you), but in science, this trend seems to run in the opposite direction.
Combing through 707,452 abstracts of papers published between 1881 and 2015 in 122 biomedical journals, and applying two well-established readability formulae that, for instance, count the number of syllables per word, the Swedish neuroscientists found that “the complexity of scientific writing is increasing with time”. In more detail, they revealed that “more than a quarter of scientific abstracts now have a readability considered beyond college graduate level English”. This held true not only for the abstracts but extended to the entire articles, and could be traced back to an increasing use of scientific jargon ("science-ese").
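To give a feel for how such readability formulae work, here is a minimal sketch of the Flesch Reading Ease score, one well-established formula of this family (the crude vowel-group syllable counter and the example sentences below are our own assumptions for illustration, not taken from the preprint):

```python
import re

def count_syllables(word):
    # Crude heuristic: one syllable per run of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Flesch Reading Ease: higher scores mean easier text.
    # Score = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

plain = "The cat sat on the mat. It was happy."
jargon = ("Immunohistochemical quantification demonstrated statistically "
          "significant upregulation of proinflammatory cytokines.")
print(flesch_reading_ease(plain) > flesch_reading_ease(jargon))  # prints True
```

Longer sentences and more syllables per word push the score down, which is exactly the trend the preprint reports for biomedical abstracts over time.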
“Our results, combined with the trends in scientific literacy, are worrisome. In addition, amidst concerns that modern societies are becoming less stringent with actual truths, replaced with true-sounding 'post-facts', science should be advancing our most accurate knowledge. One way to achieve this is for science to maximise its accessibility to non-specialists such as journalists, policy-makers and the wider public,” the authors argue and suggest a new metric, the r-index, as a measure for a scientist's average readability. “Such an ‘r-index’ could be considered an asset for those scientists who emphasise clarity in their writing,” the authors conclude.
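The preprint describes the r-index only as a measure of a scientist's average readability. A minimal sketch, assuming it is simply the mean readability score over an author's abstracts (the authors' exact definition may differ), could look like this:

```python
def r_index(readability_scores):
    # Hypothetical sketch: the r-index as the mean readability score
    # across one author's abstracts. Averaging is an assumption here;
    # the preprint's precise formula may differ.
    if not readability_scores:
        raise ValueError("need at least one abstract score")
    return sum(readability_scores) / len(readability_scores)

# Toy per-abstract Flesch-style scores for one author (invented values).
print(r_index([45.0, 50.0, 55.0]))  # prints 50.0
```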