(June 3rd, 2016) The London-based open access publisher BioMed Central is currently testing a text-mining application called StatReviewer. The software is expected to free reviewers of clinical trial manuscripts from some of the more unpopular tasks.
In recent years, peer review has drawn mixed comments. In 2013, science journalist John Bohannon successfully submitted methodologically flawed spoof research manuscripts about lichen-derived anticancer wonder drugs to 157 open access journals, exposing weaknesses in the review process. In 2014, a retrospective study on the impact of open peer review in biomedical journals showed that reviewers often missed deficiencies in the reporting of the methods and results of randomised clinical trials. However, most of the changes that reviewers did request improved reporting in the final version, in line with the demands of the CONSORT statement. The guideline, which is endorsed by leading medical journals, contains a 25-item checklist with recommendations for reporting on the design, analysis and interpretation of randomised clinical trials. It also provides a template for a clearly structured flow diagram summarising the enrolment of participants, allocation to interventions, follow-up and analysis.
Only when a clinical trial is fully reported can reviewers make an informed decision regarding the validity and reliability of the results. Checking compliance with reporting guidelines, though, seems an ideal task for computer-aided review. “StatReviewer parses each manuscript into sections and analyses whether key elements from the CONSORT statement are present, for example whether the study was blinded or whether the eligibility criteria for participants have been mentioned. The programme also checks for the appropriate use and reporting of p-values within the manuscript. However, it makes no judgement regarding the suitability of the methods,” explained Daniel Shanahan (photo), Associate Publisher at BioMed Central. The software is being developed by Timothy Houle, an Associate Professor at the Wake Forest School of Medicine, Winston-Salem, North Carolina, and by Chadwick Devoss from the company Next Digital Publishing, Madison, Wisconsin.
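StatReviewer itself is proprietary and its rules are not public, but the kind of check Shanahan describes can be illustrated with a minimal sketch: scan the manuscript text for cue phrases tied to a few CONSORT checklist items, and flag p-values that are reported in an implausible form. The cue lists, item names and the sample text below are all hypothetical illustrations, not the software's actual logic.

```python
import re

# Hypothetical cue phrases for a few CONSORT checklist items
# (illustrative only; StatReviewer's real rules are not public).
CONSORT_CUES = {
    "blinding": ["blinded", "double-blind", "single-blind", "masking"],
    "eligibility": ["eligibility criteria", "inclusion criteria",
                    "exclusion criteria"],
    "randomisation": ["randomised", "randomized", "random allocation"],
}

# Matches p-value statements such as "p = 0.03" or "p < 0.001".
P_VALUE = re.compile(r"\bp\s*([<>=])\s*(\d*\.?\d+)", re.IGNORECASE)

def check_manuscript(text):
    """Return which checklist cues are present and any suspicious p-values."""
    text_lower = text.lower()
    report = {item: any(cue in text_lower for cue in cues)
              for item, cues in CONSORT_CUES.items()}
    # "p = 0.0" is suspicious: an exact zero should normally be
    # reported as a bound, e.g. "p < 0.001".
    report["suspicious_p_values"] = [
        m.group(0) for m in P_VALUE.finditer(text)
        if m.group(1) == "=" and float(m.group(2)) == 0.0
    ]
    return report

sample = ("This double-blind trial randomised 120 adults meeting the "
          "inclusion criteria. The primary outcome differed between "
          "groups (p = 0.0).")
print(check_manuscript(sample))
```

Real pre-review manuscripts would of course defeat such naive string matching, which is exactly the synonym and terminology problem Shanahan describes below; the sketch only shows why checklist compliance, unlike methodological judgement, lends itself to automation at all.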
StatReviewer is now being tested on clinical trial manuscripts submitted to the BioMed Central journals Trials, Critical Care, BMC Medicine and Arthritis Research and Therapy. “We have been working with the developers for the last two to three years,” Shanahan said. In trial runs with a series of published articles, the software has performed very well. “However, pre-review manuscripts present a host of challenges compared to published articles. They are less structured and have inconsistent formatting. There are also issues around the use of the English language and potential typos and mistakes, which are usually removed before publication,” the Associate Publisher observed. “Moreover, different fields of medicine use different terminologies. A report of a clinical trial in anaesthesiology, for example, will be worded very differently from one concerning musculoskeletal disorders. This makes the digital matching of synonyms exponentially more complex,” Shanahan noted. The interpretation of the findings in the context of wider research also presents particular challenges and might, in the end, be better left to the human reviewer.
The manuscript revisions suggested by StatReviewer are meant to complement the traditional review process by editors and peers, not to replace it. “We will be evaluating the StatReviewer report to see if it correctly identified all the missing and reported information. We will also be comparing this to the anonymised ‘human’ reviews, to see if this represents an improvement over the current situation…”, Shanahan wrote in his blog. “One of the most interesting questions this poses is how authors will respond to an automated review,” he added. Editors are already using digital means to detect plagiarism and image manipulation.
BioMed Central, which published around 15,000 articles in peer-reviewed medical journals in 2015, also leads cross-publisher innovations such as the Linked Clinical Trials project. Via the CrossMark dialogue box, readers can now view all documents related to an individual clinical trial, such as the study protocol, publications, commentaries, secondary analyses and systematic reviews, which are published in different places and perhaps years apart. The new tool helps editors, reviewers and researchers recognise bias and selective reporting in a study more quickly.
“While there are a huge number of developments and initiatives in peer-reviewed publishing coming to the fore, the single most fundamental aspect for making changes is the researchers themselves. They are the authors, the reviewers, the editors and the readers, and any improvements will need their buy-in,” Shanahan stressed.
Photos: Fotolia/kirill_makarov (robot), Daniel Shanahan