Strategy for Success
(November 22nd, 2016) Comparing scientists' lab behaviour to natural ecosystems, UK researchers revealed that science's “current incentive structures are in conflict with maximising the scientific value of research”.
“Publish or perish” is something you hear in every lab. Every researcher knows their career depends on their publication record. And it’s not just how many papers you publish; it’s where those papers appear. The higher the impact factor, the better.
The idea that publication in a high-impact-factor journal signals good science seems logical, but there is growing concern in the scientific community that this is not always true. Andrew Higginson, from the University of Exeter, UK, and Marcus Munafò, based at the University of Bristol, UK, argue that the drive to publish novel ideas increases the risk of producing underpowered studies and erroneous conclusions.
In a study published recently in PLoS Biology, the duo compared scientists' research strategies to natural ecosystems. The idea may seem far-fetched but, in nature, animals must find the behaviour that maximises their fitness and chances of survival. On closer inspection, this is not so different from scientists adopting a particular strategy, “publish or perish”, to maximise their chances of survival in the scientific community. “Science is a complex system and ecological approaches are well-suited to understanding complex systems,” explain the authors. “Scientists have to decide what proportion of time to invest in looking for exciting new results, rather than confirming previous findings, and they also must decide how much resource to invest in each experiment.”
Using this model, in which scientists decide whether to pursue new ideas or confirm previous studies, Higginson and Munafò found that the number of incorrect conclusions increases with the weight put on novel findings. In other words, researchers are much more likely to reach inaccurate conclusions if they are constantly pushed to publish new ideas in high-impact-factor journals.
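The authors' full model is more elaborate, but the core trade-off can be sketched with a toy false-discovery calculation: under a fixed resource budget, running many small novel studies yields lower statistical power per study, so a larger fraction of the resulting "discoveries" are wrong. The effect size, prior probability, and budget below are illustrative assumptions, not values from the paper.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power(n, effect=0.3, alpha=0.05):
    """Approximate power of a one-sided z-test with n samples.

    effect is a standardised (assumed) effect size; 1.645 is the
    one-sided 5% critical value of the standard normal.
    """
    z_alpha = 1.645
    return norm_cdf(effect * math.sqrt(n) - z_alpha)

def false_discovery_rate(n, prior=0.1, alpha=0.05):
    """Fraction of significant results that are false positives,
    when only a fraction `prior` of tested hypotheses are true."""
    p = power(n, alpha=alpha)
    return alpha * (1 - prior) / (alpha * (1 - prior) + p * prior)

# Fixed total budget of 400 samples, split across studies:
# many small novel studies vs a few large confirmatory ones.
budget = 400
for n_studies in (20, 4):
    n_per_study = budget // n_studies
    fdr = false_discovery_rate(n_per_study)
    print(f"{n_studies} studies of n={n_per_study}: FDR = {fdr:.2f}")
# Smaller studies have lower power, so a larger share of their
# significant findings are false — the FDR for n=20 exceeds that for n=100.
```

This is only a textbook power/false-discovery argument dressed in the paper's framing; the actual model also optimises the time split between novel and confirmatory work.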
The good news is that there are ways to deal with this problem, and they don’t require drastic measures. “In the UK, there is an emphasis on the 'quality' of very few papers. In some other countries, overall productivity is important, so an immediate change could be to take into account the gross number of publications,” say the authors. “Our results suggest that there would be substantial benefits to simply reducing the magnitude of the weighting of 4* papers for determining funding, and taking into account 1* and 2* papers.”
Higginson and Munafò believe journal rankings and impact factors remain valuable, as there is still a need to quantify research output. However, they argue it is urgent to give junior scientists the space to develop ideas, by allowing them to do their own work rather than expecting them to hire postdocs. “Currently, most grants fund the salary of a postdoc, rather than buy out the time of lecturers to do the research themselves. The system is illogical anyway, because scientists get tenure largely on the back of their research; but once they get tenure, they no longer have time to do the research themselves but become research managers, so their (proven) talent is wasted.”
It’s safe to say that every researcher would like to see things done better. The problem is that, under the current emphasis on impact factor, those who make the first move risk being punished by becoming less competitive when applying for jobs or grants. For the authors, “a wholesale change is needed and this will have to be at least partly imposed top-down, by institutions and funders, because scientists are engaged in a perilous game-theoretic situation where unilateral change may be foolhardy”.