Is there “Institutionalised Corruption” in Science? (Part 2 of 3)
(March 11th, 2016) In part 2 of our series on corrupt publication practices, Jeremy Garwood looks at whether there are new statistical clues to authorship abuse.
Some authors are credited with vast numbers of multi-author research publications. Such large numbers might be an indication of abusive research practices. Peter Lawrence says he openly doubts the integrity of 'leading researchers' who can author one paper a week over a 5-year period – “There is no way any of these people can take scientific responsibility for the work they are authoring, even if they are all supermen (which I doubt).”
However, far from calling for Lab Times to stop publishing its rankings of ‘most-cited authors’ (discussed in Part 1), he wants us to continue and to extend our analysis, publicly exposing the extent to which some of the biggest names in biomedical research are abusing their authorship status - allowing, expecting or demanding that their names be included as co-authors on many publications to which they could not possibly have made a truly significant contribution. “I don’t blame Lab Times one jot, in fact I, like other people, find these lists riveting and hope you will go on compiling them.”
In fact, Peter Lawrence says there are several crucial points to be learned from LT’s Publication Analyses – “One is that being an author and being responsible for the detailed contents and the conclusions in a paper a week is simply unfeasible. In my opinion this is prima facie evidence for a corrupt system. Not only can these operators do this, but we laud them for doing it; the system gives them rewards and grants and calls them high-fliers. I am sure some of them have ability, but that is not the point. The second is that they are nearly all men. And I think this is no chance event but that this kind of competitive cheating behaviour is by and large the province of men”, he said, referring to an essay he has written on this matter - “Men, Women, and Ghosts in Science”.
New publication statistics?
In response to Peter Lawrence’s comments, we considered the possibility that publication statistics for scientific research might be used to detect signs of unjustifiably high rates of authorship. We present here some possible suggestions for new publication statistics that might be mined from the publication databases. Re-examining this data using alternative bibliometrics could provide different perspectives on these big publication numbers. Unfortunately, some of these extra statistical measures might require lots of extra work for our LT editor, but if citation indexes like the Web of Science were to systematically introduce such changes to their software, it would obviously make our task a lot easier.
1. Average number of days per published paper
The big total number of papers could be divided by the number of days over which they were published, revealing the average number of days taken to publish a single paper. For example, 100 papers published over a five-year period is equivalent to 20 papers per year, or 365 days divided by 20 = 18.25 days per paper. Many hard-working laboratory researchers might ask, from their own experience, how it is possible to actually do the experiments, obtain and verify the results, and write up and submit a research paper in 18.25 days - not just once or twice, but every 18.25 days for 5 whole years (i.e. over a total of 1825 days). However, with Jeroen Bax, our top-ranking author from Cardiovascular & Circulation Research, we have 579 articles published over a 7-year period. Allowing 2 extra days for leap years, that still makes 2557 days in total, or an average of 4.4 days per published article!
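The arithmetic above can be sketched in a few lines of Python (the function name and structure are ours, not anything offered by the citation indexes):

```python
def days_per_paper(n_papers, n_years, leap_days=0):
    """Average number of calendar days available per published paper."""
    total_days = n_years * 365 + leap_days
    return total_days / n_papers

# The hypothetical 100-papers-in-5-years case from the text:
print(days_per_paper(100, 5))                          # 18.25 days per paper

# Jeroen Bax: 579 articles over 7 years, allowing 2 leap days:
print(round(days_per_paper(579, 7, leap_days=2), 1))   # 4.4 days per paper
```

The same two-line calculation could be run against any author's record once the paper count and publication window are known.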
2. Average number of collaborators per paper
One answer to this might be teamwork - the author whose name appears on 100 papers has not done all of the work alone (although there may still be some single-author articles, and this number could also be indicated). The majority of the articles have multiple authors. How many other authors appear on the star author’s articles? Using bibliometrics, we can show the average number of co-authors on the 100 papers - it might be 3 to 4 on average, with a range from two co-authors to seven (or more). This gives us a better idea of how one star author’s apparent publication success might depend on the shared input of many other authors. In this respect, we could also note the total number of different researchers who appear as co-authors on these articles.
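As a sketch of the proposed metric, using invented author lists (no real publication data), the average co-author count and the pool of distinct co-authors could be computed like this:

```python
# Toy records: each paper represented as its list of author names (invented data).
papers = [
    ["Star", "A", "B"],
    ["Star", "B", "C", "D"],
    ["Star", "E"],
]

# Co-authors per paper, excluding the star author themselves:
coauthor_counts = [len(p) - 1 for p in papers]
avg_coauthors = sum(coauthor_counts) / len(papers)

# Total pool of different researchers sharing authorship with the star:
distinct_coauthors = {name for p in papers for name in p} - {"Star"}

print(avg_coauthors)              # 2.0
print(sorted(distinct_coauthors)) # ['A', 'B', 'C', 'D', 'E']
```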
3. Number of research institutional addresses per paper
But how many of these co-authoring researchers are in the same laboratory as our selected star author? Who in fact works elsewhere? We can also note the number of different lab affiliations that are included for all of the co-authors. The star author may also have several affiliations - in addition to maintaining the hard-driving rhythm of 18.25 days per paper over 1825 days (or for Bax, 4.4 days per paper over 2557 days), they may also have had time to move to other laboratories during the reference period or may be capable of administering research activities in several different laboratories simultaneously.
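Counting affiliations per paper is equally mechanical, assuming the address fields can be parsed cleanly (the lab names below are invented):

```python
# Toy records: the set of laboratory affiliations listed on each paper (invented).
papers = [
    {"Lab X", "Lab Y"},
    {"Lab X"},
    {"Lab X", "Lab Z", "Lab Y"},
]

# Number of different institutional addresses per paper:
affiliations_per_paper = [len(a) for a in papers]

# All laboratories involved across the star author's record:
all_labs = set().union(*papers)

print(affiliations_per_paper)  # [2, 1, 3]
print(len(all_labs))           # 3 distinct laboratories overall
```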
4. Distribution of Citation Statistics between papers
What about citation metrics? Citation counts are, of course, at the heart of many controversial rankings of journals (impact factors etc.) that have been accused of creating distorted and damaging incentives in scientific research since their first appearance some half-century ago. The inclusion of our star author in Lab Times’ Publication Analysis is based on the large total number of citations to the research publications that they have co-authored. However, we might look at the relative citation rate of these papers, for example, the average number of citations per article, the range of citations per article, and the number of articles with no citations, fewer than five citations, fewer than 10 citations, 10 citations or more, etc. Furthermore, we might consider the number of co-authors and laboratories that have appeared on these more or less highly-cited articles. Do articles with more authors attract more citations? For example, might some of these extra citations be coming from the authors themselves through self-citations ("On Self-Citation")?
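The distribution we describe could be tallied as follows; the citation counts here are invented, and the bucket boundaries simply mirror those suggested in the text:

```python
# Toy citation counts per article (invented numbers).
citations = [0, 3, 3, 7, 12, 50, 120, 1, 0, 9]

# Bucket the articles by citation level, as suggested in the text:
buckets = {
    "no citations": sum(c == 0 for c in citations),
    "1-4": sum(0 < c < 5 for c in citations),
    "5-9": sum(5 <= c < 10 for c in citations),
    "10 or more": sum(c >= 10 for c in citations),
}
avg = sum(citations) / len(citations)

print(buckets)          # {'no citations': 2, '1-4': 3, '5-9': 2, '10 or more': 3}
print(avg)              # 20.5 citations per article on average
```

Note how a handful of highly-cited articles (here, 50 and 120 citations) can pull the average far above what most of the papers achieve.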
5. ‘Big Labs’ and the distribution of fair research credit
Other measures that have been used also look at the ‘Big Laboratory’ phenomenon. Some laboratories have many research students, post-docs and technicians; others are more modest in size. We might provide data on these numbers for the star author’s own laboratory. We might also note the number of publications that each of these laboratory workers has shared with the star author. For example, on how many publications, on average, does each PhD student appear as a co-author? Similarly, how many papers on average do postdoctoral researchers co-author with our star author?
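Given per-member co-publication counts (the figures and role labels below are invented), the per-role averages we propose would reduce to:

```python
# Invented data: number of papers each lab member shares with the star author,
# grouped by role.
shared = {
    "PhD student": [2, 3, 1],
    "postdoc": [6, 8],
}

# Average number of co-authored papers per role:
averages = {role: sum(counts) / len(counts) for role, counts in shared.items()}

print(averages)  # {'PhD student': 2.0, 'postdoc': 7.0}
```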
6. Research costs per article
Finances also come into play. In theory, each article gives details of the funding sources for the research it presents. We might note the number of different sources and the average number of articles per funding source. Unfortunately, we do not have the amounts of money involved for each source. We might, however, look at some of the larger ones - large funding agencies usually provide details of total funding for large grants, and the websites of host institutions may publish the sums of the larger research grants awarded to their star authors.
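Assuming the funding acknowledgements can be extracted from each article (the agency names here are invented), the counts we propose would look like this:

```python
from collections import Counter

# Invented acknowledgement data: funding sources credited on each article.
acknowledgements = [
    ["Agency A", "Foundation B"],
    ["Agency A"],
    ["Agency A", "Charity C"],
]

# How many articles credit each source:
per_source = Counter(src for ack in acknowledgements for src in ack)
n_sources = len(per_source)
avg_articles_per_source = sum(per_source.values()) / n_sources

print(n_sources)                        # 3 different funding sources
print(per_source["Agency A"])           # 3 articles credit Agency A
print(round(avg_articles_per_source, 2))
```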
7. The sex ratio
As Peter Lawrence has remarked, the LT Publication Analysis consistently reveals male-dominated rankings: only 28 women were among the 360 'most cited' researchers in the 12 subject areas analysed in LT 1/2014 to LT 1/2016. Is this a reflection of research 'productivity', or of a willingness to raise the numbers by any means? “At present, in the competition for academic posts, we expect our candidates to go through a gruelling process of interview that demands self-confidence. We are impressed by bombast and self-advertising, especially if we don’t know the field, and we may not notice annexation of credit from others, all of which on average are the preferred province of men”, he noted in "Men, Women, and Ghosts in Science".
What kind of pattern might emerge?
The total funding per star author’s laboratory? The average funding per article?
Obviously, there is the danger that a superficial reading of high publication numbers and laboratory funding might suggest greater ‘value for money’ - more published articles at a lower average cost per article. But we ought to consider the additional funding flowing in from the other affiliated laboratories - if the funding of the co-authors’ laboratories is added to the equation, would the cost per paper still look such great value for money? And citations - might the most-cited articles be seen as better value for money?
As with so many bibliometrics, the sheer variety of researchers, research topics, research goals, and relative success on different projects means that simple numbers do not provide anything approaching the whole picture of a laboratory or a star author’s total activity, let alone that of a whole area of research.
However, it does provide further evidence of abusive behaviour by some researchers in a highly competitive research environment. They have taken advantage of a lazy tendency among universities, politicians and funding agencies to assume that co-authorship of large numbers of research papers is a direct reflection of the quality of a researcher’s competence and skill as a scientist - a tendency that rewards those who know best how to exploit it, to display big numbers and to count on nobody looking at the details.
In fact, there is a term for this tendency to give yet more money and resources to those who already have more than their fair share – the Matthew Effect - formulated in 1968 by the great American sociologist of science, Robert Merton. It describes the phenomenon where “the rich get richer and the poor get poorer” and is named after the Gospel of Matthew in the Bible – “For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken even that which he hath” (Matthew 25:29, King James Version).
Merton found that eminent scientists get disproportionately greater credit for their contributions to science while relatively unknown scientists tend to get disproportionately little credit for comparable contributions: “The Matthew Effect consists in the accruing of greater increments of recognition for particular scientific contributions to scientists of considerable repute and the withholding of such recognition from scientists who have not yet made their mark.”
And it probably lies at the heart of some of the inflated authorship numbers revealed in the LT publication analyses – committees for jobs and funding tend to be easily impressed by big numbers of publications, citation rankings, and exaggerated claims. After all, it’s easier than actually thinking about the quality of the research that is being performed.