Is there “Institutionalised Corruption” in Science? (Part 1 of 3)

(March 9th, 2016) Lab Times has received a call for more publication statistics from Peter Lawrence, a well-known critic of bibliometrics. He says that further analysis of publication databases may help to reveal authorship abuse and corrupt publication practices.

Peter Lawrence, a senior Cambridge biologist, has written a number of polemics condemning the ways in which senior scientists abuse their authority to steal credit from juniors (e.g. ‘Rank Injustice’, ‘The politics of publication’ and ‘The Mismeasurement of Science’). In Lab Times, he has previously discussed how such practices have contributed to a crisis in the current research system (‘The heart of research is sick’). He recently contacted Lab Times about the ‘Publication Analysis’ articles that regularly appear in the magazine.

“Every other month,” says Peter Lawrence, “Lab Times is publishing a list that, in my view, contains evidence for institutionalised corruption.” The ‘list’ in question is the ‘Most Cited Authors’. In it, there is a ranking of the top 30 most highly-cited researchers in a given domain of life or biomedical science. In addition to the total number of citations each author has received for their publications during a 5-6 year period, the table gives the total number of publications on which they were listed as authors or co-authors. And some of the numbers are very big. As Lawrence explains, it is what the numbers on these lists tell us that upsets him – “There are hundreds of so-called top scientists authoring as many as a paper a week, every week of the year, including holidays, and including the presumably massive amounts of time they are out travelling and ‘big-shotting’.” By comparison, he points to a successful and principled scientist like Eric Wieschaus, who was awarded the Nobel Prize in Physiology or Medicine in 1995. Wieschaus, he says, has been researching full time for more than 40 years, yet has produced fewer than 200 publications in his scientific lifetime, and usually in association with a group of junior colleagues.

“Many of those guys (in the Lab Times rankings) purport to author more than a paper a week! And I mean guys, there are very few women, even near the top of any of these lists.” Indeed, looking back over the publication analyses from LT 1/2014 to LT 1/2016 reveals just 28 women among the 360 ‘most cited’ researchers in 12 different subject areas – that’s just 7.8%. Is this a reflection of research ‘productivity’ or of a willingness to raise the numbers by any means?

Lawrence worries that modern metrics portray these scientists as “the most successful in their profession” and that our research system may consequently “reward them with more and more grants” to the detriment of more principled researchers.

LT Publication Analysis analysed

Lab Times has been compiling its Publication Analysis since it was first published in 2006. During that time, it has analysed some 38 different subject domains from life sciences and biomedicine. These publication analyses are currently produced by Lab Times editor, Kathleen Gransalke.

The aim of the Publication Analysis is to provide an overview of the research activity in a particular domain based upon the number of publications in that domain and their overall citations during a 5-6 year period. Using a database of research articles, LT compiles data on the total number of articles published in the relevant domain, the countries in which the research was performed (this gives a rough indication of the research activity in that domain in different countries) and the total number of citations associated with these articles, i.e. the extent to which the published articles have been cited by subsequent publications up to a certain cut-off date (usually at least 2 years after publication).

Each Publication Analysis presents 4 tables of compiled statistical data:

First, by country - the total number of citations for all articles published in specialist journals from each European country (Lab Times is a European magazine). A country’s figures are derived from articles where at least one author working in the respective European nation is included in the authors’ list.

Second, a comparison of the total number of citations for all European countries with those for the rest of the world (USA, Canada, Japan, etc.).

Third, the ranking of the 30 “Most Cited Authors”. LT compiles a list that ranks the star authors in a particular domain by the total number of citations that their publications from the period under study have received.

Finally, there is a list of the five most highly-cited papers from the domain published during the period.

When compiling her Publication Analysis of a particular domain, let's say, basic neuroscience, Kathleen Gransalke starts by finding all the expert journals listed in that category by Thomson Reuters’ Web of Science. This provides the titles of all the publications, the names of authors, their addresses, and the citation data for the articles. “From that,” she says, “we can limit to papers with European authors. From that we can make two lists of about 250 authors each. One with most frequent authors of highly-cited papers and a second list that includes most frequent authors in all expert journals. There's also a third list, which is compiled using different parameters like address (Neuroscience Institute) or search terms (Alzheimer, neuron, glia etc).” All in all, Kathleen says, it takes her about a week to do each analysis.
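The selection and ranking steps that Gransalke describes can be sketched in a few lines of code. This is only a rough illustration: the record format, the country set and the names below are invented for the example, and a real Web of Science export contains many more fields.

```python
from collections import defaultdict

# Hypothetical records: (title, author names, author countries, citations).
# A simplified stand-in for a Web of Science export of one journal category.
papers = [
    ("Paper A", ["Smith", "Jones"], ["UK", "DE"], 120),
    ("Paper B", ["Smith", "Lee"],   ["UK", "US"],  80),
    ("Paper C", ["Lee"],            ["US"],       200),
]

EUROPE = {"UK", "DE", "FR", "NL"}  # illustrative subset of European countries

# Step 1: keep only papers with at least one Europe-based author.
european = [p for p in papers if EUROPE & set(p[2])]

# Step 2: credit every listed co-author with the paper's full citation count --
# this is how one highly-cited paper can boost many authors' totals at once.
citations = defaultdict(int)
paper_counts = defaultdict(int)
for title, authors, countries, cites in european:
    for name in authors:
        citations[name] += cites
        paper_counts[name] += 1

# Step 3: rank authors by total citations, as in the 'Most Cited Authors' table.
ranking = sorted(citations.items(), key=lambda kv: kv[1], reverse=True)
```

Note how step 2 rewards co-authorship on many papers regardless of the size of each contribution – precisely the property that makes such totals sensitive to authorship abuse.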

However, the part of the Publication Analysis that has drawn the attention of Peter Lawrence concerns the list of ‘Most Cited Authors’, or more precisely, the total number of papers that they have appeared on as co-authors. The key question is – What seems like a reasonable, or even plausible, number of publications for a research scientist in a given time period?

Kathleen Gransalke has also remarked on the extraordinarily high numbers of papers authored by some of her star authors. In LT 02/2015, she looked at Cardiovascular & Circulation Research for the publication period 2007-2013. In her ranking of the most-cited star authors, she couldn’t help noticing that the top three positions were taken by authors with over 500 papers each, namely:

1. Jeroen J. Bax, at the Leiden University Medical Centre (The Netherlands) with 24,151 citations for a total of 579 articles;
2. Patrick W. Serruys, from the Erasmus Medical Centre, Rotterdam (The Netherlands) with 21,758 citations to 562 articles; and
3. Gregory Y. H. Lip, at the University of Birmingham (UK) who had 20,612 citations to his 527 articles.

Kathleen commented that “productivity has been a contentious issue in recent times in academic circles, but you can’t accuse several of our top authors of being unproductive. Over 500 papers in seven years – this equals 1.5 papers per week. One wonders whether the researchers have time to read all the papers they have authored.”
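The arithmetic behind that “1.5 papers per week” remark is easy to check for the three authors named above (taking seven years as roughly 364 weeks):

```python
# Papers per week for the top three authors in the 2007-2013 period.
years = 7
weeks = years * 52  # 364 weeks in round figures

top_three = {"Bax": 579, "Serruys": 562, "Lip": 527}
papers_per_week = {name: n / weeks for name, n in top_three.items()}
# Bax: ~1.59, Serruys: ~1.54, Lip: ~1.45 -- i.e. roughly 1.5 papers a week,
# and anything over 500 papers works out to at least ~1.4 papers every week.
```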

What is authorship abuse?

As we have discussed previously, there have been more and more calls to tighten the criteria for inclusion as an author on scientific research articles (e.g. ‘True credentials’; ‘Work Enthusiasm or Authorship Abuse?’, Part 1 and Part 2).

Research articles are very important as the formal record of a scientist’s research activity: most research leaves no trace other than these written descriptions of what experiments were performed, what results were obtained, and how these studies relate to everybody else’s research and understanding of a particular problem or domain.

Yet scientific research as a whole is expensive, involving the investment of huge sums of public money. This public money is provided because there is an implicit trust that scientists are using the resources well, that they are doing good research work and producing valuable insights that help to advance human knowledge and technology. But is this trust fully justified?

Competition for research careers and funds has become a lot harder during the last few decades. In order to compare researchers and decide who will get the jobs and research funding, committees evaluate the personal publication records of each researcher. This has generated increasing pressures on individual scientists to obtain a better publication record than their competitors. Not everyone can produce highly publishable original data when they need it, and some scientists have been accused of unethical authorship practices to increase their chances of success. By appearing as a co-author on more and more publications, they can claim credit for more than just their own research activity.

Recent codes of good research conduct have equated such practices with fraud: researchers misrepresent the truth of their research activity and steal credit from those who have performed the bulk of the published research. For example, Research Councils UK (RCUK) states that unacceptable scientific conduct includes “Misrepresentation of involvement: inappropriate claims to authorship and/or attribution of work where there has been no significant contribution, or the denial of authorship where an author has made a significant contribution” (RCUK Policy and Code of Conduct on the Governance of Good Research).

There are different ways in which researchers can 'misrepresent' their authorship on publications. In “Authorship: why not just toss a coin?”, Kevin Strange defines:

Coercion authorship: Use of intimidation tactics to gain authorship.

Honorary, guest, or gift authorship: Authorship awarded out of respect or friendship, in an attempt to curry favour and/or to give a paper a greater sense of legitimacy.

Mutual support authorship: Agreement by two or more investigators to place their names on each other’s papers to give the appearance of higher productivity.

Duplication authorship: Publication of the same work in multiple journals.

Ghost authorship: Papers written by individuals who are not included as authors or acknowledged. This is a situation that has often been linked to commercial interests, e.g. in articles that promote pharmaceutical drugs, or that deny the toxicity of products, such as tobacco. Industry employees write the article and academic scientists receive ‘honorary’ authorship.

Denial of authorship: Publication of work carried out by others without providing them credit for their work with authorship or formal acknowledgment. 

In order to eliminate such abusive practices, there have been calls to make it clear what constitutes an acceptable claim to authorship on a scientific publication. For example, the International Committee of Medical Journal Editors (ICMJE) has formulated fairly strict guidelines to clarify who should be an author on a research paper. The ICMJE recommends that all authors should meet all four of the following criteria: 
1. Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
2. Drafting the work or revising it critically for important intellectual content; AND
3. Final approval of the version to be published; AND
4. Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
In addition to being accountable for the parts of the work he/she has done, an author should be able to identify which co-authors are specifically responsible for other parts of the work. Authors should also have confidence in the integrity of the contributions of their co-authors. Meanwhile, any contributors who meet fewer than all 4 of the above criteria for authorship should not be listed as authors, but they should be acknowledged. Other activities that do not qualify for authorship (without other contributions) include acquisition of funding, general supervision of a research group or general administrative support, writing assistance, technical editing, language editing, and proofreading.

Unfortunately, cases of authorship abuse are usually only revealed when a research scandal results in an official investigation of the researchers involved. For example, in 2005, when a high-profile Science paper by the South Korean researcher Woo Suk Hwang was found to contain fraudulent results, his co-author, the US researcher Gerald Schatten, was investigated by the University of Pittsburgh, where he worked. In order to avoid a career-destroying charge of fraud, Schatten had to admit that he had not contributed to Woo Suk Hwang’s published research. Although the university agreed that Schatten was not responsible for Woo Suk Hwang’s fraud, it found him guilty of ‘scientific misbehaviour’, stating that his listing as last author on the fraudulent paper “not only conferred considerable credibility to the paper within the international scientific community, but directly benefitted Dr. Schatten in numerous ways including enhancement of his scientific reputation, improved opportunities for additional research funding, enhanced positioning for pending patent applications, and considerable personal financial benefit.”

Meanwhile, Schatten’s only contribution to another paper by Woo Suk Hwang, which described the cloning of a dog (published in Nature), had been to suggest “that a professional photographer be engaged so that Snuppy (the dog) would appear with greater visual appeal.” Clearly, this does not meet the ICMJE’s definition of an acceptable claim to authorship!

Jeremy Garwood

Picture: Hudson

Last Changes: 04.11.2016