Lucky Outliers

Don’t be misled by citation figures!
by Ralf Neumann, Labtimes 04/2010

Journal Tuning

Joseph Cooper had already been retired for a couple of years and looked back fondly on decades of a satisfying research career. He had certainly achieved a lot. In particular, his lab had revealed a whole series of players and mechanisms crucial to the structure and dynamics of the cytoskeleton.

“Oh yes, back then research certainly was a lot of fun,” Cooper often thought. And in these moments he always felt lucky to have been able to end his career at exactly the right time. “Rat race”, “publish or perish”, “apply or die”: all these nasty buzzwords characterising large segments of the current research business hadn’t been around until the very end of his career. The same was true for the increasing lunacy about citations and impact factors.

It was the latter phenomenon, in particular, which he didn’t understand. At the end of the day, one of the foremost qualities of a scientist should be to analyse data as carefully as possible and to interpret them just as thoroughly and critically. “Why then do so many researchers forget these principles when it comes to citation counts and impact factors?” Cooper repeatedly asked himself.

In particular, the “ranking lists” of Thomson Reuters, the institution that monopolistically counted, archived and analysed citations, were a thorn in Cooper’s side. However, it was not only the fact that such lists compared papers that simply cannot be compared: apoptosis articles with studies on plant secondary metabolism, say, or with clinical trials on psoriasis treatment. No, there was another thing that constantly gnawed away at him…

It was just a couple of months ago that Thomson Reuters had published a list of the most highly-cited scientific papers of all time. Of course, the two well-known “methods papers”, on protein determination by Oliver Lowry et al. and on SDS polyacrylamide gel electrophoresis by Ulrich Karl Laemmli, were ranked at the very top. Decades later, as luck would have it, their methods were still being broadly applied, almost unmodified. And thus, up until today, their papers have still not stopped being cited excessively. Take the Lowry paper, for example: although published as early as 1951, it reportedly still collects about 10,000 citations a year.

This, however, is not the rule, as Cooper knew only too well. In 1969, he too had written such a methods “high-flyer paper”, about the determination of molecular weights in SDS-polyacrylamide gels. But his paper “suffered” the more usual fate: for about ten years it was cited, cited, cited; then the citation rate suddenly dropped, and twenty years (or 20,000 citations) later it only rarely appeared in any reference list.

This, however, was not because molecular weights were no longer determined in SDS gels. Nor had anyone developed a better method. No, Cooper’s method had simply become a matter of course in the everyday world of experimentation, just like, for example, the adjustment of pH values. And “a matter of course” no longer needs to be cited.

“Of course, it’s right and proper this way,” Cooper thought. “Otherwise, the reference lists would one day be longer than the articles themselves. What about Watson and Crick? Nobody still cites them when writing something about the structure of DNA.” However, for some reason that remained elusive to Cooper, now and again there were outliers to this scheme, Lowry and Laemmli being prime examples. And whereas this was of only minor annoyance to Cooper, it greatly put into perspective what it means to be reputed the author of the “most-cited paper of all time”.

Last Changed: 03.05.2012
