Lab Times Summer Read (6) – “The Heart of Research is Sick”

(July 28th, 2017) Digging deep into our archive, we found quite a few gems from the past, worth a second read. Here's a 2011 interview with developmental biologist Peter Lawrence about the 'broken' research system.

About ten years ago, you began publishing the first of a series of articles criticising the way in which the scientific research system is organised and the direction it’s taken. What motivated you to publish your first article, “Science or Alchemy?”, in which you condemn the ‘alchemy of spin’ that has crept into research articles?

That’s an interesting question. Really, what started me on this was something else. When my PhD supervisor, Sir Vincent Wigglesworth, died, I wrote an obituary in Nature together with another former student of his, Michael Locke. We called it “A man for our season” and explained Wigglesworth’s approach to science and his ideas about putting research first and administration second. I was also asked to give the first Wigglesworth Memorial lecture at the International Congress of Entomology. I talked mostly about Wigglesworth’s scientific work but, at the end, I put in a ten-minute section on his scientific style – how he saw what was going wrong with modern science and how he differed from the way things are done nowadays. For example, he gave his students complete independence and did not put his name on their papers. He supervised ‘by example’ – he just went off and did his own research.

I got such an overwhelming response that I realised there was a need for a voice to express the frustration that many scientists felt, particularly young scientists, about what was happening to science. Since then, the trends that I picked out have continued, getting worse and worse and worse, until the whole fabric of science and the way we do things has become corrupted. There are many problems. Some are more interesting than others.

Essentially, it’s the publication process. It has become a system of collecting counters for particular purposes – to get grants, to get tenure, etc. – rather than to communicate and illuminate findings to other people. The literature is, by and large, unreadable. It’s all written in a kind of code, with inappropriate data in large amounts, and the storyline is becoming increasingly orchestrated by this need to publish. We all know it. We all suffer from it. I think the changes to the scientific enterprise have been inexorable and progressive. The deterioration has been so steady that people don’t really realise how much things have changed.

You wrote about the publication system in ‘The Politics of Publication’, criticising the attitude of the editors. At that time, you’d already been a journal editor for more than 20 years. Do you feel in some way responsible for how things have changed? Were you carried along by this movement?

I guess I should share some responsibility. But I did try to resist it. Development is an unusual journal because its editors are all professional scientists who are still working; most of us run full-time research enterprises of our own. Our perspective on science is different. When I started, there were hardly any young professional editors. Now, most of the journals are managed by professional editors, most of whom have chosen editing rather than research, or who couldn’t go on in research because they didn’t have enough competitive advantages. The power structure of scientific publication has moved more and more into their hands. They are partly to blame for what’s happened, they and those who try to measure everything.

Those who measure us are using publications as a means of assessment. I think measurement, assessment and evaluation lie at the heart of the problem. Once you start counting papers, scoring journals and measuring impact then the purposes of publication change.

What about the ‘Misallocation of Credit’ and the ‘Rank Injustice’ of the research system?

The article ‘Rank Injustice’ was to do with how credit is distributed in the scientific world. The basic rule is that credit always flows upwards. If you’re a student, your supervisor will get the credit. If you’re a group leader, your department head might get credit, for example, in the research assessment exercise for rating UK universities. You don’t get rewarded for having discovered something yourself. I think that has a poisonous effect. It encourages too many scientists to steal credit, to annex the discoveries of the young – to keep on top of the young people working for them, so that they can claim to have been involved and garner the credit for it.

It’s become so built-in that people think that if somebody does something on their own, there’s something slightly suspicious about it. A friend of mine went to a ‘big shot’ meeting, where the talks were mostly from people with large groups, presenting work from their groups. But one person in that meeting presented his own work. That evening, my friend overheard the big shots sitting around in the bar, trying to pour some kind of suspicion on this speaker; how could somebody do their own work? They said it removed the checks and balances, which you always have between students and their supervisors. I find that argument to be completely self-fulfilling ‘hokum’. It’s a way of making sure that what you do is somehow justified because, actually, your job as a supervisor is to educate, not to take credit.

In a better world, as my mentors Wigglesworth and later Crick taught me, one’s career was built on one’s own contribution. Wigglesworth helped us in the same way that any senior person should help apprentices. But this has all changed. The career of most scientists now depends on the success of their juniors. There’s a reward system for building up a large group, if you can, and it doesn’t really matter how many of your group fail, as long as one or two succeed. You can build your career on their success.

Does this diverge from the publication problem? Do we have two separate issues?

Yes, but they’re connected because you get credit for your publications. The pressure is very high on you to make sure you get your name on those publications. You have situations where there are, for example, two postdocs from different groups in a big institute – they meet and hatch a project together, do it, and it all looks very promising. Then, their supervisors, who really have nothing to do with the conception of the project, will get involved – they will put their names on things. The two people who actually did the work will be two junior authors who have to carry with them at least two senior authors – as a sort of baggage. Then look how it’s perceived by the world. It’s considered to be the work of the senior authors’ big groups. And this is a travesty of the truth. I’ve come across this quite often. Supposing I don’t put my name on one of my postdoc’s papers but this person has collaborated with another postdoc from another group. When the paper comes out, the only senior person on the paper is the one responsible for the other postdoc and my name doesn’t appear. Then when it gets looked at by bibliometricians and others, it is scored as if it’s come from the other group.

I find that very irritating because it isn’t the truth. So, progressively, one is rewarded for making sure that one’s name is on a paper even though one may have done next to nothing. Generally speaking, I don’t put my name on my graduate students’ or postdocs’ work, unless I have been actively involved. A while back, it wasn’t so weird but now it’s considered to be terribly odd. Also, of course, one suffers a bit because of the bibliometricians – if you’re not on the paper, you don’t get counted.

In ‘The Mismeasurement of Science’ you criticised the H-index. Is this the worst example of the trend to equate scientific publications with productivity?

The H-index is a measure of citations, not the number of papers. All citations count more or less equally in the H-index. I would take the view that citations are marginally better, when assessing the value of a paper, than adding up the impact factor of the journal in which the paper was published. At least, it means that if you publish a paper that other people want to cite, in any journal, you get credit for it through the H-index. So, it is a slight improvement.

I know that the English systems of measurement are going over towards citations as a way of assessing scientific productivity. But this is absolutely riddled with problems. For example, if you’re doing research in a small field then, even if everybody in the field cites your paper, you still won’t get many citations. But if you work in a big crowded field, you’ll get many more citations, particularly if you publish in a prominent journal. And this is independent of the quality of the work or whether you’ve contributed anything. This puts enormous pressure on the journals to accept papers that will be cited a lot. And this is also having a corrupting effect.

Journals will tend to take papers in medically-related disciplines, for example, that mention or relate to common genetic diseases. Journals from, say, the Cell group, will favour such papers when they’re submitted. At Development, we tried to resist this trend. We published papers dealing with small obscure fields, like flatworms. People published papers about flatworms in Development because they couldn’t publish them elsewhere. But they don’t get many citations and the impact factor of Development suffers. Then the people in Development’s head office would say we should have a higher impact factor and that we must be more careful about the kind of papers we’re accepting. We’ve got into a situation where the measurers drive the science, rather than the measurers being there to quantify the scientific effort or achievement.

Publications now have such a high value because of this number attached to them. With this number, not only do job prospects improve but also the chances of getting grant money. One of the solutions you’ve proposed calls for granting agencies to change their whole philosophy when judging the quality of scientists.

Yes, I made suggestions about what granting agencies should do. This may be the direction in which the Howard Hughes Medical Institute is moving. They’re now asking people to submit only a small number of publications for assessment from the previous five years. I think this is a tremendous leap forward because it will remove the pressure on scientists to produce large numbers of papers. This change will improve the quality of the scientific literature but it may make it less straightforward for young scientists to get recognised.

For a start, young people may not always get a paper because they may not, by bad luck or whatever, have contributed to one of the five papers being assessed; one that’s thought worth publishing by the head of the group. That doesn’t necessarily mean that they’re not so good but they can’t contribute to the assessment with a first author paper of their own. The single, simplest thing that the granting agencies could do is to look backwards, when possible, rather than forwards.

The system we have now is counter-productive, wasteful of time and energy. We get people to write a piece of fiction about what they’re planning to do. It’s a kind of intellectual exercise – sometimes it relates to what they actually do, sometimes it doesn’t. It’s a sort of game we have to play to get a grant. We put all this stuff down, we show that we are competent intellectually and technically. By the time the grant is awarded, maybe a year later, and you can finally start the research, everything has changed – we might be doing something else.

The Wellcome Trust is very good about this. They realise that scientists can’t predict what they’re going to do and they let people move away from what they’re actually funded for. Unfortunately, some of the other grant agencies consider it more like a contract, which is not what research is about. If you know what you’re going to find, you’re just not doing research.

There are many ways in which the granting agencies could change the system. One thing I’ve mentioned is about the shortness of the Fellowship. Both the postdoctoral fellowships and the grants are far too short. In order to save money, I guess they’ve reduced the period of grants but this is counter-productive. I discussed the consequences in my recent article, ‘Real Lives and White Lies in the Funding of Scientific Research’. I described what happens to young scientists when they get their postdocs, which are usually limited to two years. In that two-year period, they are expected to start what is often a new line of research, and to have produced and got published a paper in a major journal, by say, at the latest, 18 months, so that they can apply for another grant. Who can do that? They may need another postdoc to get somewhere but there are very few of those. They are really in a bind. I see this time and time again.

You cite your own experience of writing what was effectively your first grant application just a few years ago. As a staff scientist at the MRC, you didn’t need to apply for grants?

Wasn’t I lucky! It’s a much better way of funding science. If you want to fund researchers for a couple of years, you don’t want them to spend 30-40% of their time using all their intellectual and emotional energy looking for other grants. But that’s what the present system is doing to scientists and researchers. They haven’t got the emotional and intellectual energy left to concentrate on discovery.
I’m afraid you have to gamble with research. You have to give somebody enough money and enough peace of mind to get on with it. If at the end of five years they haven’t done much, then you end the grant. That’s the way to do it. To look backwards, to see what they’ve achieved and not worry about what they say they’re going to achieve because it is all fiction anyway.

You describe some of the advice you received when writing your first grant, that you shouldn’t tell the truth about what you’re really going to do?

They were wise. Your grant is going to be read by lots of people who are all specialists in your field. If you’re in a small field, you might well know who they are, but you’re telling them exactly what you’re planning.

What about a Code of Ethics? For example, you’re saying that for reviewers who are very unethical, who are stealing results and blocking publication, we need to be able to do something to control or punish them?

I think we need to do something to chastise and control people. Some kind of ‘police force’. Most scientists behave very well but people under pressure are tempted to take advantage of things they pick up. They may well go to meetings, for example, and learn something new from a competitor and be able to change what they are writing to put the new finding in. There’s a lot of this going on. At least people think so, and this helps generate an atmosphere of paranoia.
People are very defensive and unwilling to talk about what they’re doing, which means the whole purpose of the meeting, to share and help each other, is lost. People nowadays only talk about something that is just about to come out or has already been published. They daren’t talk about their new stuff. We can change that system by making people behave better.

There are a lot of organisations worldwide who deal with ethics, for example, COPE (the Committee On Publication Ethics), and the recent World Conference on Research Integrity in Singapore. From these meetings, they produce very sensible statements about how things should be done in science, and what should not be done. They are very well written. Various US universities and the NIH have their codes of ethics. These are also written down and carefully worded – but there’s nothing about enforcement. If some person feels their work has been plagiarised, that somebody has stolen something they have not yet published, where can they go? The only place is the civil courts. And this is very difficult and expensive. These aren’t really criminal offences, they are scientific and ethical offences. But there’s nowhere to go. So, instead of having all these organisations producing these finely-worded statements, they should put some teeth into them. One way might be to appoint a scientific ombudsman, who would have the power to name and shame.

I don’t think these organisations realise how powerful the Web is. In the old days, there was no way of shaming anyone in the public domain. But if there was an officially approved and valued ethical chief, like an ombudsman or a small committee, then if somebody had a really good case, it could be judged by that committee and the judgment could be put out on the Web. People would see that they get into trouble and that their reputation would suffer.

It’s quite the opposite at the moment: if you publish something, no matter how you’ve stolen it, no matter how you’ve obtained it, your reputation will be enhanced.

Didn’t this happen to you with the Axelrod group from Stanford University and their Cell paper about intercellular polarity signalling?

Yes, I felt that this paper had not made proper reference to our previous work, that they had essentially republished the most important of our findings without making it at all clear that we had published them four years previously. My job was not on the line and I was not subject to the pressures that many young people are under; that is, if they make a fuss they’ll worry about getting their next grant. So, we decided to be tough about it. With the help of other scientists not acknowledged in the paper we went to Cell. I asked them to publish a short review that would explain the history of this particular field. Cell refused to discuss it. They were very disdainful and refused to consider the possibility that there might be a problem. So we published our views in Current Biology.

We did something about it because I know from talking to other scientists that many people feel there is a growing irresponsibility with citations in journals, of not giving credit to others. There were a couple of articles about this matter in The Scientist magazine and elsewhere, and an online conversation with Jeff Axelrod in Current Biology that people can read.

But I think I was in a very strong position there. My complaint did not depend on anything that was unpublished. Anyone can now go and look at the two papers and make their own mind up as to how they judge our complaint. Are we right or not?

We should all get together and set up a little system of enforcement of these ethical principles. I think in any society, things don’t work without some sort of policing. It would also be a good way of spending some of the money these ethical organisations use without achieving very much.

In ‘Men, Women and Ghosts in Science’ you tackle the notion of men and women in science from a biological viewpoint. You say there are men and women, male brains and female brains, but that the actual characteristics underlying what we would identify as masculine qualities and feminine qualities can be fused in men and women in different proportions. Then you argue that the scientific system has been pushed over towards a very masculine, aggressive stance, where we’re encouraging people who are insensitive to others and aggressive. In fact, they’re nasty! Not only has this led to fewer women higher up the system but it’s actually making life very unpleasant for people lower down the system – students and postdocs – especially if they’re gentle people.

Yes, you put it very well. Essentially, it could be argued that you should encourage competitiveness if you have the view that creativity goes hand-in-hand with it. But there doesn’t seem to be much evidence of that. Look at people in the Arts or musicians. I don’t get the impression that many of the best need to be very aggressive. Creativity is not confined to science. My hypothesis is that creativity is fairly well distributed among individuals in a very unpredictable and variable way.

I think that we should have a system where we select for what we want. And what we want is people who make discoveries. In my opinion, science is not like some kind of an army, with a large number of people who make the main steps forward together. You need to have individually creative people who are making breakthroughs – who make things different. But how do you find those people? I don’t think you want to have a situation in which only those who are competitive and tough can get to the top, and those who are reflective and retiring would be cast aside.

I’ve been in research for so long now. I’ve talked to so many young people. I get to know them personally because I work on the bench myself. And I hear all the time that people get put off from continuing in science. Not because they’re unable but because they just don’t like it. Those people are often women but there are also many ‘gentle’ men who don’t like it. What we’re doing is telling people to be tough, to be pushy, to give self-congratulatory talks, to be confident. While those characteristics may be of value in certain walks of life, for example, if you want to be a soldier, they may not be what we want in scientists. I’m not saying it should be forbidden in science but I think there should be more room for people who have more gentle aspirations, who are more social, who understand other people better.

In that article I went over some thorny ground, which is constantly being debated, but it seems obvious to me that men and women are, ON AVERAGE (he emphasises), fundamentally, genetically and psychologically, distinct. Of course, there is a tremendous overlap between the sexes and stereotyping of individuals by their gender is neither objective nor correct. So, I think we need to think again about how we select people. This brings us back to the same old problem – people who get their names on other people’s papers, who annex credit from their students and get rewarded. These people are very often men, although there can be very tough, competitive women scientists as well. But the idea that politically correct people have, that all professions will one day have equal numbers of men and women is not only wrong, it’s silly.

There’s no reason to aspire to that aim. Individuals should do the kind of work they enjoy doing, that they’re good at. And this can lead to different proportions of men and women in the arts and sciences. How the gender numbers work out doesn’t really matter if we can have a society organised in such a way as to take advantage of “the qualities of people”.

Which brings us to the general problem of job security in science, because women who want to have children are heavily penalised by a system that is already very insecure. It’s hard enough for a man to get a job, let alone for a woman who wants to have a baby before she’s too old.

Quite right. The problem goes through society. Women are disadvantaged, both because of the babies that we want them to have and also because of their stronger instinctive tendency to care for people, not just babies. We should find room for these people. Some of them are very good at research. We shouldn’t have this system of measurement. We’re counting papers. We are measuring impact factors. We need to see beyond these silly measures. We should try to ask: Does this person contribute to the department in which she’s working? Has she made some discoveries? Will she be good to have back?

I talk to young scientists and I know about their anxieties – every minute of the day, they’re thinking: How can I get a paper, will I be the first author? Will I be able to get a postdoc with this paper? Is this journal good enough for me to get a postdoc?

And what comes next? You get a postdoc and…?

You get a postdoc for two years and, already after one year, you’re worried about what you’re going to do next. There’s no relaxation. You don’t realise how much this has changed. From my own work, I’ve published some 150 papers. The first 80 papers I published got accepted directly by the journals to which they were sent. Some had to be revised but all of them were accepted. And then there was an abrupt change. Suddenly, you started sending papers to journals because you thought they might get in there and that would be better. In the early days, you didn’t do that. You sent your paper to the journal that you thought was most appropriate for your paper. There was no impact factor.

A funny thing that tended to happen in the first part of my career was, when you found something that you thought was more interesting, you would write a very short Letter to Nature, in which you summarised the main thing in a way that somebody else could understand it. Nature was for the general reader in those days. When you got that accepted, then you would write a more detailed report about what you had done for a more specialised journal. People never do that nowadays. What they do is pack huge amounts of specialised material into a Nature Letter that becomes indigestible and compressed. They’ll get it in there if they’re lucky. It doesn’t matter if people don’t read it or hardly understand it. That’s not the point. The point is to get it in there.

This is what I mean about the deterioration and corruption of publishing practice. It has gone from a situation, which was not too bad, to one that is terrible. I’ve seen all this happen in the nearly 50 years I’ve been in science.

Are you optimistic for the next few years?

Not really. A friend told me that these pendulums always swing; that it will swing back one day, that there’ll be a change and there will be a move away from measurement. But, when you look at the way business management techniques have moved into public research agencies like the MRC, one just despairs. There is an enormous increase in bureaucracy – form filling, targeting, assessment, evaluations. This has gone right through society, like the Black Death! I’m not optimistic.

Science is such a wonderful thing to be doing. There are people who understand that. They will go on doing it and will see us beyond the short term measures we’re now subject to, I hope. But they are suffering due to the insecurity.
Many of them have trained for years to become research scientists. Some are very good, yet they’re looking down from the edge into an abyss. Some will succeed but most will fail. As for those who do succeed, I’m not sure that they will have such a good life – writing grants the whole time, sitting at the top of the pyramid.

Overall, what are likely to be the consequences if it continues like this?

The real quality and communicability of our work has deteriorated. The people who fund us will finally discover that. But I think that there is still great work going on in science. There’s a lot of privatisation of scientific research, some of which is more targeted and can be very useful, for example, in biotechnology. But the intellectual heart of research is sick because its main purpose is discovery. Illuminating our understanding of nature, that’s what it’s about. It’s not about producing a paper that nobody wants to read or understand. If we lose sight of that, then we won’t find out things so easily. We may stumble across things occasionally, as we’ve always done. But many young people just don’t see what science is for. Most of them are trying to get a paper.

We have to be ambitious. We have to find something that is worth telling other people about.

Interview: Jeremy Garwood

Last Changes: 08.29.2017