"I am Happy to Find My Own Errors"

A conversation with John Ioannidis, Stanford School of Medicine, California, USA
Interview: Florian Fisch, Labtimes 04/2014




Photos (2): © SNF/Severin Nowacki

John Ioannidis has become something of a conscience for the scientific endeavour. For years, the medical doctor has been addressing the flaws and failures in science. So, one could be forgiven for expecting an embittered activist – but far from it!

If you want to know why most published research results are false, John Ioannidis is the right man to talk to. Ioannidis is professor of Health Research and Policy at Stanford School of Medicine and director of the Stanford Prevention Research Center. Born in 1965 in New York City, he was raised in Athens, Greece and studied medicine at the University of Athens Medical School, graduating in 1990. His research career then led him to Harvard Medical School, Johns Hopkins University School of Medicine and Tufts University Medical School, before he returned to Greece in 1999, chairing the Department of Hygiene and Epidemiology at the University of Ioannina until 2010.

When your Lab Times reporter learned that Ioannidis was to give a talk on the subject of "Funding research: Impact, Conformity and Reproducibility" at the Swiss National Science Foundation (SNF), he immediately knew he had to go along. Watching this good-humoured professor in his late 40s, with his mischievous look, tell the intelligentsia of the SNF that "the citation profile of academic technocrats in governments is dismal" was highly refreshing.

Ioannidis addresses the problems of science head-on: unfounded claims of significance, empty promises of innovation, funding of conformers rather than innovators, false positives and exaggerations that make studies irreproducible, and biases that distort statistical outcomes – he lays out everything that hampers scientific progress. His conclusion: "Funding practices can influence the legacy of the scientific endeavour".

Luckily, Ioannidis spontaneously agreed to give an interview to the unprepared but opportunistic Lab Times reporter. It was a truly fascinating experience.

Lab Times: You are studying biases of scientists. How do you leave out your own?

Ioannidis: [laughs] I am sure I have tons of biases in every single project that I do. Much of the time, the stimulation to probe into some of these problems comes from errors that I have made myself earlier on. We are all part of the same scientific process. What we do is not unrelated to science; it's part of our everyday scientific experience. There are two ways to think about biases: one is to try to forget about them and the other is to try to be sensitised to them. I prefer the latter – to amend them rather than sweep things under the carpet.

So, you're not disappointed when you discover your own biases?

Ioannidis: No, I am very happy. There are two types of errors. There are the ones that are recognisable, which means you can correct them in the future. This is great news. However, there are others that you cannot even recognise. This is bad news because you continue repeating the same error again and again.

Ideally, you enjoy finding errors in your own research. But in the end it tells you that something you have done before is wrong. It diminishes the value of your previous research.

That is not easy to take, is it?

Ioannidis: Why does it diminish the value of your previous research? If it was done with the best intentions and you thought as well as you could about it, then making a mistake is perfectly fine. Science is never perfect. The ideal study with the perfect results is even incongruent with science, which is an effort to improve, correct and come closer to more accurate estimates of reality. If you take a broad perspective, disappointment is part of the process.

Nevertheless, you draw quite a bleak picture of science. Discovering that most published research findings are false is clearly shocking. How do you evaluate science as an activity overall?

Ioannidis: I don't see the bleak message. Science is the best thing that has happened to humanity and it's the noblest endeavour that I can think of. The fact that it has this potential for falsification makes it so important. Without this potential, it would be dogma, politics or religion but certainly not science. It is exactly the fact that such great effort is invested, that it is so difficult to do and that it improves in an evolutionary way over time that really gives it value.
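
For readers unfamiliar with the arithmetic behind the claim that most published findings are false: it rests on the positive predictive value of a "significant" result, which depends on the prior odds that a tested hypothesis is true, the study's statistical power and the significance threshold. The sketch below, in Python, uses illustrative numbers of our own choosing, not figures from the interview.

```python
# Minimal sketch of the positive-predictive-value argument.
# All numeric inputs below are illustrative assumptions.

def ppv(prior_odds, power, alpha):
    """Probability that a statistically significant finding is true.

    prior_odds: ratio of true to false hypotheses being tested
    power:      probability of detecting a true effect (1 - beta)
    alpha:      significance threshold (false-positive rate)
    """
    true_positives = power * prior_odds
    false_positives = alpha  # per unit of false hypotheses tested
    return true_positives / (true_positives + false_positives)

# A field testing mostly long-shot hypotheses (1 true per 10 false),
# with low power and the conventional 5% threshold:
print(ppv(prior_odds=0.1, power=0.2, alpha=0.05))   # ~0.29
# Better-motivated, well-powered studies fare much better:
print(ppv(prior_odds=0.5, power=0.8, alpha=0.05))   # ~0.89
```

Under the first set of assumptions, fewer than a third of "significant" results would reflect true effects – which is the sense in which most published findings can be false without any misconduct being involved.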

This is difficult to see for outsiders...

Ioannidis: If we try to convey a picture of science as being related to impressive discoveries, successful all the time, bringing major progress, getting rid of cancer and reaching out to the galaxies, people will get the sense that everything is so easy and that science is omnipotent. This is very unrealistic. Sometimes, despite years and years of effort, nothing emerges. In reality, the effort that led to nothing and the one that led to the Nobel Prize belong to the same family. They all share the same glory and satisfaction.

But science communication should be honest. If you believe that science is the best thing that has happened to humanity, you also need to ensure that the public keeps funding it. Often, the public doesn't share your enthusiasm but wants to see tangible results instead.

Ioannidis: I agree. This is tricky. There are again two paths one could try to follow. One is to try to promise that research will deliver. You give me money and I will give you back more money. I have seen that thinking being adopted by leading scientists in big scientific agencies, who are under a lot of pressure trying to justify their activity to politicians, the public or the taxpayers. And it is definitely true that the entire scientific enterprise is cost-effective in the long run. But I am a little worried that when we enter this type of justification for science, we will run into unethical competition. There will be many other endeavours that will make the case with spurious data arguing it is better to invest in what they do. Take sports, for example. They make even more money, get more visibility and have a bigger impact in the media. Scientists are not good at this game and blanket promising is incongruent with scientific thinking. We want to be cautious and critical. We want to avoid being misled and fooled. We would be abandoning what we are good at to fight a different discipline on a different type of terrain. This is problematic.

What can we do instead?

Ioannidis: We should take the most honest line and make a case that the public and government should continue to fund research generously because we really have no other way of understanding what's happening in and around us. We have to make a case that this is very difficult. We cannot promise the cure to cancer but we should say, "This is very difficult. Thousands of the brightest people have been working on it for years. We are making progress and to be honest, we don't know where the next big progress is going to happen." Otherwise the public will notice sooner or later that the cure for cancer is not going to be found in two years.

The problems of science, like the current reproducibility crisis, are deeply ingrained in the scientific culture and extremely hard to change. While everybody agrees with you in principle, not many agree on how to solve them. What's your approach?

Ioannidis: I wouldn't be so pessimistic about it. Many scientific fields do find ways to solve their problems with efficiency and reproducibility. There are different stages of tackling a problem: first, realising that there is a problem; then identifying how big it is and how it manifests itself; and finally working out what causes it and how to get rid of it. Things can be done and many fields have taken steps in the right direction. Sometimes the solution is more replication. In other cases, replication is made a condition sine qua non for publication of certain types of results.

Lately, it seems that we are still far from replication being a condition sine qua non for publication...

Ioannidis: Take genetics, for example. If a genetic association study hasn't been replicated extensively by multiple groups, papers will not be accepted by the major journals. They have very stringent criteria for significance. Many other fields are adopting more transparent standards. Measures, protocols and reporting are standardised. Dozens of study reporting guidelines have been widely adopted, even within a few years. Many journals are adopting policies of transparency, data sharing and openness. Now, there is routine registration of protocols for clinical trials that was unheard of ten years ago. Registration is indeed a sine qua non for publication of clinical trials in any major journal.
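
To give a sense of what "very stringent criteria for significance" means in this field: genome-wide association studies conventionally require p < 5×10⁻⁸, which corresponds roughly to a Bonferroni correction of the usual 5% threshold for about a million independent tests. The small calculation below is our own illustration, not part of the interview, and the figure of one million independent tests is a common rule of thumb rather than an exact count.

```python
# Rough illustration of the genome-wide significance threshold as a
# Bonferroni-style correction for the number of variants tested.
alpha = 0.05                    # conventional single-test threshold
independent_tests = 1_000_000   # assumed number of independent tests (rule of thumb)
bonferroni_threshold = alpha / independent_tests
print(bonferroni_threshold)     # 5e-08, the widely used genome-wide significance level
```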

Ok, the guidelines are there. In reality, unregistered trials still get published. Even the International Committee of Medical Journal Editors (ICMJE) admits that the problem is yet to be solved.

Ioannidis: It can always be improved. The top journals have adopted the principle and are following it. Of course, there are hundreds of journals ready to publish a trial without registration but they have much less influence. I agree that registration is not the end of the story. Even when you have a registered protocol, maybe the protocol is incomplete, the analysis will be distorted or the results will be non-replicable in other ways. It still is a major step forward because in the past we did not even know about the existence of certain trials. Now, if we see that some get published, we can ask how many we haven't seen yet. If we could manage to get a broader picture of what is happening, maybe some of the good practices could be adopted in other fields to achieve multiplicative impact.

What do you think of publishing the methods instead of the results?

Ioannidis: It would definitely make sense to put more emphasis on methods. Currently, many journals publish reduced versions of methods in fine print that make it very difficult to understand what exactly has happened. It would be useful to improve the transparency of methods so that others can understand exactly what was done at each step. Many of the reporting standards are taking care of that. They explicitly ask for information on the major aspects of the methods for each type of study design. But results are still very important. I would argue that results should be in the public domain and people should be able to see them.

But aren't the methods key when judging the value of a publication? The results could be released to some database after a project has been finished.

Ioannidis: The methods are certainly more important than the results. Some journals have said that they are willing to accept submission of protocols. People may submit their protocols and get pre-decisions on publication based on the protocol alone. The journals that have offered to do this found themselves in a difficult position. The Lancet, for example, was one of the first to adopt this practice but realised that some of the results were eventually not interesting, considering its impact factor of 35.

For the 99 percent of journals just wanting to publish good science, this is a very workable solution. However, it's still possible for the methods to seem okay while the conduct of the study may not be so great. So, I want to be a little bit cautious. It also depends on the type of research. There is research where the methods can be explicitly anticipated, like a computer code that can be run. Other research proceeds by exploratory iterations that cannot be fully anticipated. Halfway down the road, you have to improvise and change direction. This does not necessarily mean that this is bad research. What matters is to be transparent about what has happened and not pretend that this convoluted path was pre-specified in one's mind right from the beginning.

You said that funding agencies should accept that most effects in biology are small. That sounds extremely honest. Isn't that a problem for the whole of biology? If effects are small, are they really worth studying?

Ioannidis: If it turns out that nature is full of small effects, yes certainly.

Are the small discoveries still meaningful for our lives?

Ioannidis: Information is meaningful, no matter how small the effects are. As long as it is trustworthy information, that's what it is. However, applying the information to change our lives is different. Most of the time we shouldn't make any changes to our lives just because of some new discovery of some small effect. I don't see what should be bad about this. It would be horrible if there were a zillion things to have to change in our behaviour, or if an average healthy person had to take one million different pills to improve their health.

When I read about research results I often think: So what?

Ioannidis: Very nice question.

Is it still worth pursuing this science?

Ioannidis: Science is worth pursuing irrespective of whether the effects are big or small. I get the impression that most effects are small. Maybe that even makes sense in biology. If biology were composed of huge effects, maybe we would be monsters, very uneven beings. We have very concrete equilibria and soft differences in evolution. Documenting these is perfectly fine. If this is how it looks, then we have to be honest, not do anything but sit back and say: interesting!

If you had the chance to redesign how the European Research Council (ERC) decides who gets funded, what would it look like?

Ioannidis: I think the ERC is doing a great job the way it is now. Clearly, compared to its competitors, mostly national funding agencies, it does much, much better. It is geared towards selecting excellence, innovation and the best people, and it tries to have the best possible panels to appraise that. My personal bias is that I would like to obtain some experimental evidence on research funding processes. I feel uneasy with the fact that we're funding science without having any science about how to do this. Isn't that a paradox? We want science about everything around ourselves but when it comes to appraising science we don't want scientific methods or experiments. I would only suggest that leading funding agencies should consider experimental studies comparing different modes of appraisal.

Isn't that problematic? We can easily conduct science on things we can measure, but those are not necessarily the important factors. What we really want to fund is qualitatively good research. Citation figures only give us a proxy for quality, while the judgement of quality is entirely subjective.

Ioannidis: I am not sure whether I would agree with that. For any scientific question, the issue is to have rigorous outcomes. For example: Do we have measures for pain that are good enough? We can just ask the patients how much pain they feel. We can also ask physicians how much the patients are screaming. Is that objective? Maybe we should measure the nerve impulses in all pain fibres. There is always a surrogacy issue: we are measuring something that may not be the most concrete or complete outcome of what we want to measure. But I would argue that we have outcomes that we can measure. Citation impact, quality, reproducibility, sharing and translation are things we can measure. I mean, papers and citations clearly are measurable. You can have age-adjusted indices or co-authorship-adjusted indices and many other fancy metrics. Whether someone is publishing data or keeping it in the file drawer, registrations of trials and translations to application are all measurable. You have to wait some years or measure a surrogate outcome earlier on. I am not saying that these measures capture everything. But unless we start thinking about this and running the studies, we will not be able to identify the best possible metrics and improve upon them. It's an iterative learning process.
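
As an illustration of the kind of "age-adjusted" citation metric mentioned above, the sketch below computes the well-known h-index and a simple age adjustment, the m-quotient (h-index divided by years since first publication). The example data are invented, and Ioannidis does not endorse any particular metric in the interview.

```python
# Sketch of two simple, measurable citation metrics: the h-index and the
# age-adjusted m-quotient. Example citation counts are illustrative only.

def h_index(citations):
    """Largest h such that at least h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def m_quotient(citations, years_active):
    """h-index divided by academic age (years since first publication)."""
    return h_index(citations) / years_active

papers = [120, 48, 33, 20, 12, 9, 5, 3, 1, 0]  # citations per paper (invented)
print(h_index(papers))           # 6
print(m_quotient(papers, 15))    # 0.4
```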

That all sounds nice. But how do you measure the quality factor?

Ioannidis: There are ways to appraise quality. We should ask: what are the hallmarks of a good study? No one would contest that randomisation and blinded assessments are crucial in animal experiments – yet most animal studies still ignore them. I would feel better if one could really think about the rules rather than trust that an expert panel, which often includes scientists with minimal impact or failed quality standards in their own work, selects the best people. It doesn't sound very scientific to me.

Some people say that there are too many bad scientists and that we would do better by cutting their number by half.

Ioannidis: I've heard that, too. This is a dangerous intervention. Depending on who decides and where we cut, it can be a real mess. Science is growing without a masterplan but some rational interventions do happen as well. At least, this should be subjected to studies. What happens if you try to strengthen a more credible core? We don't know. But it doesn't sound right to me to just cut the number of scientists in half by some arbitrary dictatorial selection. [laughs]

You ask for the right incentives. Where would you put them?

Ioannidis: Incentives appear at all levels of scientific and academic currency: publication, funding and promotion. You only need to set the right ones. If you ask for statistically significant results for publication, you will get statistically significant results. If you ask for reproducible research to get funding, people will generate reproducible research. If scientists get promoted because they share their data, they will share their data. [laughs] They would be making phone calls at midnight saying, "I want to share my data."

It seems to me that these changes don't happen because the people setting the incentives belong to the same crowd as the people following them.

Ioannidis: Well, the crowd is made of people. If the scientific community agrees that these criteria are important, then they should reward the scientists following them.

Your research often contains heavy mathematics that is not easy to understand for average biologists like me. Is there a way to make it more accessible?

Ioannidis: I think that there could be simplified versions. I enjoy both mathematical reasoning in terms of theory and empirical evaluation of hypotheses. Some of the messages are easier to convey to a wider public than others. I have been pleasantly surprised by the level of understanding that these issues have achieved. To me, many seemed esoteric without the potential to reach a wider sphere. But apparently, there is a lot of interest both within science and in the rest of society.

Do you get enough PhD students to work with you?

Ioannidis: I have lots of brilliant people who come to me physically and electronically and want to collaborate on some of these ideas. There are thousands of people around the world that I feel are part of my scientific team. It's a virtual lab scattered around the world, different from that of many scientists who know that their lab is on the third floor, has four benches, four PhD students and assistants. I am really humbled by the number of people who have approached me to brainstorm on different projects. It's an opportunity to learn from them, as many of them come from fields that I am not familiar with. They have different practices and problems and they have thought differently about overcoming them.

Have your findings changed the way you do your own research?

Ioannidis: Absolutely, yes. There have been striking changes in almost everything that I do. In my career, I have usually had no clue where I would be in five years, what I would be doing then and how. One has to be receptive and responsive to new ideas and possibilities. This is one reason why I feel uneasy about modes of appraisal where you ask scientists to tell you exactly what they will deliver five years from now. Some types of projects may be amenable to this, for example, when you have a randomised clinical trial. But many other fields are so alive and vibrant, and so many interesting ideas arise that you don't want to abandon them, especially ideas that bridge different disciplines. If you could combine plant science with astrophysics, that would be wonderful. The question is how to do it.




