The Science of Science Advice – “Interpretive Scientific Skills” for Politicians (7)

(January 4th, 2016) Many government decisions have a scientific element. But who decides what kind of scientific advice will be used, and how is it presented to politicians and the citizens they represent? Jeremy Garwood looks at the rise of the ‘Science of Science Advice’.

Scientists often complain about the apparent ignorance that politicians and administrators display when it comes to science. This inability to understand the nature of science contributes to misinterpretations of evidence, exposes governments to selective lobbying by commercial interests (among others) and results in poor public policy decisions.

Frustrated by this widespread scientific ignorance, three scientists published a list of 20 key concepts in Nature and the Guardian. The authors, zoologist William Sutherland, statistician David Spiegelhalter (both at Cambridge University) and ecologist Mark Burgman (at the University of Melbourne), suggest these 20 concepts should be part of the education of civil servants, politicians, policy advisers and journalists - and indeed “anyone else who may have to interact with science or scientists.”

“One suggestion to improve matters is to encourage more scientists to get involved in politics,” they say, but it is unrealistic to expect substantially increased political involvement from scientists. Another proposal is to “expand the role of chief scientific advisers, increasing their number, availability and participation in political processes” (see part 1 of this series for a comparison of models of science advice). However, the authors claim that neither approach effectively deals with the “core problem of scientific ignorance among many who vote in parliaments.”

Furthermore, trying to teach science to politicians would also appear to be a lost cause – “Which busy politician has sufficient time?” In practice, policy-makers almost never read scientific papers or books themselves. Instead, the science is interpreted for them by advisers or external advocates.

The immediate priority, then, must be to improve policy-makers’ understanding of “the imperfect nature of science”. To do this, they need to learn how to “intelligently interrogate experts and advisers, and to understand the quality, limitations and biases of evidence.” These so-called “interpretive scientific skills” are far more accessible than all the learning required to understand the fundamentals of science itself. In this respect, they can form part of the “broad skill set” of most politicians.

To help them acquire these interpretive scientific skills, Sutherland, Spiegelhalter and Burgman have provided a “simple list of ideas” that can help decision-makers to be clearer about how evidence can contribute to a decision, and “potentially to avoid undue influence by those with vested interests.” Although they are aware that other scientists might have drawn up slightly different lists, a wider understanding of these 20 concepts by society would already be “a marked step forward.”

Here are their 20 Tips for Interpreting Scientific Claims:

1. Differences and chance cause variation
2. No measurement is exact
3. Bias is rife
4. Bigger is usually better for sample size
5. Correlation does not imply causation
6. Regression to the mean can mislead
7. Extrapolating beyond the data is risky
8. Beware the base-rate fallacy
9. Controls are important
10. Randomisation avoids bias
11. Seek replication, not pseudoreplication
12. Scientists are human
13. Significance is significant
14. Separate no effect from non-significance
15. Effect size matters
16. Study relevance limits generalisations
17. Feelings influence risk perception
18. Dependencies change the risks
19. Data can be dredged or cherry picked
20. Extreme measurements may mislead
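Some of these concepts lend themselves to a quick numerical illustration. Take point 6, regression to the mean: when individuals are selected because they scored at an extreme, their next measurement tends to fall back towards the average, simply because luck does not repeat. The simulation below is an illustrative sketch (the scores and their spread are invented numbers, not data from the authors' article):

```python
import random

random.seed(42)

# Each "student" has an underlying skill plus random noise on each test.
N = 10_000
skill = [random.gauss(50, 10) for _ in range(N)]
test1 = [s + random.gauss(0, 10) for s in skill]
test2 = [s + random.gauss(0, 10) for s in skill]

# Select the top 10% on test 1, then look at the same students on test 2.
top = sorted(range(N), key=lambda i: test1[i], reverse=True)[: N // 10]
mean_t1 = sum(test1[i] for i in top) / len(top)
mean_t2 = sum(test2[i] for i in top) / len(top)

print(f"Top decile, test 1 mean: {mean_t1:.1f}")
print(f"Same students, test 2 mean: {mean_t2:.1f}")
```

The selected group's second-test average sits well below its first-test average, yet still above the population mean of 50 - no intervention occurred, only the fading of good luck. A policy evaluated by re-testing the worst performers would show a similar "improvement" for free.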

In broad terms, these concepts are concerned with how one approaches scientific research - what is being claimed and how good is the evidence provided to support those claims? They call for an understanding of variability and of the constant risk of personal bias, while addressing questions of experimental error and statistical interpretation. These points should be evident to most research scientists, but they are rarely stated together as general principles of critical reasoning. The next part of this series looks in more detail at the authors’ explanations for their choices.
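The base-rate fallacy (point 8) is perhaps the easiest of these to work through with concrete numbers. The short calculation below uses a hypothetical screening test with invented figures - they are assumptions for illustration, not values from the authors - to show why even an accurate test for a rare condition produces mostly false positives:

```python
# Invented figures for a hypothetical screening test.
prevalence = 0.01        # 1% of the population has the condition
sensitivity = 0.99       # P(positive test | condition)
false_positive = 0.05    # P(positive test | no condition)

# Among all people tested, who ends up with a positive result?
true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * false_positive
p_condition_given_positive = true_pos / (true_pos + false_pos)

print(f"P(condition | positive test) = {p_condition_given_positive:.1%}")
```

Despite the test being 99% sensitive, only about one positive result in six actually indicates the condition, because the 5% false-positive rate applies to the vastly larger healthy population. Neglecting the 1% base rate is exactly the error the authors warn decision-makers against.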

Jeremy Garwood

Photo: Fotolia/Gajus

Last Changes: 05.02.2016