The Science of Science Advice – A World of Variability and Potential Error (8)

(January 7th, 2016) Many government decisions have a scientific element. But who decides what kind of scientific advice will be used, and how is it presented to politicians and to the citizens they represent? Jeremy Garwood looks at the rise of the ‘Science of Science Advice’.

The 20 Tips for Interpreting Scientific Claims (see part 7) begin with the observation that the real world varies unpredictably - “Science is mostly about discovering what causes the patterns we see. Why is it hotter this decade than last? Why are there more birds in some areas than others? There are many explanations for such trends, so the main challenge of research is teasing apart the importance of the process of interest from the innumerable other sources of variation.”

But as the authors, Sutherland, Spiegelhalter and Burgman, explain, this is by no means easy. There are frequently limits to the accuracy and reliability of measurements – “practically all measurements have some error.” In some cases, the measurement error might be large compared with real differences. Thus, if you are told that the economy grew by 0.13% last month, there is a moderate chance that it actually shrank. Moreover, experimental design or measuring devices may produce “atypical results.”
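To see how measurement error can swallow a headline figure, here is a minimal sketch in Python. The standard error of 0.2 percentage points is purely illustrative (the article gives no error figure):

    from statistics import NormalDist

    # Illustrative only: a reported growth figure and an assumed
    # measurement error (standard error), both in percentage points.
    reported_growth = 0.13
    standard_error = 0.20  # hypothetical value, for illustration

    # If the true value is normally distributed around the reported one,
    # the probability the economy actually shrank is the mass below zero.
    p_shrank = NormalDist(mu=reported_growth, sigma=standard_error).cdf(0)
    print(f"P(economy actually shrank) = {p_shrank:.2f}")  # about 0.26

With an error of that size, roughly a one-in-four chance remains that the true figure was negative.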

And because studies that report “statistically significant” results are more likely to be written up and published, the scientific literature tends to give an exaggerated picture of the magnitude of problems or the effectiveness of solutions. This is why it is usually better to accumulate evidence across multiple studies before judging what is really happening.
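A toy simulation illustrates this distortion, sometimes called the “winner's curse”. The true effect, noise level and sample size below are all invented for illustration, not taken from the article:

    import random
    from statistics import NormalDist, mean

    random.seed(1)
    true_effect, sigma, n = 0.2, 1.0, 20          # assumed values
    z_crit = NormalDist().inv_cdf(0.975)          # two-sided 5% threshold, ~1.96

    published = []
    for _ in range(10_000):                       # 10,000 simulated small studies
        sample = [random.gauss(true_effect, sigma) for _ in range(n)]
        estimate = mean(sample)
        z = estimate * n ** 0.5 / sigma           # z-test with known sigma
        if z > z_crit:                            # only "significant" studies published
            published.append(estimate)

    print(f"true effect:           {true_effect}")
    print(f"mean published effect: {mean(published):.2f}")  # roughly 0.5

Because only the studies that happened to overshoot clear the significance bar, the published average ends up well above the true effect.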

There are also common mistakes that can be avoided. For example, being aware that correlation does not imply causation - “It is tempting to assume that one pattern causes another. However, the correlation might be coincidental, or it might be a result of both patterns being caused by a third factor – a 'confounding' or 'lurking' variable. For example, ecologists at one time believed that poisonous algae were killing fish in estuaries; it turned out that the algae grew where fish died. The algae did not cause the deaths.”
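A small simulation makes the same point. Here a hypothetical lurking variable (call it nutrient load) drives both algal growth and fish deaths, producing a strong correlation between two things that do not cause each other; all names and numbers are invented:

    import random
    from statistics import correlation  # Python 3.10+

    random.seed(0)
    # Hypothetical confounder, e.g. nutrient load in an estuary.
    nutrients = [random.gauss(0, 1) for _ in range(1_000)]

    # Both outcomes are driven by the confounder, not by each other.
    algae      = [z + random.gauss(0, 0.5) for z in nutrients]
    fish_kills = [z + random.gauss(0, 0.5) for z in nutrients]

    # A strong correlation appears although algae do not cause the deaths.
    print(f"corr(algae, fish_kills) = {correlation(algae, fish_kills):.2f}")  # ~0.8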

Extreme patterns in data are also likely to be “anomalies attributable to chance or error.” And patterns that are found within a given range do not necessarily apply outside that range. “Thus, it is very difficult to predict the response of ecological systems to climate change, when the rate of change is faster than has been experienced in the evolutionary history of existing species, and when the weather extremes may be entirely new.”

Beware the “base-rate fallacy”: the ability of an imperfect test to identify a condition depends upon the likelihood of that condition occurring (i.e. the base rate). For example, screening for a disease with a highly accurate test will nevertheless turn up more false positives than real cases if the disease is very rare – if only 1 in 10,000 people have the disease and the test is “99% accurate”, then screening 10,000 people will still wrongly flag about 1% of the 9,999 disease-free people (roughly 100 false positives) in addition to the one true positive.
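Spelled out in code, reading “99% accurate” as a 1% false-positive rate among the disease-free (and, for simplicity, assuming the single true case is detected):

    population  = 10_000
    prevalence  = 1 / 10_000      # base rate: 1 person in 10,000 has the disease
    specificity = 0.99            # "99% accurate": 1% of healthy people test positive
    sensitivity = 1.0             # simplifying assumption: the true case is caught

    sick    = population * prevalence              # 1 person
    healthy = population - sick                    # 9,999 people

    true_positives  = sick * sensitivity           # about 1
    false_positives = healthy * (1 - specificity)  # about 100

    ppv = true_positives / (true_positives + false_positives)
    print(f"false positives: {false_positives:.0f}")
    print(f"chance a positive result is real: {ppv:.1%}")  # about 1%

So even with a “99% accurate” test, a positive result in this population is almost always wrong.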

When designing experiments, controls are essential because “without a control, it is difficult to determine whether a given treatment really had an effect.” Similarly, randomisation avoids bias: experiments should, wherever possible, allocate individuals or groups to interventions randomly.
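A minimal sketch of random allocation, with a hypothetical cohort of 20 subjects:

    import random

    random.seed(42)
    subjects = [f"subject_{i:02d}" for i in range(1, 21)]  # hypothetical cohort

    # Shuffle, then split: every subject is equally likely to land in either
    # arm, so systematic differences between groups arise only by chance.
    random.shuffle(subjects)
    treatment, control = subjects[:10], subjects[10:]

    print("treatment:", treatment)
    print("control:  ", control)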
 
The use and abuse of statistics

They also point to confusion about the statistical significance of results. Significance is significant – “Expressed as P, statistical significance is a measure of how likely a result is to occur by chance. Typically, scientists report results as significant when the P-value of the test is less than 0.05 (1 in 20)”. But they also caution that the lack of a statistically significant result (say a P-value > 0.05) does not mean that there was no underlying effect: it means that no effect was detected - “A small study may not have the power to detect a real difference.”
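A quick calculation shows why a small study can miss a real effect. The effect size and noise level below are assumed; the code computes the power of a simple z-test at several sample sizes:

    from statistics import NormalDist

    effect, sigma, alpha = 0.3, 1.0, 0.05         # assumed: a modest real effect
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96

    # Power: the probability of detecting the effect (one-sided rejection
    # region, for simplicity) at a given sample size n.
    for n in (10, 50, 200):
        se = sigma / n ** 0.5
        power = 1 - NormalDist().cdf(z_crit - effect / se)
        print(f"n = {n:3d}: power = {power:.0%}")  # ~16%, ~56%, ~99%

At n = 10 the study detects the real effect only about one time in six, so a non-significant result says very little.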

However, as with the other points on this list, one could easily say more. For example, as David Colquhoun recently discussed in Lab Times, many scientists overstate the significance of their results based on P-values < 0.05. He notes that “if you use P=0.05 to suggest that you have made a discovery you will be wrong at least 30% of the time” and that if your experiments are “underpowered, you will be wrong most of the time”. To really keep the false discovery rate below 5%, “you need to use a three-sigma rule”, i.e. insist on P < 0.001.
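Colquhoun's figure follows from straightforward false-discovery-rate arithmetic. The sketch below uses the kind of numbers his examples assume: 10% of tested hypotheses are real and studies have 80% power:

    # Assumed inputs, in line with the examples Colquhoun discusses:
    prior = 0.10   # fraction of tested hypotheses that are actually true
    power = 0.80   # chance a real effect yields P < alpha
    alpha = 0.05   # significance threshold

    true_discoveries  = prior * power        # real effects that get detected
    false_discoveries = (1 - prior) * alpha  # true nulls that slip through

    fdr = false_discoveries / (true_discoveries + false_discoveries)
    print(f"false discovery rate = {fdr:.0%}")  # 36%: "wrong at least 30% of the time"

With power held fixed, lowering the threshold to P < 0.001 brings the same arithmetic down to a false discovery rate close to 1%.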

All of this may seem obvious, but there are worries that politicians overestimate the impartial objectivity of researchers – “Scientists have a vested interest in promoting their work, often for status and further research funding, although sometimes for direct financial gain. This can lead to selective reporting of results and, occasionally, exaggeration. Peer review is not infallible: journal editors might favour positive findings and newsworthiness. Multiple, independent sources of evidence and replication are much more convincing.”

Problems of better science advice and political ignorance

This is why politicians with “a healthy scepticism of scientific advocates might simply prefer to arm themselves with this critical set of knowledge”, i.e. their 20-point list. Burgman told the Guardian that politicians “broadly speaking” struggle to critically examine scientific advice. “Politicians are smart, strategic people, they just aren’t sufficiently cautious of scientific advice. They are either a little intimidated by it, or they ignore it. There’s a frustrating gap, so policy makers need skills to enable them to listen to the science and probe it for reliability. Some scientific advice is accepted unquestioningly but then other advice - because of the broader political landscape - is ignored completely. Science is either considered august and reputable or something to be dismissed because it’s done by a bunch of boffins. We need a middle ground where politicians can make an analysis and then decide what’s best.”

Shortly after the publication of the scientists’ list of 20 things about science that they said policy-makers should know, a top policy-maker duly responded with a list of the “Top 20 things scientists need to know about policy-making”, discussed in the next part of this series.

Jeremy Garwood

Photo: Fotolia/Gajus