UK’s Boast of Enhanced Research Excellence ‘Lacks Credibility’ (Part 1 of 2)

(November 16th, 2015) The UK’s periodic evaluation of its university research has triumphantly reported a doubling of its top-class research over the previous six years, but a reanalysis of the data has found a much smaller improvement.

In 2014, the UK’s ritual quality assessment of its university research – the Research Excellence Framework (REF 2014) – found that it had produced another ‘excellent’ return on its research investments. Compared to the rest of the world, the UK had a higher proportion of outstanding ‘world-leading’ research. But this was also what the UK had claimed after its previous Research Assessment Exercise in 2008 (RAE 2008), and some sceptical researchers have questioned the basis for these claims. Most notably, the UK reported that, over the six-year period between these two reviews, it had doubled its internationally best and most ‘excellent’ research in life sciences and medicine – an increase of more than 100%!

By any standards, this is a remarkable claim – to double, in just six years, the amount of research that ranks with the best produced anywhere on the planet. How did the UK achieve this? In their research paper, Steven Wooding et al. have investigated the basis for this finding and suggest that the UK has simply lowered its standard for ‘internationally excellent’ research.

Their study further questions the basis of the UK’s research evaluation system, first introduced in 1986. Originally called the Research Assessment Exercise (RAE), it was triumphantly re-branded as the Research Excellence Framework (REF) in 2014. Held every few years (1989, 1992, 1997, 2001, 2008), these are enormous administrative reviewing processes that must assess the relative ‘quality’ of the research performed by more than 50,000 UK academics. Universities and their researchers have a strong incentive to present their best research outputs for evaluation – a lot of money for future research is distributed according to the results of these evaluations. If a university department receives a very good overall score, it can claim a bigger proportion of the available funds. (For a more detailed discussion of the shifting and hard-to-define nature of ‘excellence’ in research and higher education, see the Lab Times essay ‘Excellence or Non-Sense – What is “real” Excellence?’ (LT 05/2015, p. 28-31).)

Inflated claims

The results of the 2008 Research Assessment Exercise (RAE 2008) boasted about the UK’s remarkably ‘excellent’ research. The official ‘key findings’ of RAE 2008 were triumphant: 54% of the submitted research was either ‘world-leading’ (17% at 4-star) or ‘internationally excellent’ (37% at 3-star). David Eastwood, Chief Executive of the UK’s main university funding agency (HEFCE), insisted that RAE 2008 had been “a detailed, thorough and robust assessment of research quality.” By producing peer-reviewed quality profiles for each submission, rather than the single-point ratings of earlier exercises, he claimed the assessment panels had been able to exercise “finer degrees of judgement.”

He further noted that “although we cannot make a direct comparison with the previous exercise carried out in 2001 (i.e. RAE 2001), we can be confident that the results are consistent with other benchmarks indicating that the UK holds second place globally to the US in significant subject fields.” Although the prime aim of the UK research scores is to generate rankings of UK universities for funding purposes, let’s not forget the world rankings – the USA at number 1, the UK a battling number 2! “The outcome shows more clearly than ever that there is excellent research to be found across the higher education sector,” said Eastwood, no doubt aware that the international reputation of UK universities is heavily promoted to attract fee-paying overseas students.

Well, having already proclaimed the excellence of their performance in RAE 2008, how many researchers would have predicted the ‘excellent’ improvement in ‘excellence’ announced in REF 2014? Yet the executive summary was unequivocal: “The results of the 2014 REF show that the quality of submitted research outputs has improved significantly since the 2008 RAE, consistent with independent evidence about the performance of the UK research base”.

Even more “excellent”

Nevertheless, the improvement in research ‘excellence’ reported by REF 2014 took some commentators by surprise. Had the UK’s university research really become so much better between 2008 and 2014?

The Council for the Defence of British Universities (CDBU) are convinced that current UK higher education policy “will soon do permanent and irreversible damage to a great university system”. Commenting on the latest REF results, they suggested that any critics of REF 2014 had been dismissed either as bitter “losers” (if they came from universities that lost out in the REF scores) or as “traitors” (if they came from “excellent” institutions). As such, it might be argued that the study by Wooding et al. was initiated by academics who could be labelled ‘traitors for doubting British success’ – both Shitij Kapur and Jonathan Grant come from King’s College London, which was arguably the biggest winner in REF 2014.

Another counting method – Bibliometrics vs Peer-Review
 
However, in their study, Wooding et al. say they simply wanted to understand how reliable and meaningful all of these expensive and time-consuming research evaluation exercises really are. To do this, they decided to test the claim that the most excellent UK research had doubled between 2008 and 2014 by re-evaluating the data against international, ‘independent’ measures of quality. In effect, they compared a bibliometric, computer-based evaluation with the ‘subjective’ human peer-review assessment. In this respect, they contrast the so-called ‘wisdom of the crowd’ approach of bibliometrics – which assumes citations equate to quality – with the peer-review process, which asks particular individuals to make particular judgements on research quality, often against particular criteria. Wooding et al. acknowledge that the use of bibliometrics to assess research quality is subject to criticism, but point out that peer-review also has a long history of criticism – in fact, it is often viewed as ‘a system full of problems but the least worst we have’. They stress that “bibliometrics are only one measure of scientific quality, and do not replace peer-review” but insist that if the two measures give widely differing results “it deserves comment and further attention.”

Their experimental approach was fairly straightforward. First, they obtained details of the research “outputs” submitted to both RAE 2008 and REF 2014 from their public websites. They then reanalysed these using various bibliometric indices to obtain an alternative measure of international performance over the same time frame. In particular, they concentrated on research articles assessed by Panel A (which covered the life sciences, including medical and allied health professions research). In REF 2014, this panel’s peer-reviewers had reported a doubling (a 103% increase) in its top-class – “world leading” or so-called “4-star” – research outputs between 2008 and 2014, rising from 11.8% to 23.9% of total submissions.
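For readers who want to check the arithmetic, the ‘doubling’ follows directly from the two reported shares. A minimal sketch in Python (the function name is ours, purely for illustration):

```python
def relative_increase(before: float, after: float) -> float:
    """Relative change between two values, expressed as a percentage."""
    return (after - before) / before * 100

# Share of Panel A outputs rated 4-star ('world leading') in RAE 2008 vs REF 2014
print(round(relative_increase(11.8, 23.9)))  # 103 – the reported 'doubling'
```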

Panel A received 56,639 research articles in RAE 2008 and 50,044 in REF 2014. Wooding et al. reassessed all of these RAE/REF articles using a bibliometric analysis of the ‘Web of Science’ database at the Centre for Science and Technology Studies in Leiden. This database contains information on some 42 million articles from over 18,000 journals and keeps track of more than 555 million citations. To compare like with like, the citation count of each article was compared to those of all other articles in the same field and from the same year of publication, allowing the worldwide “percentile” of that article to be determined. Self-citations were excluded.
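In outline, such field- and year-normalised ranking can be sketched as follows. This is a simplified illustration under our own assumptions – the actual Leiden analysis works on the full Web of Science data, with its own field definitions and citation handling:

```python
from bisect import bisect_left

def citation_percentile(citations: int, cohort: list[int]) -> float:
    """Worldwide percentile of an article within its cohort, i.e. all
    articles from the same field and publication year. Citation counts
    are assumed to already exclude self-citations."""
    ranked = sorted(cohort)
    below = bisect_left(ranked, citations)  # articles cited strictly less often
    return 100 * below / len(ranked)

# Hypothetical citation counts for one field-year slice of the database
cohort = [0, 1, 1, 2, 3, 5, 8, 12, 20, 55]
print(citation_percentile(20, cohort))  # 80.0 – beats 80% of its field-year peers
```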

Different results
 
In their bibliometric analysis, Wooding et al. found that both the RAE 2008 and REF 2014 submissions exceeded the worldwide average bibliometric score, and that research submitted to REF 2014 scored higher than that submitted to RAE 2008 at all top percentile levels. However, the increase was only about a quarter of that found by the peer-review panel – instead of a doubling (103%), the bibliometrics showed a 25% improvement for articles rated in the top 10% worldwide. Better, but not quite so ‘excellent’.

The second part of this report looks at why there was such a huge difference between the REF’s peer-review result and the bibliometric evaluation, and discusses how this casts yet more doubt on the value of such time-consuming administrative exercises and their far-reaching (and, many would argue, negative) effects on the research enterprise.

Jeremy Garwood

Picture: Fotolia/studiostoks
