Bang for your buck: The need for impact assessment
Text by the Editorial Department

When the UK rolled out its performance-based research funding system in 1986, the country was facing economic turbulence, and growing constraints on public funding led to policies that called for greater accountability. The funding system, now called REF, did just that. It introduced accountability among universities by tying funding to how universities performed and also informed the strategic allocation of limited resources.  

In 2014, impact was introduced as a metric, with the aim of putting in place yardsticks to measure the effect of research outside academia. Before this, academia in the UK worked under the assumption that basic research would eventually benefit various aspects of society, and scientists were never required to produce evidence of that impact. With the introduction of the impact agenda, and with funding depending more on research impact than on research output, academia was compelled to showcase the value research has for society. It was widely believed that tying funding (worth around £2 billion per year, according to the REF) to demonstrated impact was a strong performance incentive for universities and researchers to contribute to a world-class research base in the higher education sector, and the only way to improve research excellence in the UK.

The REF defines impact as “an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia” (re.ukri.org). The phrase “beyond academia” is critical to understanding the thrust of the REF: this is distinct from academic impact, or the impact factor of journals, neither of which is relevant in REF terms. The main objectives of the REF are

 

  1. To inform the selective allocation of funding for research
  2. To provide accountability for public investment in research and produce evidence of the benefits of this investment
  3. To provide benchmarking information and establish reputational yardsticks, for use in the higher education sector and for public information. (re.ukri.org)

Qualitative versus quantitative 

The REF pioneered the evaluation of institutional performance through a combination of performance metrics and expert review. Hong Kong, Australia, Canada, and some Scandinavian countries have adopted similar frameworks for the evaluation of university performance, but what sets the REF apart is the high weighting given to the impact research has had outside academia (20% in REF 2014 and 25% in REF 2021) and its reliance on peer evaluation as a key method.

The REF’s use of subjective evaluation is consistent with the view that expert assessment should supplement quantitative indicators (Hicks et al., 2015). However, this approach has not found many takers. Denmark, Finland, Sweden, and Norway have stuck to quantitative indicators to decide funding allocation, mainly for reasons of cost, but also because they do not see the merit in tying funding to peer-based research evaluation. Sweden and the Netherlands, for example, allow their universities to run a research assessment exercise independently, with the help of internal panels of experts and with no funding implications attached to the assessment.

 

A global movement 

There is a global trend towards competitive funding, with countries using performance-based metrics to govern institutional funding. Hong Kong and Australia have taken the UK’s lead and introduced their own research impact frameworks to govern fund allocation to universities (Australia’s Engagement and Impact Assessment was introduced as recently as 2018). These evaluations are taking on a global scale and are helping establish a two-way relationship between universities and society. Universities are now investing in previously unheard-of roles like “impact officer,” or in entire teams dedicated to curating evidence of research impact. Universities are also expected to divert financial resources to impact-generating research and review how they communicate with society. This has led to impact support developing into a professional space, with protocols, training programs, conferences, and workshops dedicated to understanding the meaning and importance of impact.

A high REF rank can also boost the reputation of the university and, consequently, of research groups or individuals who have contributed strongly to their institution’s REF score. In a country like China, which is aggressively pushing its universities to pursue the agenda of becoming highly reputed institutions globally, a framework like the REF may be just what it needs.

 

The way forward 

The idea of universities proving research impact, and more specifically the UK’s version of it, has come in for some criticism. Critics have pointed out that the pressure to prove impact beyond academia takes the focus away from the research itself and narrows the choices for research. Others have taken issue with funding being dependent on something as subjective as the “importance” or “impact” of the research. Academics generally accept that the measurement of impact varies greatly between disciplines: can the impact of a medical breakthrough that could save lives ever be compared with that of a novel finding in the arts? A University of Leicester piece on impact nicely sums up the conundrum: how to evaluate the cultural impact of historical research, the societal impact of political research, or the economic impact of medical research is one of the biggest challenges associated with the impact agenda (University of Leicester 2020). It will be interesting to see how researchers, universities, funding bodies, and the think tanks behind these frameworks can work together to address these criticisms.

The consensus outside academia seems to be that scientific endeavours ought to reap benefits for society. The REF and similar frameworks ensure that universities are at least aligned with this view. With financial support for certain sectors of academia and education being curtailed (in 2015, for instance, Japan came in for criticism for slashing funding for the humanities and social sciences; The Guardian 2015), more countries will be compelled to adopt similar frameworks to inform the allocation of resources. What remains to be seen is whether these countries will implement a framework closely modelled on the REF or adapt a modified version that takes into account their socio-political and economic landscape. The UK spent £246 million on REF 2014, a figure that most low- and middle-income economies may balk at.

In the context of the UK, we also need to watch how Brexit affects the structure and implementation of the REF: will a financial drought spell doom for university funding, or will universities and funding bodies come together to find an innovative way out?

 

A quick guide to measuring impact under REF 

 

UK universities vying for funding need to send in their submissions under one of 34 subjects or units of assessment (UOAs) spanning disciplines like economy, society, culture, policy, health, environment, and quality of life. These submissions typically contain details on the quality of the journal articles, books, monographs, and other research outputs they have produced; case studies detailing the impact of their research on the outside world; and auditable evidence in the form of testimonials, survey results, financial data, or any of a wide variety of other ways of backing up the statements made in the case. Only a subset of a university’s research is expected to contribute to a case study: a small university might be required to submit fewer than 20 case studies, while a large one might have to submit over 100.

The submissions are assessed by an expert sub-panel for each UOA, working under the guidance of four main panels, which oversee the assessment to ensure the assessment criteria and standards are consistently applied.  

 

Source: www.ref.ac.uk/2014/ 

 

Drafting the case study 

REF submissions comprise two sections: Section A seeks details like the name of the institution, the staff involved, and the period when the impact occurred. Section B, which forms the meat of the case study, includes the following:

  1. A summary of the research impact (100 words)
  2. A brief description of the base research (500 words)
  3. Related literature (6 citations)
  4. Details of the impact (750 words)
  5. Evidence to corroborate the impact—these can be either published sources or statements from organizations/individuals (10 sources)

Universities are required to provide what the REF calls “additional contextual data,” which is used in post-assessment evaluations and is to be provided separately, outside the five-page limit.

The template is standardized—staff members involved, for example, need to be listed as opposed to their contribution being described in free text—and the REF office has prepared detailed guidelines on how the case study and accompanying evidence need to be provided, making the submission process simple yet detailed and watertight.
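As a rough illustration of the Section B limits listed above, a submissions team might sketch a simple checker like the one below. This is a hypothetical sketch only: the field names are informal labels of my own, not the official REF template headings.

```python
# Illustrative word limits for Section B, as described above.
# Field names are informal labels, not official REF headings.
WORD_LIMITS = {
    "summary_of_impact": 100,
    "underpinning_research": 500,
    "details_of_impact": 750,
}
MAX_RESEARCH_CITATIONS = 6      # related literature
MAX_CORROBORATING_SOURCES = 10  # evidence sources

def over_limit_fields(case_study):
    """Return the names of any word-limited fields that exceed their cap."""
    return [field for field, limit in WORD_LIMITS.items()
            if len(case_study.get(field, "").split()) > limit]

draft = {
    "summary_of_impact": "impact " * 120,     # 120 words: over the 100-word cap
    "underpinning_research": "research " * 400,
    "details_of_impact": "details " * 700,
}
print(over_limit_fields(draft))   # ['summary_of_impact']
```

A checker like this only flags overruns; the official guidance remains the authority on what each field must contain.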

 

Scoring 

Identifying the right case studies is essential, as it affects the score a university may receive. Jo Lakey, REF Delivery Director at King’s College London, sheds some light on the process and factors to be considered when narrowing down on case studies:  

There are some disciplines, like physics and math, where it’s more difficult to show evidence of impact outside academia. Impact accounts for 25% of the assessment score. If you have two case studies, they’ll be worth 12.5% each; if you have 10 case studies, they’ll be worth 2.5% each. During the assessment phase, each will be rated anywhere between one and four stars, or even be unclassified, and the scores will be put together to create the profile for the university.

Let’s say two case studies have been submitted under Unit of Assessment A and ten case studies under Unit of Assessment B. Both case studies under Unit of Assessment A get four stars, making its impact score 100% four stars. Under Unit of Assessment B, five of the ten case studies get three stars (50% three stars) and the other five get two stars (50% two stars). It’s safe to conclude that Unit of Assessment A, with two case studies, has done much better than Unit of Assessment B, with ten. The larger the unit of assessment, the more case studies you have to submit and the higher the chances of getting a range of scores rather than 100% four stars. That’s why you need to invest a lot of time preparing, making sure you’re submitting your best possible case studies.
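The arithmetic Lakey describes can be sketched in a few lines of Python. This is purely illustrative (the function names are my own, not part of any official REF tooling): each case study carries an equal share of the 25% impact weighting, and the star ratings are aggregated into a percentage profile.

```python
from collections import Counter

def case_study_weight(n_case_studies, impact_share=25.0):
    """Each case study carries an equal share of the unit's 25% impact weighting."""
    return impact_share / n_case_studies

def impact_profile(star_ratings):
    """Percentage of equally weighted case studies at each star level."""
    counts = Counter(star_ratings)
    total = len(star_ratings)
    return {stars: 100 * count / total for stars, count in counts.items()}

# Unit of Assessment A: two case studies, both rated 4*
print(case_study_weight(2))            # 12.5 (% each)
print(impact_profile([4, 4]))          # {4: 100.0}

# Unit of Assessment B: ten case studies, five 3* and five 2*
print(case_study_weight(10))           # 2.5 (% each)
print(impact_profile([3]*5 + [2]*5))   # {3: 50.0, 2: 50.0}
```

Running the two examples reproduces the profiles in the scenario above: 100% four stars for Unit A, and an even 50/50 split of three and two stars for Unit B.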

There’s no transparency around the scores, so it’s difficult to identify the high-scoring case studies. The score appears in your profile: if the profile has 100% four stars, it’s clear that all the submitted case studies got four stars, but if the profile has 25% four stars, 50% three stars, and 25% two stars, you don’t know which case study got which score.

 

References: 

  1. “REF Impact,” Research England, accessed October 27, 2019, https://re.ukri.org/research/ref-impact/
  2. Hicks, Diana, Paul Wouters, Ludo Waltman, Sarah De Rijcke, and Ismael Rafols. “Bibliometrics: the Leiden Manifesto for research metrics.” Nature 520, no. 7548 (2015): 429-431.
  3. “Measuring Impact,” University of Leicester, accessed October 27, 2019, https://www2.le.ac.uk/offices/researchsupport/impact/measuring-impact
  4. Dean, Alex. “Japan’s humanities chop sends shivers down academic spines.” The Guardian. Last modified September 26, 2015. https://www.theguardian.com/higher-education-network/2015/sep/25/japans-humanities-chop-sends-shivers-down-academic-spines

