Earlier this month, the British Academy, the UK's national academy for the humanities and social sciences, launched an innovative process for awarding small research grants. The Academy will use the equivalent of a lottery to decide between grant applications that its review panels judge to be equal on criteria such as the quality of the research methodology and the study design.
The use of randomization to decide between grant applications is relatively new, and the British Academy joins a small group of funders piloting it, led by the Volkswagen Foundation in Germany, the Austrian Science Fund and the Health Research Council of New Zealand. The Swiss National Science Foundation (SNSF) has arguably gone the furthest, deciding at the end of 2021 to use randomization in all tie-break cases across its entire funding portfolio of around 880 million Swiss francs (US$910 million).
Other funders should consider following in these footsteps. A study by the Research on Research Institute in London shows that randomization is a fairer way of awarding grants when applications are too close to call (see go.nature.com/3s54tgw). This would go some way towards allaying concerns, particularly among early-career researchers and those from historically marginalized communities, that grants awarded through peer review lack fairness.
The British Academy/Leverhulme Small Grants Scheme awards around £1.5 million (US$1.7 million) each year in grants of up to £10,000 each. Despite their relatively small size, these grants are valuable, especially for early-career researchers. Academy grants can be used only for direct research costs, and small grants are typically used to fund conference travel or to buy computing equipment or software. Funders also use them to identify promising research talent for future (or larger) programmes. For these reasons and more, small grants are competitive: the British Academy says it can fund only 20-30% of applications in each funding round.
The academy's problem is that its grant reviewers judge that twice as many applications pass the quality threshold as it has the resources to fund. So it is forced to make decisions about whom to fund and whom to turn down, a process prone to human bias. One way to reduce unfairness is to decide who gets funded by entering tied applicants into a lottery. The solution is not perfect: studies show that biases still exist when grants are reviewed1,2. But biases, such as favouring more experienced researchers, people with recognizable names or people at better-known institutions, are more likely to creep in and influence the final decision when applications are too close to call.
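The tie-break mechanism described above can be sketched in a few lines of code. This is a hypothetical illustration, not the British Academy's or SNSF's actual procedure: the scoring scale, the `award_grants` function and the example applicant names are all invented for the sketch. The key idea is that whole score tiers that fit within the budget are funded outright, and a random draw is used only among applications tied at the funding cutoff.

```python
import random

def award_grants(applications, budget, seed=None):
    """Fund clearly top-ranked applications outright; break ties at the
    funding cutoff with a random draw.

    applications -- list of (name, score) pairs from peer review
    budget       -- number of grants available
    seed         -- optional seed, so a draw can be audited/reproduced
    """
    rng = random.Random(seed)
    # Rank applications by review score, highest first.
    ranked = sorted(applications, key=lambda a: a[1], reverse=True)
    funded = []
    i = 0
    while i < len(ranked) and len(funded) < budget:
        score = ranked[i][1]
        # Collect the whole tier of applications sharing this score.
        tier = [a for a in ranked if a[1] == score]
        i += len(tier)
        slots = budget - len(funded)
        if len(tier) <= slots:
            funded.extend(tier)                     # tier fits: fund all
        else:
            funded.extend(rng.sample(tier, slots))  # lottery among the tied
    return funded

# Example: five applications competing for three grants.
apps = [("A", 9), ("B", 8), ("C", 8), ("D", 8), ("E", 5)]
winners = award_grants(apps, budget=3, seed=42)
# "A" is funded outright; two of B, C and D win the tie-break draw.
```

Note that the randomness only ever applies within a single score tier, so an application can never lose out to one that reviewers rated lower.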
It's good to see research-based innovation in grant-making: even a decade ago, it was highly unlikely that lotteries would have entered the conversation. That they have done so now is thanks in large part to research, particularly insights from studies of research funding. Funders need to monitor the impact of their changes, specifically assessing whether lotteries have increased applicant diversity or affected reviewers' workloads. At the same time, researchers (and funders) need to test other models for awarding grants. One such model is what researchers call "egalitarian" funding, in which grants are distributed more evenly and less competitively3.
Innovation, testing and evaluation are all critical to reducing bias in grant awards. Using lotteries to decide tie-break cases is a promising start.