Last night, the Times Higher Education ‘Blue skies ahead?’ debate brought together government science minister Lord Drayson, a panel of young scientists (including Oxford Physics’s own Suzie Sheehy) and the Twittersphere to discuss the future of UK research funding.
The discussion was fairly unfocussed and more than a little ranty, as a handful of disgruntled scientists and teachers proceeded to lambast STFC, the new ‘impact’ assessment now integral to grant applications, and the state of science education and outreach. One couldn’t help but feel a little sorry for Drayson, who seemed rather to be the victim of an ambush.
The biggest and most interesting question up for debate was how we should go about allocating money to scientific research. However, though there was plenty of unnerved squirming over research grants drying up, no-one addressed the big question of how to decide how much funding science should get overall, nor how we should divide that between disciplines.
The biggest target of ire was (probably) the new-fangled necessity to justify the ‘impact’ of your research as part of the grant application process. Drayson justified this by saying that the statements provided helped fight the corner of researchers: ‘Impact assessment,’ he said, ‘is needed to help defend the science budget against those who would rather spend the money on something else.’ The question, of course, is how many of the hundreds of thousands of words of impact assessments written will actually make it into a given parliamentary debate—or, less cynically, how we can condense the reams of qualitative information provided into a useful measure of the benefits of our aggregate research strategy.
Many of the comments from scientists deploring the introduction of ‘impact’ assessment seemed to come from the perspective of the persecuted: the implication was that this new criterion would see their research being cut. Firstly, this confuses me: does every scientist think that they are doing research with abstract and unquantifiable benefits? Is there a crack army of buzzword-tastic, short-term-impactful, applied researchers waiting in the wings to snatch all the funding from beneath the highly theoretical old guard’s noses? Since there is no accompanying overall cut in research funding—other than, with unfortunate timing, those which were coming anyway—why is everyone expecting that it’s their research which will be dropped?
It seems to me that the most likely outcome won’t be a significant restructuring of the research landscape: surely, if you have the expertise to propose a research programme and the lab to back it up, writing a couple of pages about why your research may have ‘impact’ isn’t much of a challenge. And, given that these statements will be peer-reviewed by fellow scientists, explaining that your research is fundamental and hard to quantify will probably elicit a degree of sympathy: scientists understand that basic research is inherently unpredictable.
So, then, if this isn’t a big deal, the question is why we’re bothering at all. The vocal part of the science community, in this debate at least, wants evidence that this ‘impact’ thing will help science. Drayson hits back that he wants evidence it will harm it, and scientists hit back back: we can’t prove a negative.
What we need, if we’re to answer the big question of how to assess research money allocation methodologies, is some kind of metric. Against the view popular amongst scientists that some research outcomes are ‘priceless’, or at least totally unquantifiable, we must contrast the pragmatic need to assess how much funding science should receive overall versus defence, health, education and, ultimately, private expenditure as moderated through taxation; and then, how that pot should be split between physics, chemistry and biology, between the obviously-applied and the possibly-useless, and so on. We need a way to measure the benefits of research—with evidence-based, probably-enormous, non-Gaussian error bars. If such an exercise is totally futile, let us find out by the scientific method, and not simply make hysterical objections to a well-intentioned, if possibly ill-founded, government initiative. We need to be able to make an objective assessment of impact statements versus the current system versus putting all the grant applications in a big spreadsheet and throwing darts at it…and so on. Without some numerical evidence, the debate degenerates into status quo bias and soundbites.
If such an assessment does indeed turn out to be impossible—and it’s certainly not inconceivable that it would be—then we need to ask ourselves the complex ethical question of what society is morally obliged to do when we don’t know what to do.
On a less intellectually grand note, I was also a little confused by all the comments regarding outreach—no-one, it seems, can get funding for their ‘out-of-the-box’ youth inspiration schemes. Call me woefully inside-the-box, but I don’t think it’s practical to take every A-level student to CERN, and I can’t see many ways of engaging young people which aren’t basically talks, leaflets or posters. And, anecdotally, our talks-and-leaflets, explosions-and-beachballs science show Accelerate! got some dosh from none other than the squeezed STFC.
And finally, to finish with a dash of cynicism aimed at Lord Drayson: though I am falling foul of my own strict criterion of requiring evidence, might I suggest that, to be taken seriously by scientists, phrases like ‘a more flexible framework for assessing excellence’ should be purged from the lexicon at all costs!