Making government empirical

2 November 2020

By David Halpern FAcSS (Chief Executive of the Behavioural Insights Team and What Works National Advisor)

Something important is happening in government. Beneath the headlines on coronavirus, leaving the EU, and the latest on the US election, governments and public services are becoming more empirical and, more precisely, more ‘experimental’. The outward sign is the growing number of What Works Centres and the spreading use of robust trial methods outside medicine. Important changes are also occurring inside government, and in how policy is made.

The uncomfortable truth is that for decades, most public policy and practice has been based on a remarkably thin evidence base. To borrow from Prime Minister Boris Johnson’s recent analogy, we may have thought that we personally, and our country, were in good shape. But if we dig a little deeper, we see that we are nowhere near as fit as we thought. Similarly, just because we are spending hundreds of billions of pounds on schools, policing, and healthcare doesn’t mean that the way we are spending it is particularly effective. There is always room to find ‘marginal gains’, and often radical improvements too.

To find those improvements we need to do two seemingly simple things. First, we have to try out alternatives – to innovate. Second, we have to robustly test whether those alternatives work any better, and for whom – to evaluate.

These may not seem like such difficult steps to take, but there are powerful institutional and human factors that push against them. In cash-strapped public services, finding the money and resources to try something new or different can be a huge barrier. Taking risks will often be frowned upon, or even seen as unethical. Curiously, it’s rarely seen as wrong to carry on with current practice – from ways of looking after kids in care to the deployment of police budgets – but actively testing something new to see if it would work better often raises concerns. Yet often the biggest barriers of all are psychological. We’re brilliant at telling ourselves stories about why the thing we already do is helping. Indeed, our practices are often so habitual and familiar that it is hard even to see that there might be other ways of doing things.

This combination of trying new things, and robustly evaluating them, is at the heart of the ‘What Works’ movement. There are now more than a dozen independent What Works Centres in the UK, together covering areas where the state spends more than £250bn each year. They are still young institutions. Recent additions include the Youth Endowment Fund, with £200m to tackle youth crime and violence, and its sister institute, the Youth Futures Foundation, which addresses the root causes of youth unemployment.

During COVID-19, a striking example of the growing impact of What Works Centres was the role played by the Education Endowment Foundation (EEF) in shaping policy, and particularly the decision to fund the £350m National Tutoring Programme. As soon as the crisis hit, the EEF rapidly assembled the evidence from previous educational shocks and ‘natural experiments’ (such as long school holidays!) to estimate the likely impact of lost schooling on educational attainment. It showed a likely widening of the attainment gap by social class that could wipe out more than a decade of progress. But it also assembled the evidence on the most cost-effective interventions that could be deployed to offset the effect, powerfully making the case for individual and small-group tuition. To cap it all, it then put in place arrangements to test variations on these approaches so that schools could further refine and improve their impact. In the words of one expert commentator, it was probably ‘the largest pragmatic education investment ever made anywhere designed to put proven programs into widespread use’.

Yet What Works Centres have also laid bare a problem. In most policy areas, there’s simply not much evidence to pull together. This is because the way we do policy and practice is to take a punt on an idea – whether new or existing – and run with it without testing whether it works. That’s why the deeper changes currently occurring from Whitehall to town hall are so important. They can be seen on three fronts: politics, practice, and people skills.

Politicians have a key role to play in creating an enabling environment. It is hard for a civil or public servant to make the case for innovating or evaluating if their elected masters don’t support it, or are even actively hostile. Does a Minister or Mayor really want to take a risk on something new? And if they do, do they really want a robust evaluation showing that it didn’t work, which their opponents can then use against them? In this sense, Michael Gove’s speech at Ditchley this summer could hardly have been clearer:

Favourable media commentary, pressure group plaudits, peer group approval, they all drive activity. But what is less often felt is the pressure to show, over time, that programmes have been effective and enduring. Of the 108 major programmes for which Government is responsible, only 8% are actually assessed to judge if they have been delivered effectively and have brought about the desired effects… At the heart of our programme must be a focus on what works – what actually helps our fellow citizens to flourish… Government needs to be rigorous and fearless in its evaluation of policy and projects… What are the metrics against which improvement will be judged? How are appropriate tools such as randomised controlled trials being deployed to assess the difference being made? How do we guard against gaming and confirmation bias? All across Government at the moment that widespread rigour is missing.

To have an impact, words have to be translated into practice. In the world of Whitehall, top of the pile is the Spending Review. For the first time, the current Spending Review guidance has a stated objective to ‘ensure all programmes are supported by robust implementation and evaluation plans’. This is spelled out in more detail:

In order to ensure government programmes deliver for the public, it is crucial that spending decisions are based on robust evidence and evaluation of their impact. At the CSR, the government will assess the state of evaluation across all departmental spending programmes and require every department to produce plans to improve evaluation of its work. This will lead to more evidence-based allocation of public funding and better outcomes in the long term. [March 2020 Budget, paragraph 1.68]

The third element that needs to be in place is ensuring that policymakers, and civil servants in particular, have the skills and knowledge to take judicious risks and to design policy that can be evaluated. For a sense of this shift, look no further than the recent consultation on Civil Service reform, led by the new Civil Service Chief Operating Officer, Alex Chisholm – himself a great champion of more experimental approaches in his former role as Permanent Secretary at BEIS:

The Civil Service must create a culture in which there is more room for experimentation. This will increase the pace of learning and improvement. To find the solutions that work best, we should experiment, so that if things fail, they fail fast on a small scale. This will then allow us to iterate at pace…

There should be incentives and rewards for innovation. Testing ideas – trying and failing – are essential components of learning. This must be recognised so that the ideas of today can become the solutions of tomorrow. That innovation must be matched with rigorous evaluation. Policy decisions can be improved by ministers and civil servants in a number of ways. For example by putting greater focus on evidence and data; involving outside experts more routinely; by opening ourselves up to scrutiny and challenge. [Shaping Our Future]

Even in the midst of the COVID crisis, room was made for experimentation. For example, the Behavioural Insights Team worked closely with the Cabinet Office and Department of Health and Social Care to test alternative messaging, ranging from handwashing posters to the design of the NHS app. Alternatives were tested for public comprehension, reactions, and intent. Weaker performing alternatives were filtered out, and better alternatives taken forward into public campaigns.

Some other countries went even further. Faced with the dilemma of whether it was safe to keep gyms open, for example, Norway ran a well-powered trial comparing infection rates among people randomised either to exercise in gyms or at home. Infection levels were no higher among those using the gyms, and the gyms remained open.

In sum, though this is clearly a challenging time for the world, and for the UK, the push for more innovation, evaluation and experimentation continues to gain ground. The signs are all around. The fantastic What Works Trial Advice Panel – a select group of more than 50 academic and government trial-design specialists – has been expanded and renewed. A group of What Works Centres has come together to set up the ‘Evidence Quarter’, co-locating just off Whitehall to collaborate and to ensure the ‘voice’ of evidence is heard. And, as I hope to have shown, these outward signs are matched by a series of deep and important changes occurring within government itself.

In Gove’s Ditchley speech in July, he quoted Franklin Roosevelt: ‘The country needs, and unless I mistake its temper, the country demands bold, persistent experimentation. It is common sense to take a method and try it. If it fails, admit it frankly and try another’. Roosevelt made these remarks in 1932, in the midst of the Great Depression. Arguably, it was a lesson hard learnt, but then lost for a generation. With the shocks, disruptions and opportunities of the current age, it is a ‘common sense’ that we must urgently embrace.

 


David Halpern is the Chief Executive of the Behavioural Insights Team. David has led the team since its inception in 2010. Prior to that, David was the first Research Director of the Institute for Government and between 2001 and 2007 was the Chief Analyst at the Prime Minister’s Strategy Unit. David was also appointed as the What Works National Advisor in July 2013. He supports the What Works Network and leads efforts to improve the use of evidence across government. Before entering government, David held tenure at Cambridge and posts at Oxford and Harvard.

The perspectives expressed in these commentary pieces represent the independent views of the authors, and as such they do not represent the views of the Academy or its Campaign for Social Science.

This article may be republished provided you place the following statement and link at the top of the article: This article was originally commissioned and published by the Campaign for Social Science as part of its COVID-19 programme.