Contact tracing apps highlight bigger challenges with tech, trust and power
3 July 2020
By Chris Yiu (Executive Director for Technology and Public Policy, Tony Blair Institute)
The pandemic has brought more technology into all of our lives, from family gatherings on Zoom through to doing the weekly shop online. It has also played an important role in the policy response, by arming policymakers and public health authorities with insights about how the virus is spreading and tools to help stop it in its tracks.
Early in the lockdown one technology generated a lot of excitement: digital contact tracing apps. Many people hoped that these would transform countries’ efforts to break the chain of transmission and get infections under control. In the UK, the Secretary of State for Health and Social Care promised a world-class app would be rolled out by mid-May.
It’s now July, and it’s fair to say things have not gone according to plan. After pressing ahead with a design that experts warned would be difficult to implement, the government has abandoned the first version of its app. A second iteration, based on a framework provided by Apple and Google, is expected by the winter.
The benefits of a contact tracing app remain to be seen – it is a new technology and so by definition hasn’t been tested at scale. Nevertheless, given the enormous economic and social costs of lockdown, having an app as part of the public health response is a sensible option to pursue.
Setting the actual development of the app to one side, the UK’s experience highlights three important lessons about public sector technology and the challenges of getting an idea from the drawing board to widespread adoption. First, policymakers need to consider the reality of how tech fits into people’s lives. Second, public trust in tech is hard to build but easy to break. And third, for better or worse, some critical policy decisions are now beyond the reach of any government.
One of the pioneers of digital contact tracing was Singapore, where the government built and open-sourced one of the first Bluetooth-based proximity apps. But it was clear from the outset that there were practical problems: to work properly the user’s phone needed to be unlocked and the app kept on screen at all times.
By the time the UK got round to building its app, much of the policy debate revolved around whether proximity data should be centralised by the NHS to gain a comprehensive national picture, or decentralised to provide maximum privacy for individuals.
But whatever the data model, a contact tracing app remains a theoretical exercise unless a significant number of people use it. One of the key insights from the last decade of consumer tech is that you only get a lot of people signed up to something new if the benefits are clear and onboarding is easy.
Unfortunately, neither of these conditions came close to being met.
In the absence of a mass testing programme, people using the app faced being sent into isolation on the basis of someone else self-reporting symptoms. Worse still, the prospects of getting the all-clear and being released early were slim. Under these conditions, you can see why some people might have concluded that the app was more trouble than it was worth.
And because the initial approach was to reject the framework provided by the tech companies, the app was on track to be harder to obtain (users would have to search for it manually in the app stores rather than being prompted to install it), and massively inconvenient for people who cared about using other apps or their phone’s battery life. It also turned out to perform far worse than expected when it came to actually detecting other phones.
Trust and literacy
The handling of the data debate exposed deeper questions about public trust in technology. The government maintained that centralising proximity data was essential for properly optimising contact tracing alerts and modelling the spread of the virus. Privacy campaigners countered that the fundamental policy objective could be achieved without pooling so much personal data.
What is clear is that the reasons for choosing one approach or the other, and the rules of engagement around people’s personal data, were never set out in simple terms for the public to consider. Instead policymakers asked people to deal with a lot of cognitive dissonance: assuring them that their data was anonymous whilst also asking for location data and other information; saying the app was about fighting COVID-19 but also keeping the data for 20 years.
One side effect was a proxy debate about who was more trustworthy: the NHS or the tech companies? This is exactly the sort of thing that helps no one – it seems pretty reasonable to believe medical professionals have your best interests at heart, and at the same time to believe that Apple and Google know what they are doing when it comes to building apps that work.
The counterproductive consequences of this debate are even clearer now that the UK is working on a new version of the app. This will rely on software updates that Apple and Google have recently pushed to end users, which provide specific operating system functionality for contact tracing apps to work properly.
These updates do nothing unless an official health authority app is installed, and require users to opt in. But social media is awash with people sharing conspiracy theories that the tech companies are secretly tracking people’s COVID-19 status without their permission. Posts and tweets sensationalising these claims have been shared hundreds of thousands of times; others explaining there’s nothing untoward have reached only a fraction of the same audience.
The final lesson from the contact tracing app debate is also the most profound.
The problems encountered by early attempts at digital contact tracing arose because iOS and Android impose some (differing) restrictions on what apps can do, particularly in relation to Bluetooth, for reasons of both security and power consumption.
Lifting these restrictions makes contact tracing apps easier to implement, but also opens the door to more problematic uses of the technology for population tracking and surveillance.
The solution arrived at by Apple and Google was to lift the restrictions, but only under a tightly defined set of conditions: contact tracing apps must protect privacy by ensuring that proximity matches are never centrally collated, there can be only one official app per country, and the entire operating system infrastructure will be dismantled when the pandemic is over.
It’s not surprising that the tech companies chose to impose these rules without exception: they have no desire or incentive to start refereeing which countries should be trusted with greater latitude. In this specific case, you might even agree that the approach they require is the best way to balance the benefits of digital contact tracing with the risks to personal privacy.
But from a geopolitical perspective we are in new territory. A significant policy decision about how governments respond to an unprecedented public health emergency has been made unilaterally by two of the world’s biggest tech companies. And the countries that chose to go their own way – presumably hoping they could ultimately pressure a change of direction – have discovered the hard way where power in today’s global tech arena really lies.
Chris Yiu is an Executive Director at the Tony Blair Institute, where he leads the Technology & Public Policy team. He is also a member of the advisory board at Digital Leaders, curator of the AI website DeepIndex, and a trustee of ENABLE Scotland. You can follow him on Twitter @clry2.
The perspectives expressed in these commentary pieces represent the independent views of the authors, and as such they do not represent the views of the Academy or its Campaign for Social Science.
This article may be republished provided you place the following statement and link at the top of the article: This article was originally commissioned and published by the Campaign for Social Science as part of its COVID-19 programme.