
Gaia Marcus is the director of the Ada Lovelace Institute. Prior to joining, she was the deputy director (advanced analytics and local capabilities) at the Department for Levelling Up, Housing and Communities. The Ada Lovelace Institute is an independent research institute funded by the Nuffield Foundation.
What first attracted you to AI as a policy area?
I most enjoy roles that sit between technology and people, motivated by both an excitement for the new and a desire for fairness and opportunity for all. My first ‘real job’ was as a social network analyst at the RSA, where I worked with local communities to translate quantitative and network research into projects boosting wellbeing and social connections. Having gone deep into data policy as head of the UK’s National Data Strategy, I strongly feel that AI policy is one of the biggest challenges of our time. How do we grasp the potential of these new technologies to build futures we all want to be part of?
When you look at developments in AI in the public sector, what most excites you?
I’m always excited by efforts to solve real societal problems using all the possible tools to hand. But tools alone aren’t solutions. Deploying AI in the public sector needs trust and legitimacy – and listening to the public is the only way to ensure these technologies work well, work for everyone and work in context. To me, the most exciting conversations are those involving the public’s hopes, fears and aspirations about AI and data-driven technologies.
What frightens you?
‘Frighten’ is a big word, but there are things I worry about. First, AI is a global value and supply chain and needs to be governed accordingly. Currently we are over-reliant on a few tech companies at most steps of this supply chain, and they are largely marking their own homework and setting the narrative ‘weather’ on policy.
Second, the deployment of AI often outpaces our ability to govern it effectively – from deepfakes to AI assistants and facial recognition. We need our safeguards to catch up, especially when it comes to use in the public sector. The public can’t easily opt out of using public services and tends to hold them to a higher standard if things go wrong. And finally, we’re likely to see uneven advances in AI capabilities, with some progress in areas like quantitative reasoning but less in realising bold claims about productivity gains. We could see real disruption to our livelihoods, relationships and how we trust information.
What’s the biggest misconception about plans to use AI in the public sector?
That it will magically solve entrenched, complex problems. AI influences and is influenced by the context it is used in, often with unintended consequences. The effectiveness of an AI tool in the public sector depends on its interaction with existing social systems, values and trust. And often the underlying data just isn’t good enough.
Can you give an example of where AI has been employed for public good?
It depends on the type of AI! Advances in machine learning have improved weather forecasting, as well as speech-to-text and translation tools. In 2024, AI also contributed to two Nobel Prize-winning achievements, in physics and chemistry. The question is how we ensure the benefits are evenly distributed and contribute to the betterment of society.
What would you most like to see from government on the AI agenda?
To ‘be in the driving seat of AI’, we need to know the brakes work. Our research shows the UK public hold nuanced views on AI but expect action from government on policy – 72 per cent say that laws and regulations would increase their comfort with AI.