
For some, the advent of the World Wide Web is still fresh in the memory. But technological leaps seem to happen with ever-increasing frequency, and we now all find ourselves blinking in the brilliant light at the dawn of the age of AI. At the Advertising Standards Authority (ASA), we’ve donned the sunglasses and rolled up our sleeves, and AI is already proving a game-changer in how we regulate.
The lightning speed with which AI has developed and woven itself into our everyday lives inevitably raises legitimate concerns. What does it mean for jobs, data protection, originality, creativity, copyright, plagiarism, truth, bias, mis- and disinformation, and our ability to tell what is real from what is fake?
These are undoubtedly important issues to grapple with. But the technology also brings multiple benefits. When search engines, web browsers and online shops launched in the mid-1990s, there were innovators, early adopters, cautious sceptics and technology resisters. AI is no different. The ASA is firmly in the “early adopter” category. Four years ago, we appointed a head of data science and began building our AI capability; AI is now central to our transformation into a preventative and proactive regulator. Around 94 per cent of the 33,903 ads that were amended or withdrawn last year came from our proactive work using our AI-based Active Ad Monitoring system. The ability to be on the front foot and take quick, effective action is crucial when regulating the vast online ecosystem. AI gives us much greater visibility of online ads.

Last year, our system scanned 28 million ads, with machine learning and, increasingly, large language models identifying the likely non-compliant ads we’re interested in. That was a tenfold increase on 2023. Our target is to scan 50 million ads this year. AI-based tools are embedded in our work to help us monitor and tackle ads in high-priority areas and are now used in most of our projects, including our work on climate change and the environment, influencer marketing, financial advertising, prescription-only medicines, gambling and e-cigarettes. AI is enabling us to carry out world-leading regulation – monitoring, identifying and tackling potential problem ads at pace and scale. Take one example: our ongoing climate change and environment project. Following high-profile and precedent-setting rulings against major players in various industries, we’re now seeing businesses adapting and evolving to make better-evidenced, more precise green claims.
Monthly sweeps using AI show high levels of compliance. Following our 2023 rulings against airlines over misleading “sustainable” and “eco-friendly” claims, we monitored around 140,000 ads and found just five that were clearly non-compliant.
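To make that kind of sweep concrete, here is a minimal, illustrative Python sketch of a two-stage screen: a cheap first-pass model narrows a large batch of ads down to candidates, and a language-model check then flags the few worth human attention. Every name in it (Ad, first_pass_score, llm_screen, the 0.3 threshold) is a hypothetical stand-in, not a description of the ASA’s Active Ad Monitoring system.

```python
# Illustrative two-stage screening sketch. All names and thresholds are
# hypothetical stand-ins, not the ASA's actual system.
from dataclasses import dataclass


@dataclass
class Ad:
    ad_id: str
    text: str


# Stage 1: a lightweight first-pass score. Here a toy keyword heuristic
# stands in for a trained machine-learning classifier.
GREEN_CLAIMS = ("sustainable", "eco-friendly", "carbon neutral")


def first_pass_score(ad: Ad) -> float:
    hits = sum(term in ad.text.lower() for term in GREEN_CLAIMS)
    return min(1.0, hits / len(GREEN_CLAIMS))


# Stage 2: in a real pipeline this would call a large language model;
# it is a stub here so the sketch runs end to end.
def llm_screen(ad: Ad) -> bool:
    return "guaranteed" in ad.text.lower()  # placeholder judgement


def screen(ads: list[Ad], threshold: float = 0.3) -> list[Ad]:
    # Cheap filter first, expensive check only on the survivors.
    candidates = [ad for ad in ads if first_pass_score(ad) >= threshold]
    return [ad for ad in candidates if llm_screen(ad)]


if __name__ == "__main__":
    ads = [
        Ad("a1", "Fly guaranteed carbon neutral to Spain!"),
        Ad("a2", "Half-price sofas this weekend only."),
    ]
    print([ad.ad_id for ad in screen(ads)])  # -> ['a1']
```

The design point the sketch makes is simply one of economics: the cheap stage touches millions of ads so that the expensive stage, and ultimately a person, only ever sees a handful.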
Importantly, we’re not removing humans from the equation. Our experts are, and will remain, central to our regulation. While our AI capability has dramatically improved the efficiency of our monitoring (weeding out the millions of ads that stick to the rules and aren’t a problem), it filters and flags potential problem ads to our human specialists for their expert assessment – a routing sketched below. AI is assisting rather than replacing our people. There are a lot of open questions about how AI will affect industries, positively and negatively. And that’s certainly true of advertising, as ever at the forefront of technological change.
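In hypothetical outline, that filter-and-flag routing might look like the sketch below – again an illustration under assumed names (route, ReviewItem, the 0.8 cutoff), not our actual tooling. The point it encodes is that the model never removes an ad itself: it only queues work, and the final call always comes from a person.

```python
# Illustrative human-in-the-loop routing sketch. Names, cutoff and
# decision labels are hypothetical, not the ASA's actual tooling.
from collections import deque
from typing import NamedTuple


class ReviewItem(NamedTuple):
    ad_id: str
    flag_probability: float  # model's estimate that the ad breaks the rules


# Ads the model is confident comply are weeded out automatically;
# everything else waits here for a human specialist.
review_queue: deque[ReviewItem] = deque()


def route(ad_id: str, flag_probability: float, cutoff: float = 0.8) -> None:
    if flag_probability >= cutoff:
        review_queue.append(ReviewItem(ad_id, flag_probability))


def human_decision(item: ReviewItem, specialist_says_breach: bool) -> str:
    # The final ruling is the specialist's, never the model's.
    return "refer for action" if specialist_says_breach else "no action"


if __name__ == "__main__":
    route("a1", 0.95)  # flagged: queued for expert assessment
    route("a2", 0.10)  # confidently compliant: no human time spent
    print(len(review_queue))  # -> 1
```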
We know that the use of AI is already changing advertising. There are big efficiency and effectiveness gains in play: lower-cost ad ideation and creation, hyper-personalisation and improved customer experience, and quicker and better media planning and buying. Get this right and ads will be cheaper to make and send, and more engaging and relevant to receive. UK businesses and the British economy will be boosted. But in all of this, responsible advertising must not be sacrificed at the altar of technological advance.
We’re well aware of the many potential benefits and problems AI poses for advertising. Think back to the story from Glasgow, where AI-generated ads promised a Willy Wonka-themed event that wasn’t quite as advertised. The advertising of certain AI products and services certainly throws up broader ethical considerations. On our radar are ads for AI tech offering mental health support (substituting for human therapists), essay-writing tools that pass work off as original, and chatbots that act as a partner or friend. We don’t regulate the products themselves, but in all these examples there is potential for ads to be misleading, irresponsible or harmful. So how can businesses use AI safely and responsibly, and what does that mean for advertisers?
Our media- and technology-neutral rules already cover most of the risks. Ads can’t mislead – a principle as old as the hills. In the past, an ad might have misled through photo-editing software; today, it might do so through generative AI. Nor must ads be likely to cause harm or serious or widespread offence. Generative AI might be an unsurpassed pattern-recogniser, but it’s not human and may well miss the nuance of judging prevailing standards in society when producing ad content. Advertisers who harness AI can’t abdicate responsibility for the creative content it produces. That’s why we urge businesses to be careful: use the good of AI, but avoid the bad. Put in place human checks and balances.
At the ASA, we’re determined to take full advantage of technological advances: developing our Active Ad Monitoring system further, making even more use of large language models to speed up the review of ads, actively experimenting with how these tools can make our internal processes more efficient, and continuing to keep a close eye on how AI is used in advertising.
We are witnessing the next technological revolution – one that will change society as the internet did, perhaps even more profoundly. We can say with confidence that our use of AI is already delivering world-leading advertising regulation.