Targeting ads using Real Time Bidding is now illegal. And how will we know which AI is which?
Plus: What if your bank was built for AI Agents, not people, and the hidden privacy risk no one’s talking about
Hi everyone, thanks for coming back to Customer Futures.
Each week I unpack the disruptive shifts around digital wallets, Personal AI and digital customer relationships.
If you haven’t yet signed up, why not subscribe:
Next Meetup: this Wednesday, 18th June in London
Why not join us in person to talk about all things AI Agents, digital wallets and the future of the digital customer relationship:
6.30pm, Wednesday 18th June 2025
Brewdog Waterloo, (underneath) Waterloo Station, 01 The Sidings, London SE1 7BH (here)
NOTE: This is the same day as the free Empowerment Tech Workshop in London on FINTECH - find out more and sign up here.
Hi folks,
Three really important things happened this week.
First, the UK’s Data (Use and Access) Bill just cleared the House of Lords. It’s now on its way to becoming law. And it’s a huge moment for the UK Digital Identity community and for anyone working on data portability.
Second, Apple quietly added support for the Digital Credentials API. It’s a big deal because now fragmented apps can (in theory) pull data from each other, with user consent. Plus the Apple Wallet becomes a new customer channel. Lots to dig into, and more coming in a future post.
Third, I attended the PhocusWright Travel Conference in Barcelona. Big digital names everywhere, like Booking.com, Google and Tripadvisor.
AI was on every stage. With excited predictions about new discovery channels and search. But ‘identity’ and ‘wallets’? Not mentioned once.
And it’s been bugging me. Because I think two opposite things might both be true:
It’s still very early for portable, reusable digital ID and customer-side tech. Standards are still evolving, commercial models are unclear, and most businesses are scrambling to keep up with AI, let alone a whole new way of interacting with customer data… but also:
Portable digital ID, verifiable credentials, and digital wallets won’t just be a parallel development alongside AI. They’ll soon become critical to rolling it out. Because without Empowerment Tech, without verifiable customer channels and data, we won’t be able to trust those AI-powered customer interactions at scale.
So what happens next?
It’s a frothy mix when you get powerful new data laws and Big Tech adding credential APIs, all while most businesses have no idea what’s coming next. And something remarkable starts to emerge:
Empowerment Tech becomes as important - and as disruptive - as AI itself.
So what does all this mean for the future of being a digital customer? Welcome back to the Customer Futures newsletter.
In this week’s edition:
Real time bidding on digital advertising is now illegal - and the world’s largest data breach
Small Language Models (SLMs) are the future of Agentic AI
What if your bank was built for AI Agents, not people?
The Hidden Privacy Risk No One’s Talking About
How will we know which AI is which?
… and much more
Let’s Go.
Real time bidding on digital advertising is now illegal - and the world’s largest data breach
I can’t overstate how big this is.
A court in Belgium just ruled that the ‘Transparency & Consent Framework’ (TCF) - used by Google, Microsoft, Amazon, X, and the entire tracking-based advertising industry to obtain ‘consent’ for data processing - is illegal.
Why does that matter? Because the TCF is live on 80% of the Internet via the ‘real time bidding’ (RTB) platforms.
Put simply, four fifths of all digital ads placed in the EU via RTB are now against the law.
And real time bidding just became the largest data breach in history.
We all know that the ad tech stack is a mess. And most insiders know it’s a sham. The digital advertising house of cards has to fall.
Data protection pioneers like Johnny Ryan hope it will collapse under regulation. That’s coming. But digital advertising is about to get an even harder knock when digital customer engagement flips to AI agents.
New, AI-powered customer channels will be more effective, more valuable - and if designed properly with digital wallets - far more trusted than today’s RTB models. Especially for building relationships beyond a single transaction.
Your regular reminder: my AI agent won’t be watching ads.
To see what’s coming, it’s worth looking back.
In the mid-1800s, whaling was one of the world’s biggest industries. Whale oil lit our homes. Then we discovered petroleum and petrochemicals, and the whaling industry all but vanished.
It was the same with horses.
When they were our only transport option, we lived nose-to-tail with giant, smelly animals just to get around. Then came the car, and the entire urban horse-and-cart industry disappeared in a flash.
And the same thing is about to happen to digital advertising, where the switch to digital agents will be just as sudden.
Yes, like oil and gas and the automobile, we’ll have new problems to solve. Like AI agent fraud, recommendation bias, network security issues, and privacy and surveillance. But the RTB ruling is another signal that it’s coming.
Another brick in the wall for Empowerment Tech.
This Belgian court decision has been seven years in the making. So bravo to Johnny and the ICCL team behind it.
Now, all we need to do is build new, trustworthy digital advertising platforms on the customer side. Based on the empowered customer, and using private, permissioned personal data. Where the customer can express her preferences, her wants and needs, and her ‘fiduciary AI Agents’ can act on her behalf in the market.
Should only take a couple of months, right?
Small Language Models (SLMs) are the future of Agentic AI
Mr W-T at it again. A banger of a post on why AI Agents are going to get smaller, more specialised, and more local.
And it’s not just wishful thinking from the privacy nerds, or those worrying about LLM energy consumption. It’s coming from NVIDIA itself, which says that Large Language Models:
“…are being force-fit into tasks that don’t need their full weight - wasting compute and adding failure risk”. And they are “expensive, centralised and tied to cloud monopolies.”
Ouch.
As Stuart reminds us, Small Language Models (SLMs) are a better fit. Trained for specific workflows, domain-specialism, formats and tasks. They’re more predictable and more efficient.
But some of you are already ahead of me. Because SLMs can be on-device and private. And they can be mine.
You see, the future of Agentic AI can - and must - live on the side of the customer, not just be operated for us by enterprises. Optimised for the individual’s own needs, not just the AI platform’s shareholder value.
What if your bank was built for AI Agents, not people?
“Maya walks out of a grocery store. She doesn’t realize her agent just took a five-day microloan and arbitraged a small currency gap to cover the cost.
“It knew her cash flow. It found an opportunity. It acted in her best interest.
“That’s not science fiction. That’s technically possible today.”
It’s from a new white paper on the future of banking by Gam Dias, Sergio Maldonado and Zahidul Islam.
Today, businesses are designed to keep bots out. What if we designed companies to not only let the Good Bots in, but built the business for them?
It’s why ‘Know Your Customer’ (KYC) soon also becomes ‘Know Your Agent’ (KYA). And why ‘delegated authority’ is going to matter a great deal.
The Hidden Privacy Risk No One’s Talking About
I don’t know who needs to hear this, but correlation risk is one of the biggest data protection issues of our time.
And yet so few people talk about it. We get important discussions about data protection and consented data sharing, but much less about correlation privacy.
Debra Farber nails it in this post about Sam Altman and World ID:
“Even though World ID aims to keep users anonymous, associated metadata (e.g., when and where a scan was conducted, frequency of use, etc.) could allow pattern analysis & re-identification.
“If World ID is linked to other accounts (e.g., social media, banking), it might be possible to re-identify individuals despite its privacy-preserving design.”
This is the heart of the issue.
Because you can design a system that avoids explicit identifiers. You can obscure the data. You can even try to use zero-knowledge proofs. But if you share enough metadata, things like timestamps, locations and behaviours, then it becomes trivial to reassemble the picture.
Especially when that data flows into broader systems like banks, messaging apps and ad networks.
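To make the mechanism concrete, here’s a toy sketch of correlation in action. All the data, names and pseudonyms below are invented for illustration; the point is only that a handful of shared (time, place) pairs is enough to link an “anonymous” dataset to an identified one.

```python
from collections import defaultdict

# Dataset A: "anonymous" usage events, keyed only by a pseudonym (hypothetical data).
anonymous_events = [
    {"pseudonym": "u-7f3a", "when": "2025-06-10T09:02", "where": "Waterloo"},
    {"pseudonym": "u-7f3a", "when": "2025-06-11T09:05", "where": "Waterloo"},
    {"pseudonym": "u-9c11", "when": "2025-06-10T14:30", "where": "Camden"},
]

# Dataset B: an identified log (say, a bank or messaging app) with the same metadata.
identified_events = [
    {"name": "Maya", "when": "2025-06-10T09:02", "where": "Waterloo"},
    {"name": "Maya", "when": "2025-06-11T09:05", "where": "Waterloo"},
    {"name": "Sam",  "when": "2025-06-10T14:30", "where": "Camden"},
]

def correlate(anon, known):
    """Link pseudonyms to names purely via shared (time, place) metadata."""
    index = {(e["when"], e["where"]): e["name"] for e in known}
    matches = defaultdict(set)
    for e in anon:
        name = index.get((e["when"], e["where"]))
        if name:
            matches[e["pseudonym"]].add(name)
    # Keep only unambiguous matches: pseudonyms linked to exactly one name.
    return {p: names.pop() for p, names in matches.items() if len(names) == 1}

print(correlate(anonymous_events, identified_events))
# → {'u-7f3a': 'Maya', 'u-9c11': 'Sam'}
```

No identifiers were shared between the two systems. The timestamps and locations did all the work.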
And it’s exactly why the #NoPhoneHome principle matters so much (see last week’s newsletter). Privacy-respecting systems should never silently phone home. No pings. No passive tracking. No metadata trails that enable correlation.
Correlation risk has to be a first-class concern in any digital identity or AI agent system. Especially the ones claiming to be “privacy-preserving” or “decentralised.”
You see, anonymity isn’t binary. It’s much more fragile than that. And metadata breaks it faster than you think.
How will we know which AI is which?
A new research paper just came out, asking a critical question. How will we know which AI is which?
The answer, of course, is that agents will need their own digital ID too.
From everything I’m seeing in the market, and in the growing tangle of data and AI regulations, those agent IDs will have to be built on verifiable credentials. Issued and stored in each agent’s digital wallet.
More specifically, the agent’s wallet will hold:
Operator credentials - who legally operates the AI Agent System?
Delegated authority credentials - which human or organisation does the agent act on behalf of?
Can’t we just use existing identity platforms? Nope, because those systems were built for:
Single organisations with clear technical and governance boundaries (hello firewalls)
‘Static data attributes’ like job roles and system access permissions
We must remember that AI agents are dynamic. They act on your behalf. They delegate and get delegated to. They move between systems, contexts, and actors. So we’re going to need a new way to identify, verify, and manage our AI agents.
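What might those two wallet credentials look like in practice? Here’s a minimal sketch, loosely modelled on the W3C Verifiable Credentials data model. Every identifier, field name and scope below is an illustrative assumption, not a standard:

```python
import json

# Hypothetical operator credential: who legally runs the AI Agent System.
# (All DIDs and field names are made up for illustration.)
operator_credential = {
    "type": ["VerifiableCredential", "AgentOperatorCredential"],
    "issuer": "did:example:regulator",           # who attests to the operator
    "credentialSubject": {
        "id": "did:example:agent-42",            # the AI agent itself
        "operator": "did:example:acme-bank",     # who legally operates it
    },
}

# Hypothetical delegated authority credential: who the agent acts on behalf of.
delegation_credential = {
    "type": ["VerifiableCredential", "DelegatedAuthorityCredential"],
    "issuer": "did:example:maya",                # the human delegating authority
    "credentialSubject": {
        "id": "did:example:agent-42",
        "actsOnBehalfOf": "did:example:maya",
        "scope": ["payments:under-50-gbp"],      # what the agent is allowed to do
        "expires": "2025-12-31",
    },
}

def wallet_summary(wallet):
    """Answer the two KYA questions: who operates this agent, and for whom does it act?"""
    ops = [c["credentialSubject"]["operator"]
           for c in wallet if "AgentOperatorCredential" in c["type"]]
    principals = [c["credentialSubject"]["actsOnBehalfOf"]
                  for c in wallet if "DelegatedAuthorityCredential" in c["type"]]
    return {"operators": ops, "acts_on_behalf_of": principals}

print(json.dumps(wallet_summary([operator_credential, delegation_credential]), indent=2))
```

The point of the sketch: a relying party doesn’t need to trust the agent itself, only the issuers of its credentials. That’s exactly the shift from static, single-organisation identity to portable, verifiable agent identity.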
This paper on “Zero-Trust Identity Framework for Agentic AI” is an important step in that direction. Worth your time.
OTHER THINGS
There are far too many interesting and important Customer Futures things to include this week.
So here are some more links to chew on:
Post: Acting on behalf of others: delegation, consent and the messy reality READ
Article: Apple’s new Digital Credentials API could fundamentally change digital identity READ
Report: Deloitte UK digital trends and consumer insights READ
Post: How many AI Agents will we each have? READ
Idea: Stop Anthropomorphising Everything about AI - It’s just Clever Statistics READ
Report: Lifting the lid on the UK digital identity ecosystem: Digital Identity Sectoral Analysis 2025 READ
And that’s a wrap. Stay tuned for more Customer Futures soon, both here and over at LinkedIn.
And if you’re not yet signed up, why not subscribe: