Why AI agents can only be half-trusted... until now
There are two sides to digital trust, and today we're missing half the picture - but that's about to change
Hi everyone, thanks for coming back to Customer Futures.
Each week I unpack the disruptive shifts around digital wallets, Personal AI and digital customer relationships.
If you haven’t yet signed up, why not subscribe:
NEXT MEETUP: AMSTERDAM, 21 JANUARY
When: 6-8pm, Tuesday 21 January 2025
Where: W Lounge (rooftop), W Hotel, Spuistraat 175, Amsterdam, Netherlands, 1012 VN
Who: Regular readers of Customer Futures, friends and colleagues... anyone interested in the future of being a digital customer
I’m delighted to say that this event will be sponsored by Digidentity.
Happy New Year folks, and welcome back to the first of this year’s Customer Futures newsletters.
This week we take a deeper dive into:
The arrival of consumer-side AI agents
Asking the right questions about our data
Trusting agents
Is your AI really personal?
When our agents really work for us
So grab a nutritious January fruit smoothie and a comfy chair.
And let’s go.
Agents, agents, everywhere
Following excellent words like ‘chav’, ‘selfie’, and ‘rizz’, I wonder if ‘agent’ might soon become the Oxford Dictionary Word Of The Year.
AI Agents are showing up everywhere. At least if you are paying attention.
Only last September, Marc Benioff, CEO of Salesforce, said “We want to get a billion agents with our customers in the next 12 months”.
It’s a bold claim. How many CEOs have promised that kind of scale for a brand-new product within a year? Well, he has eight months to go and at this point, I’m betting he might be right.
But whether they arrive that fast or not doesn’t really matter. What matters is the impact they have. And for employees, it’s going to be huge.
Because any business process that is repetitive, or frankly learnable, may soon be automated with a human in the loop. It turns out that AI won’t take 50% of all jobs, as some have predicted.
Rather, AI agents will take 50% of each job.
But that’s not what I want to write about this week. Instead, we’ll look at what happens when we flip those AI agents over to the customer side. And critically, what that means for digital trust.
Away from the enterprise, it’s not been hard for people to see that complex, repetitive and predictable tasks happen on the customer side too.
It’s why OpenAI, Google and Anthropic have already released the first versions of their consumer agents (specifically worth checking out ‘Claude 3.5’ and ‘Mariner’).
And much of the customer-side value is obvious from a mile away.
Managing your diary. Planning a trip. Personal productivity, like handling tasks and notifications across all your services (just think about your morning routine of checking 142 messages across 18 apps and services).
Then there’s booking things. Ordering things. Updating and managing information. Opening, switching and handling accounts. All given the right permissions and your login details.
And that’s before we get to personal recommendations.
Today’s digital recommendations are pretty flimsy (“People who bought this bought that” and “Given you live in district X, earn Y, and know people who recently bought Z, you might want ABC” and so on).
All nonsense of course. But it’s the best brands can do right now.
But AI Agents on the customer side are going to be smarter than that. More personal. Better at understanding who you are, where you are, what you like, what you need, and your preferences. They’ll not only make recommendations, but make things happen. Doing the doing, the booking, the organising.
Regular readers might be wincing right now.
Because here’s the snag. AI Agents are going to need continuous access to your personal data. They’ll need to continuously learn from you and your contexts. And they’ll need entirely new levels of digital trust.
Which, to put it mildly, we don’t have.
But there’s a deeper issue. Because today AI agents can only be half-trusted.
Why?
Most people think about digital trust in terms of data security and data protection. Where is the data, who has access, what has been retained? Can the individual get a copy? It’s all GDPR- and CCPA-type stuff.
But that’s only half the story. Everyone misses the other 50%.
Trust-as-motivation.
In whose interests is the AI acting? What’s the commercial incentive for recommendations? How does liability work, and who is carrying the can when things go wrong?
This matters. Because A LOT of people are about to share A LOT of personal data with A LOT of Personal Agents.
Let’s dig deeper.
Asking the right questions
Would you say that today we have a good enough grasp of the value and risks of AI Agents? Nope.
Forget having good answers, do we even have good questions?
The data protection and security side of things is easier:
WHAT data are we feeding AI agents? WHAT data models are underneath?
WHERE is the data collected? WHERE is it processed? WHERE do the results go?
WHO has access to it? WHO runs the analysis, and WHO else gets to see the results?
HOW are the actions and insights revealed? (This becomes particularly important if the AI agent is working with health data, money data and relationship data.)
So far so good.
But the other half - the questions around motivations and incentives - are much harder:
WHY are we using these AI agents? To create value? Help us avoid risk? Reduce suffering? Save us money?
WHO decides which data to feed the models? And WHO provides the data? What’s in it for them?
HOW do AI Agents cover their costs?
WHAT happens when a recommendation is acted on? Who benefits?
This is just a short list I came up with over a coffee. So who’s asking these questions of the big AI platforms? Who’s paying attention to this powerful branch of AI that’s going to be in our homes and cars before you can say ‘assistant’?
Or are the AI agent product teams just going through a standard set of checks with their legal and compliance departments?
Trusting agents
AI agents are of course just one part of the ‘Empowerment Tech’ (ET) story. I’ve written much more about that here and here. ET includes personal data storage, Personal AI, digital wallets and digital ID.
But let’s look at this AI agent trust thing again.
Agents will need access to your rich personal data to make sense of the world, and to be useful. Consider it like giving someone access to your own home.
Would you let just anyone in? Different people would get different rules, right?
Strangers? Nope.
A plumber? Yes, but only to do One Specific Thing.
An estate agent? They only get to look at things and maybe take pictures. Don’t touch anything.
A cleaner? Now that becomes more personal. They get to move things around. But also follow rules about what not to do.
A friend? They won’t snoop around the bedroom, but they can help themselves to things in the fridge. Mi casa su casa.
The point is that each is a different relationship. Each has context. And each matters differently.
So should your ‘AI’ agent be afforded the same rights as an ‘estate’ agent (look but don’t touch)? Or is an agent more like an executive assistant (help me manage my diary and respond to emails)?
It feels like we’re going to need to understand and map the boundaries for this stuff, and quickly.
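To make that concrete, here’s a minimal sketch (in Python, with made-up scope names and resources purely for illustration) of what those ‘house rules’ might look like, if an agent only ever got the keys you explicitly handed over:

```python
# A minimal sketch of 'house rules' for an AI agent.
# Resource names and actions are illustrative, not a real agent API.
from dataclasses import dataclass, field


@dataclass
class AgentGrant:
    """One permission: which resource, and what the agent may do with it."""
    resource: str                                  # e.g. "calendar", "email"
    actions: set = field(default_factory=set)      # e.g. {"read"}, {"read", "draft"}


@dataclass
class AgentPolicy:
    """The full set of rules you give an agent - like keys to your house."""
    name: str
    grants: list = field(default_factory=list)

    def allows(self, resource: str, action: str) -> bool:
        return any(g.resource == resource and action in g.actions
                   for g in self.grants)


# The 'estate agent': look, but don't touch.
estate_agent = AgentPolicy("estate-agent", [
    AgentGrant("calendar", {"read"}),
    AgentGrant("email", {"read"}),
])

# The 'executive assistant': manage the diary, draft (but not send) email.
assistant = AgentPolicy("executive-assistant", [
    AgentGrant("calendar", {"read", "write"}),
    AgentGrant("email", {"read", "draft"}),
])

print(estate_agent.allows("calendar", "write"))   # False
print(assistant.allows("calendar", "write"))      # True
```

The point isn’t this particular code. It’s that ‘estate agent’ and ‘executive assistant’ are different policies with different boundaries, and it should be the individual, not the platform, who writes them.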
Personal or not
It comes down to this difficult word ‘Personal’. Here’s a simple test to see if an AI agent is really working on the customer side.
Ask: is it ‘PERSONAL’ or ‘PERSONALISED’?
I know. It sounds like I’m being pedantic and splitting hairs. But here’s the difference:
Personal means mine. My things. My places. My receipts. My needs. My context. My data.
Personal means intimate. Knowing a lot about me. Maybe even detecting things about me that I can’t see (or choose to ignore). Just like your best friends can spot your change in mood before you do.
Personal means trusted-by-default. Rachel Botsman describes trust as “a confident relationship with the unknown”. If something is ‘personal’, by definition it means already known. It’s already trusted. It’s implicit.
Whereas…
Personalised means someone else’s thing. A standard version of a product or service, but somehow tailored to me, defined by my preferences. (Note that those preferences are almost always inferred, rather than stated. The service being personalised looks at my past behaviour and purchases, rather than asking me directly.)
Personalised means other. The thing comes from someone else, somewhere else. We need to take steps to decide if it is trustworthy. Which data platform, which computer, which company?
Personalised means it’s also serving someone else. In whose interests is the service acting? On what basis is it making a recommendation? How does it make money? Does it have enough experience or expertise to make a solid judgement?
People joke that “The Cloud just means someone else’s computer.” It reminds us that these cloud platforms are elsewhere. But which jurisdiction, and which country? Does it matter?
Most of the fuss about data transfers outside the EU - lugging personal data over to the USA for processing, for example - has been a fight about such digital (mis)trust. And as Max Schrems famously pointed out, when the data is transferred somewhere else, we don’t really know how that data is being protected (or not).
So let’s look at Personal AI in the same light.
Are these tools really ‘personal’? Or are they really just ‘personalised’?
Are they yours or someone else’s?
If ‘cloud means someone else’s computer’, then perhaps ‘Personalised AI means someone else’s LLM.’
These new personalised AIs are going to become breathtakingly intimate with us. As Yuval Noah Harari has predicted since at least 2017, these AIs are going to know us better than we know ourselves.
So what’s stopping an AI from really being ‘mine’? Is it physics (it’s too slow, too complicated to run the AI on your local phone or laptop)? Or economics (it’s cheaper to run the AI centrally)?
Or is it something else, like the specific skills needed to make it work? Will it be too technical to run? Remember, they once said that about software applications too. Now just look at the App Stores.
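To make the ‘mine vs theirs’ distinction concrete, here’s a rough sketch of both sides: the same question asked of a model running on your own machine versus someone else’s hosted LLM. It assumes a local runtime (Ollama’s default endpoint) and an OpenAI-style hosted API; the endpoints and model names are illustrative, not endorsements.

```python
# A rough sketch of 'Personal AI' (runs on my device) versus
# 'Personalised AI' (runs on someone else's computer).
import os
import requests

PROMPT = "Given my calendar and budget, should I book this trip?"


def ask_personal_ai(prompt: str) -> str:
    # Personal: the model, the prompt and the context stay on your device.
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
    )
    return r.json()["response"]


def ask_personalised_ai(prompt: str) -> str:
    # Personalised: your prompt (and your context) leaves your device,
    # to be processed on someone else's computer, under someone else's terms.
    r = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-4o-mini",
              "messages": [{"role": "user", "content": prompt}]},
    )
    return r.json()["choices"][0]["message"]["content"]
```

Same question, two very different answers to “where did my context just go?”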
I believe deeply that we are going to need to differentiate ‘Personal AI’ (mine) from ‘Personalised AI’ (theirs).
Just like in the movie ‘Her’. At the end of the film, you find out that the AI is really personalised, not personal. (If you have seen the film, you’ll know what I mean).
It turns out that your Personal AI isn't really Personal. It’s personalised.
So will you trust it the same way?
When our agents really work for us
In law, a company’s directors owe fiduciary duties to the company and its shareholders, not to its customers.
And in most countries, corporations have long been recognised as having many of the same rights as ‘natural persons’. Meaning businesses can hold property, enter into contracts and so on. But that also means they can hold liability. Very often ‘limited liability’. Together, these principles mean it’s a company’s shareholders who are put first, not customers.
And so in a world of AI agents, will the agents really work for people? Or rather for shareholders? We’ve seen this movie before.
Yet, I’m optimistic.
Why?
Because there’s a new wave of thinking emerging around ‘Fiduciary AI’, where LLM and AI agent platforms have a fiduciary responsibility to the individual. Just like a doctor has a fiduciary responsibility to her patient. Or a lawyer to her client.
To act in, and represent, the individual’s best interests. Not the business's.
This new perspective - and legal framework - presents a huge breakthrough for Empowerment Tech.
Because we can finally address both dimensions of digital trust.
Trust-as-data-protection. Many regulators have already pointed out that AI tech is just another type of tech, and isn’t a special new category for data protection. We have existing data rules and regs to keep these tools in check. Many of you may have missed that Italy just fined OpenAI €15M - Million! - for alleged GDPR breaches. More like this to come, for sure.
Trust-as-motivation. This is the missing piece. Personal Fiduciary AI might give us a new way to be clear about motivations: in whose interests the AI agent systems are acting.
Sometimes the most appropriate AI recommendation should really be:
“No, don’t buy this product… you can’t afford it”
Or
“Yes, this is the cheapest and fastest one to arrive, but here’s what happens if you put this money towards your savings goals instead.”
If you want to dig in further, worth reading some of the latest posts by Consumer Reports, Dazza Greenwood and Iain Henderson.
For now, please make sure you are paying attention. Not just to the latest AI tech, but to both sides of digital trust. Not just to data protection, but also to the incentives.
Get ready folks, AI agents are about to arrive on the side of the customer.
But this time, it’s personal.
And that’s a wrap. Stay tuned for more Customer Futures soon, both here and over at LinkedIn.
And if you’re not yet signed up, why not subscribe: