Will you give your car keys to an AI agent?
Plus: WhatsApp just hit 3 billion users, but travel companies are still sending PDF quotes by email
Hi everyone, thanks for coming back to Customer Futures.
Each week I unpack the disruptive shifts around Empowerment Tech: AI Agents, digital wallets, Personal AI, and the future of the digital customer relationship.
If you haven’t yet signed up, why not subscribe:
Hi folks,
There’s an old joke in decentralised identity circles.
For the last few years, it’s felt like we’ve been working in ‘Dog Years’. Where one year of tech progress actually feels like seven.
But the recent AI cycles have broken that model entirely. I think we’re now in Mouse Years.
What we thought might take 18 months is now happening in weeks.
I don’t mean the speed of adoption. That will take time. Rather, it’s about how fast the market is reorganising underneath us.
Entire layers of digital commerce, digital identity, and customer interactions are all shifting at the same time.
Let me show you what I mean.
What matters isn’t just the number of recent announcements. It’s that every layer of the stack is moving at the same time.
In the last couple of weeks alone:
Stripe announced Machine Payments Protocol (MPP) - if you’re paying attention, it’s a big deal for the ‘Intention Economy’ READ
Bonus: Worth reading Simon Taylor’s breakdown of the announcement, why it matters, and where the proposed new intent protocol will show up READ
Mastercard and Google announced ‘Verifiable Intent’ - using Verifiable Credentials no less (this is a huge moment for the adoption of VCs, if they can scale it) READ
Walmart kicked out OpenAI, and then embedded its own AI assistant (“Sparky”) inside ChatGPT READ
Bonus: “Ask Amazon’s Rufus if the blender you're looking at is cheaper on Walmart.com” READ
Google’s Universal Commerce Protocol (UCP) got an upgrade, adding multi-item carts, real-time updates, and of course, identity READ
Apple’s Age Verification showed up in the wild with Claude AI READ
Anthropic surveyed 81,000 people about their expectations and uses of AI, the biggest consumer study of its kind - with important findings around privacy, personalisation and shopping READ
Bain carried out benchmark tests with multiple airlines and Online Travel Agents (OTAs), booking flights and travel via the AI Majors - with some surprising results about where the quality data really comes from READ
Adyen published a paper called “Agentic Commerce Has an Infrastructure Problem” - and expressly called out a lack of portable digital ID, and the need for a new trust framework built for human-initiated transactions READ
Meanwhile, every - and I mean every - digital identity company and wallet business is repositioning itself as an AI agent identity platform… giving agents their own first-class IDs, issuing credentials to software, not just people, building reputation systems for non-human actors, and reframing IAM for a world where actions are delegated…
….and breathe.
Can you remember a tech cycle as busy, as fast, and as deep as this? Nope.
Notice that I didn’t even mention the rapid developments around OpenClaw, Government Digital Wallets, Proof Of Personhood or Advertising in AI chats.
But folks are still missing something important.
Yes, payment networks are redesigning how transactions are initiated. Yes, tech platforms are embedding agents directly into customer journeys. And yes, identity providers are racing to define how agents are recognised and trusted.
But the mistake is to see these as isolated moves.
Zoom out, and you see it’s a land grab for the control layer of AI-driven transactions. But zoom out again, and you see they are actually some of the early pieces of a new operating model for digital customer relationships.
But here’s what also matters.
Even for those of us tracking this stuff up close, it’s fantastically difficult to keep up with this firehose of progress.
For most teams out there, balancing the business as usual stuff, delivery pressures, quarterly targets, customer operations, and endless tech shifts, it’s almost impossible to process all of this in real time.
Meaning that the risk isn’t missing the latest headline. Rather, it’s missing the pattern.
In any normal week, I would have unpacked each of these announcements and developments. Writing up another Customer Futures ‘Deep Dive’ for you. Looking at what’s happened, why it’s important, and what you need to be paying attention to.
But frankly, it’s all moving too fast to keep up, and you wouldn’t have the time to read it all anyway. (Ironically, you might have used Claude or ChatGPT to summarise my post for you.)
And these shifts aren’t showing signs of slowing down any time soon.
I don’t think folks are prepared for what’s coming. Or how fast.
So I’ll say it again. It’s never been more important to understand the future of being a digital customer.
So welcome back to the Customer Futures newsletter.
In this week’s edition:
With verifiable credentials, the edge cases ARE the use cases
My worry about adopting AI Agents? Approval Fatigue
Will you give your car keys to an AI agent?
… and much more
If you are as excited and exhausted as I am right now, then grab a double espresso, strap in, and Let’s Go.
With verifiable credentials, the edge cases ARE the use cases
When digital wallets and decentralised identity first gained attention, one of the many use cases discussed was digital birth certificates.
On paper, it’s hugely attractive and interesting. And speaks to the very definition of an ‘anchor’ digital ID. But in practice, it’s a terrible place to start.
Why?
Because it sits at the intersection of multiple hard problems:
Needing lots of other identity checks first (you need to prove many other things beforehand, like other facts and attributes about the person requesting the certificate, or the parents’ identities)
The low frequency use of the birth certificate - when was the last time you needed to use yours? Many years ago, if ever? There are always a ton of other use cases that get prioritised first…
The commercial model - who would pay and why?
The data sharing model - where is it stored, how do you access it, and what does sharing it look like?
But guess what. Once you have a government-related digital wallet, many of these things fall into place.
The network effects of identity data kick in. We can start verifying lots of attributes about people, and even each other, including parents and their children.
I’ve long said that the edge cases are the use cases.
Meaning when you solve for the ‘core capabilities’ - like a standard wallet, a standard credential format, and a standard flow to request data - then the barriers to adopting the ‘long tail’ of less obvious, low-frequency credentials fall away.
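To make that concrete, here’s a minimal sketch of what a credential and a standard request flow might look like, loosely following the W3C Verifiable Credentials data model. Every field value, DID and issuer name below is illustrative - this is not the actual NSW schema:

```python
# A hypothetical birth certificate expressed as a W3C-style
# Verifiable Credential. All identifiers and values are made up.
birth_certificate_vc = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "BirthCertificateCredential"],
    "issuer": "did:example:registry",       # hypothetical issuer DID
    "validFrom": "2026-01-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder-123",     # the person the claims are about
        "givenName": "Alex",
        "familyName": "Smith",
        "birthDate": "2009-05-14",
        "placeOfBirth": "Sydney, NSW",
    },
    # In a real credential this is a cryptographic proof from the issuer
    "proof": {"type": "DataIntegrityProof", "proofValue": "<signature>"},
}

def holds_claim(vc: dict, claim: str) -> bool:
    """The 'standard request flow' idea: a verifier asks for one
    attribute, rather than demanding the whole document."""
    return claim in vc["credentialSubject"]

print(holds_claim(birth_certificate_vc, "birthDate"))  # True
```

Once the wallet, the format, and that request flow are standard, adding the next low-frequency credential is mostly just defining a new `credentialSubject` schema - which is the whole point about the long tail.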
And when deployed safely, sensibly, and with citizen support, it’s now widely recognised that digital identity fast becomes Critical National Infrastructure. Where the benefits to the economy outweigh the short-term costs.
And that now applies to digital citizen credentials, too.
So I’m excited to see Australia’s NSW Digital Wallet team announce their new Digital Birth Certificates as verifiable credentials. Soon available to 16-18 year olds.
Here’s why it matters.
Because to some extent, much of the hard work to get us here is nearly behind us. The standardisation efforts. The format efforts. And the integration efforts with Government Digital ID.
Once this kind of core digital infrastructure is in place, the market dynamic changes. Standard wallets, standard credentials, and standard request flows remove friction across the system.
And the long tail of other low-frequency but potentially high-value credentials becomes viable. Like a birth certificate. And soon, many more.
The ‘edge cases’ stop being edge cases. They just become use cases.
And I can’t wait to see where the NSW Digital Wallet goes next.
My worry about adopting AI Agents? Approval Fatigue
I’m curious if anyone else has tried playing with TwinAI?
It’s like OpenClaw, but runs in the cloud and offers a whole bunch of Personal AI and agentic services. All from within a conversational interface, with very little setup or config.
It’s quite impressive.
The demo video by Tom Crawshaw is worth a watch. It’s part-inspiring, part exhausting. I recommend you check it out, even if it’s only the first 7 or 8 minutes.
It’s a real glimpse of what’s coming with Personal Agents.
Done that yet? Good.
Can you guess my main problem with it all? Yup, it’s the trust gap. Specifically, around giving AI agents your login and access details.
To get many things done, these AI agents like TwinAI are going to need access to your accounts, your data and stuff on the web. And just like humans, they’ll need to login, to authenticate, to prove they have permission.
Right now, that needs a ‘human in the loop’. Quite rightly. To provide a protected access code or password.
But that’s the very snag. It’s going to get pretty boring - and frustrating - approving every single one of these login steps.
We’re about to get “Approval Fatigue”.
Where humans become the bottleneck in the very systems designed to operate at machine speed.
Right now, we have two paths to design our AI agent approvals future:
Option 1: Impersonation
We give the AI agents our access details. The agent pretends to be us. But there are at least two problems with this approach:
Fraud: We very quickly trigger an explosion of injection attacks, bad actors stealing AI credentials, and a tsunami of downstream security issues. How many of these AI platforms have solid governance, privacy or security? Who knows where our data, including logins, ends up?
T&Cs: Most companies don’t allow you to give your account details to someone (or something) else for the same reason. We quickly fall foul of the service terms we signed up to, and can get kicked out. It’s one of the main reasons why Amazon recently blocked Perplexity AI Agents from using the Amazon.com marketplace website, citing ‘computer misuse’.
Option 2: Delegated Authority
Instead, we give AI agents their own digital ID and access tools.
It’s actually quite obvious when you spend time looking at it, but it’s harder than it looks. We barely have good ID solutions for people, let alone AI agents.
Rather than give these AI platforms your passwords and access details, we give them an identity token, some ‘cryptographic proof’, that shows that you have allowed this particular agent to do something. That the AI agent can act ‘on behalf of’ you.
This approach flips the risks completely. It means:
We can see what the AI agent did, when, how and where it showed up
The website or service can know that it’s an AI Agent, and that it’s working on behalf of me (this can enable smarter workflows, different security checks, and even a better ‘customer experience’)
We can now attach a durable reputation to these AI agents, helping them (and us) build up trust over time, just like we do with customers
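Here’s a toy sketch of that ‘on behalf of’ grant in Python. An HMAC over a shared secret stands in for a real public-key signature, and the agent IDs and scopes are made up - but it shows the shape: a scoped, expiring, verifiable grant instead of a password:

```python
import hashlib
import hmac
import json
import time

USER_SECRET = b"users-signing-key"  # stand-in for a real private key

def issue_delegation(agent_id: str, scope: list[str], ttl_seconds: int) -> dict:
    """Issue a token proving this agent may act 'on behalf of' the user,
    limited to a scope and an expiry - instead of handing over a password."""
    grant = {
        "agent": agent_id,
        "scope": scope,                          # what the agent may do
        "expires": int(time.time()) + ttl_seconds,
    }
    payload = json.dumps(grant, sort_keys=True).encode()
    grant["proof"] = hmac.new(USER_SECRET, payload, hashlib.sha256).hexdigest()
    return grant

def verify_delegation(token: dict, action: str) -> bool:
    """A service checks three things: the proof is genuine,
    the action is in scope, and the grant hasn't expired."""
    claimed = {k: v for k, v in token.items() if k != "proof"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(USER_SECRET, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, token["proof"])
        and action in token["scope"]
        and time.time() < token["expires"]
    )

token = issue_delegation("agent-twin-01", ["read:calendar", "book:flight"], 3600)
print(verify_delegation(token, "book:flight"))    # True: authorised and in scope
print(verify_delegation(token, "transfer:money")) # False: out of scope
```

The service verifies the grant itself, not the agent’s claim to be you - which is exactly what makes the audit trail and the revocation story possible.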
So far so good.
But it’s worth pointing out that this is all just a stopgap, of course. Getting AI agents to use the existing human channel because there isn’t another option for them.
Yet.
Soon, AI systems will get their own ‘agentic customer channel’, using MCP and similar protocols. (Side bar: if you haven’t yet, check out WebMCP, a new way for AI agents ‘to talk’ with your website… it’s fascinating stuff).
But back to approval fatigue. If you haven’t yet watched it, go check out the demo video above about TwinAI. It will open your eyes to what’s possible, and what’s coming soon with AI agentic services.
Remember, what these early tools reveal most is the biggest gap: between agent capability and human control.
I’m assuming the coming wave of approval fatigue will be much worse than today’s ‘consent fatigue’ with cookie popups. And it’s why giving AI agents their own identities and access tools will be critical to making them work at scale.
So it’s becoming urgent that we define common ways to identify AI agents, handle their intent and proof of delegation, and put in place common governance rules - so that our powerful new digital assistants can get on with the job without pinging us every 10 seconds for approval.
Will you give your car keys to an AI agent?
Digital identity for AI agents is going to be complicated.
Not because it’s technically impossible. But because AI agents won’t just work for people and organisations. They’ll also work for other AI agents, too.
Your main ‘coordination agent’ is very likely to handle different tasks by sub-delegating them to other AI agents, platforms and systems. My TwinAI example above already shows us how this will work.
So let’s unpack it a little. And why it’s going to be such a challenge.
We already have a model for human ‘delegated authority’. Where we pass responsibility and authority to someone else. Maybe that’s another employee, or a lawyer, a doctor, or an accountant.
But now imagine you give someone else your authority to act on your behalf… but they then pass that authority on to a lawyer, who passes it on to another lawyer, who passes it on to an accountant.
What happens then? How can each of them prove who was authorised to do what?
I love Andor Kesselman’s example of the car key:
“If you give your car key to someone, who then gives it to someone else, who gives it to another person, and someone crashes your car, you can’t prove who did it because every access to the car used your key.
“It also means you remain accountable. Without distinct agent identities, delegation chains aren’t traceable, auditable, or debuggable.”
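You can see why distinct agent identities fix this with a tiny sketch. Each hop in the chain names a delegator and a delegatee (all the names here are hypothetical), so the chain is both checkable and attributable - unlike one shared car key:

```python
from dataclasses import dataclass

@dataclass
class DelegationLink:
    delegator: str   # who handed over authority
    delegatee: str   # who received it
    scope: str       # what they were allowed to do

# The car-key scenario, but with a distinct identity per hop
# instead of one shared key passed around.
chain = [
    DelegationLink("owner", "agent-a", "drive"),
    DelegationLink("agent-a", "agent-b", "drive"),
    DelegationLink("agent-b", "agent-c", "drive"),
]

def chain_is_valid(chain: list[DelegationLink]) -> bool:
    """Each hop must be granted by the previous holder - no gaps allowed."""
    return all(
        chain[i].delegatee == chain[i + 1].delegator
        for i in range(len(chain) - 1)
    )

def who_was_driving(chain: list[DelegationLink]) -> str:
    """In a valid chain, the final delegatee is the accountable actor."""
    assert chain_is_valid(chain)
    return chain[-1].delegatee

print(who_was_driving(chain))  # 'agent-c' - the crash is now attributable
```

In a real system each link would also carry a signature from the delegator, so no one can forge a hop - but even this bare structure makes the chain traceable, auditable, and debuggable.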
Like I said. It’s going to be complicated.
Today’s identity infrastructure, especially the systems we rely on for login and authentication, just won’t work in these complicated cases. Frankly, as we saw above, they won’t work in the ‘simpler’ case of giving them your access details and passwords either.
The excellent Andor Kesselman has written a longer piece about it all. Breaking the problem down, and using familiar examples:
“Think about how you share a Google Doc. You can add specific people to an access list, or you can use “Anyone with the link”.
“The first option gives some control but requires manual approvals. The second scales effortlessly but gives no control over who the link gets passed to.
“This captures two fundamental access models:
Identity-based, which grants access based on who you are
Capability-based, which grants access based on what you possess.”
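Those two models sit neatly side by side in code. A rough sketch, with made-up users, tokens and resources:

```python
# Identity-based: access is decided by WHO YOU ARE (an access list).
ACL = {"report.doc": {"alice", "bob"}}

def identity_access(user: str, resource: str) -> bool:
    return user in ACL.get(resource, set())

# Capability-based: access is decided by WHAT YOU POSSESS
# (an unguessable token, like an 'anyone with the link' URL).
CAPABILITIES = {"cap-7f3a": "report.doc"}  # token -> resource

def capability_access(token: str, resource: str) -> bool:
    return CAPABILITIES.get(token) == resource

print(identity_access("alice", "report.doc"))       # True - she's on the list
print(identity_access("carol", "report.doc"))       # False - she isn't
print(capability_access("cap-7f3a", "report.doc"))  # True - anyone holding the token
```

The trade-off is exactly the Google Doc one: the access list needs manual approvals for every new party, while the capability scales effortlessly but travels with whoever holds it.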
Naturally, most folks are trying to retrofit yesterday’s delegation technology - which is mostly about approach 1, identity, and who you are - into the emerging new model for AI agents.
But it’s throwing up a whole bunch of problems. Here’s how the Decentralised Identity Foundation (DIF) sees it:
The human-in-the-loop
Password managers are already building AI agents so that they can automatically complete browser logins, sharing the user’s details without exposing them to the agent. Yes, it works. But it still requires human approval for each access.
What happens when there are hundreds of autonomous agents, many crossing security boundaries? Or if they are delegating permissions to sub-agents? How will approvals work at scale? And we’re back to Approval Fatigue.
Missing attribution
When different agents operate under the same account, they can become indistinguishable from each other.
You will be able to see that an MCP server acted on behalf of alice@company.com’s token, but you can’t identify which agent was compromised, or investigate how.
If there’s an issue, we can revoke their access. But if there’s only one token being passed around, won’t it touch multiple agents? So do you keep all the agents connected, or shut them all down?
Agent lifecycles
Agentic systems frequently spawn short-lived agents for specific tasks. They might exist for 15 minutes, then terminate.
But today’s identity tech (like OAuth) assumes these technical relationships persist over time. For short-lived AI agents, you either need to grant overly broad access, or create individual authorisations that need ever-more human approval. Neither approach is ideal. In fact, both options create more issues.
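One direction out of that bind (a sketch of the idea, not how OAuth works today) is authority that is born expiring: scoped to one task, dead when the task is done, with nothing left to revoke:

```python
import time

class EphemeralAgent:
    """Sketch of a task-scoped agent whose authority dies with it,
    instead of a long-lived grant that outlives the task."""

    def __init__(self, task: str, ttl_seconds: float):
        self.task = task
        self.expires = time.time() + ttl_seconds  # authority is born expiring

    def is_authorised(self, task: str) -> bool:
        # Only the named task, and only until expiry -
        # no broad, persistent grant to clean up later.
        return task == self.task and time.time() < self.expires

agent = EphemeralAgent("summarise-inbox", ttl_seconds=0.1)
print(agent.is_authorised("summarise-inbox"))  # True while alive
print(agent.is_authorised("delete-inbox"))     # False - out of scope
time.sleep(0.2)
print(agent.is_authorised("summarise-inbox"))  # False - authority has expired
```

A 15-minute agent gets a 15-minute, single-task grant. Revocation becomes the default state rather than an emergency procedure.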
So here’s the point (if it’s not already clear from all these posts).
We’re going to need new tech. New identity infrastructure. And new ways to delegate to AI agents.
DIF summarises it like this:
“It’s going to require a fundamental mental model shift in how we think about AI agent identity, and how we handle authorisation.”
Regular readers will know that I’ve been talking about the idea of OBO, or ‘on behalf of’, for AI Agents for a while now. The idea of portable, verifiable authority.
So we can all know who - and what - is approved to do what. And critically, as Andor and DIF point out, we can use OBO to describe the limits of those AI agent actions, as well as look back and audit them.
We can finally understand AI’s limits of what, where, when, how, and with whom.
The Decentralised Identity Foundation has set up a new working group on all this, called - you guessed it - Trusted Agents. Pleasingly, the same name as our new advisory firm!
If you want to get up to speed on all this stuff fast, drop us a line at Trusted Agents. We’re running a whole bunch of product and exec briefings at the moment, and can easily loop you in.
Would you give your car keys to an AI agent? Me neither.
But unless we fix our digital identity infrastructure for all of us - people, businesses and, yes, AI agents - we might have to.
So let’s hope they don’t crash the car.
DIF’S VIEW ON AI AGENT IDENTITY, TRUSTED AGENTS
OTHER THINGS
There are far too many interesting and important Customer Futures things to include this week.
So here are some more links to chew on:
News: Meta told to pay $375m for misleading users over child safety READ
More News: Meta and Google found liable in landmark social media addiction trial READ
Article: The 5 Rules of Agent Lifecycle Identity READ
Post: 10 things we learned building for the first generation of agentic commerce (Stripe) READ
Idea: WhatsApp just hit 3 billion users, but travel companies are still sending PDF quotes by email READ
Research: The evolution of AI Agent Identity Approaches (Centralised, Enterprise, and Distributed) READ
Opinion: You can have data sovereignty, or you can have cutting-edge AI… pick one READ
Post: Checkout.com is rebuilding the payments stack for the AI era READ
And that’s a wrap. Stay tuned for more Customer Futures soon, both here and over at LinkedIn.
And if you’re not yet signed up, why not subscribe:

