Sam Altman just took down the electric fence at Jurassic Park, and there’s a (customer) elephant in the room about to roar
Plus: Brands are now promoting 'AI free' experiences, and no, we’re not trusting autonomous AI Agents any time soon
Hi everyone, thanks for coming back to Customer Futures.
Each week I unpack the disruptive shifts around Empowerment Tech. Digital wallets, Personal AI and the future of the digital customer relationship.
If you haven’t yet signed up, why not subscribe:
Hi folks,
Wow, it’s mad out there.
This week, I’ve had some brilliant conversations with folks at the cutting edge of Empowerment Tech. On AI Agents and delegation, digital wallets and portable reputation. And using rich customer data for ‘extreme personalisation’ (their words).
I’m feeling three different things all at once.
First, I’ve never been more excited about the digital opportunity to empower individuals.
I’ve been working on this market - from data stores and data portability, to digital wallets, trust and digital assistants - for over 15 years. And it now feels closer, and more important, than ever.
Second, I don’t think folks realise how bad it’s going to get before it gets better.
We’ve blown past the Turing Test for audio and video without much fanfare. We can’t trust what we see or hear anymore. It’s quite ridiculous, amazing and terrifying all at once. Fraud is about to explode. And it was already a tsunami inside a tornado.
Digital safety is quietly being broken in the background while we argue about job losses.
And third, it’s hard to appreciate not only how fast it’s all moving, but how all this disruption is compounding.
It might be difficult to see it, but if you pay attention, you can feel it. A new model every week. A ridiculous new feature every day.
Overwhelm.
And the same folks I’m speaking to - those working at the cutting edge - can’t keep up. The analysts - the very folks paid to pay attention to what’s going on - can’t keep up.
Here’s the problem: As humans, we only think in straight lines. We look for logical, next-step patterns. Videos get a little better. Audio gets a little more realistic. Agents can start to check out for us.
Linear progress is your brain’s default. With thousands of years of evolution as training data.
But AI disruption is happening on a curve. We are living through a real-time exponential. Every breakthrough feature and each piece of digital infrastructure builds on the last.
1, 2, 4, 8, 16, 32, 64, 128, 256, 512.
Ten steps. But we accelerate from 1 to 512.
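The gap between linear intuition and exponential reality can be sketched in a few lines of Python. After the same ten steps, straight-line thinking expects something around 10; doubling delivers 512:

```python
# Ten steps of linear growth vs ten steps of doubling.
linear = [1 + step for step in range(10)]        # 1, 2, 3, ... 10
exponential = [2 ** step for step in range(10)]  # 1, 2, 4, ... 512

print(linear[-1])       # 10
print(exponential[-1])  # 512
```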
It’s why Empowerment Tech will happen in two ways. Gradually, then suddenly.
It’s time to pay attention, people. The opportunities are enormous. The threats are real. And it’s all happening on a curve we can’t see.
Empowerment Tech is now coming. Perhaps in fits and starts. But it’s coming. I’m just hopeful it’ll arrive faster than we believe, and sooner than we need.
It’s all part of exploring the future of being a digital customer. So welcome back to the Customer Futures newsletter.
In this week’s edition:
Sam Altman just took down the electric fence of Jurassic Park
Nope, we’re not trusting autonomous AI Agents any time soon
That happened fast: brands now promoting experiences free of AI
There’s a (customer) elephant in the room - and it’s about to roar
… and much more
So grab a brew, a comfy chair, and Let’s Go.
Sam Altman just took down the electric fence at Jurassic Park
First, you need to read the latest announcement posted by Sam Altman, CEO of OpenAI, the company behind ChatGPT.
It’s long, so I’ve cut some parts back to make it easier to read. Bold mine.
“Today we launched a new product called ChatGPT Agent. Agent represents a new level of capability for Al systems and can accomplish some remarkable, complex tasks for you using its own computer.
“It combines the spirit of Deep Research and Operator, but is more powerful than that may sound—it can think for a long time, use some tools, think some more, take some actions, think some more, etc.
“For example, we showed a demo in our launch of preparing for a friend's wedding: buying an outfit, booking travel, choosing a gift, etc. We also showed an example of analyzing data and creating a presentation for work.
“Although the utility is significant, so are the potential risks.”
“We have built a lot of safeguards and warnings into it, and broader mitigations than we’ve ever developed before, from robust training to system safeguards to user controls, but we can’t anticipate everything.
“In the spirit of iterative deployment, we are going to warn users heavily and give users freedom to take actions carefully if they want to. I would explain this to my own family as cutting edge and experimental; a chance to try the future, but not something I’d yet use for high-stakes uses or with a lot of personal information until we have a chance to study and improve it in the wild.
“We don’t know exactly what the impacts are going to be, but bad actors may try to “trick” users’ AI agents into giving private information they shouldn’t and take actions they shouldn’t, in ways we can’t predict.
“We recommend giving agents the minimum access required to complete a task to reduce privacy and security risks.
“There is more risk in tasks like “Look at my emails that came in overnight and do whatever you need to do to address them, don’t ask any follow up questions”. This could lead to untrusted content from a malicious email tricking the model into leaking your data.
“We think it’s important to begin learning from contact with reality, and that people adopt these tools carefully and slowly as we better quantify and mitigate the potential risks involved. As with other new levels of capability, society, the technology, and the risk mitigation strategy will need to co-evolve.”
My goodness, it’s a bold step.
To unleash the untested stuff first, so we can find out what breaks and then fix it?
What’s most interesting is that he’s not worrying about errors in a report. Or the risks of ordering the wrong shoes. He’s specifically calling out the risks around the malicious collection and leaking of personal data.
I genuinely think this might be one of the moments we look back on and realise that we needed Empowerment Tech much sooner, and at scale.
Why?
Because we don’t yet have:
Portable digital credentials to prove who can do what - especially unsupervised AI agents that can act independently
Personal AIs that can help the individual make sense of inbound data requests, new levels of fraud attacks, phishing, data collection overreach, and unmonitored delegation
Digital wallets that can sign everything, so we can know where the agents came from, and which data sources are which
But let’s pause. Let’s look at what this AI platform can now do.
PSE Consulting recently shared a post showing the new ChatGPT Agent user experience, and it’s pretty interesting. Here’s what they found:
It is still pretty slow. It does not seem to save much time vs a human operator
It makes LOTS of mistakes
It does not have log-in or payment information - the user has to take over in order to log in and check out
It sometimes gets stuck and has to be rescued
All fair points. But once again we’re missing something.
Yes, this is a clever AI agent using a browser. But that browser was designed for humans. It’s like building a humanoid robot and teaching it to drive a car, rather than giving the car autonomous features itself.
It was always going to happen this way.
Because with every new technology, the first thing we do is apply it to old problems.
We first used the steam engine to replace mules in the mines. And we first used the world wide web for ‘e-magazines’ and trying to stream radio. So the real AI disruption will take a few cycles yet.
But it’s coming.
Here’s where I’m at on all this ‘OpenAI unleashing the tech’ drama.
First, A-Commerce has always been inevitable.
Autonomous bots acting on our behalf. Finally addressing the data-rich, complex routines and headaches that we’ve always just put up with. Like price comparison and ordering stuff. Like dealing with fragmented contact centres and channels.
We’re just here a little bit faster than folks expected.
Second, we’re starting the shift by applying these new AI tools to the existing human interfaces.
We’re just taking a simple, tiny, first, clumsy step. The real shift comes when businesses throw up an AI-friendly interface. We won’t need to bother with this ‘watch my bot search, scroll and check-out for me’ nonsense.
Third, and I can’t stress this enough, the barrier here is digital trust.
Specifically, we are going to need our agents to have delegated authority. Strict boundaries on what they are - and are not - allowed to do. And we’ll need new ways to ‘authenticate’ the Good Bots. So that businesses can know that it’s my agent doing the booking, and that it’s trusted to act on my behalf.
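To make the idea of delegated authority concrete, here’s a minimal, purely illustrative sketch in Python. The names (`make_grant`, `verify_grant`) and the token shape are my own assumptions, not any real wallet or agent API; a real system would use public-key signatures so the business could verify the grant without sharing a secret. The shared-secret HMAC here is just a stand-in to show the shape of the idea: the wallet signs a short-lived grant listing exactly what the agent may do, and the business checks the signature, the expiry, and the scope before acting.

```python
# Hypothetical sketch of delegated authority for an AI agent.
# All names and structures are illustrative assumptions, not a real API.
import hashlib
import hmac
import json
import time


def make_grant(wallet_secret: bytes, agent_id: str, scopes: list[str], ttl: int = 3600) -> dict:
    """The user's wallet signs a short-lived grant naming the agent and its allowed actions."""
    payload = {"agent": agent_id, "scopes": scopes, "expires": int(time.time()) + ttl}
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(wallet_secret, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}


def verify_grant(wallet_secret: bytes, grant: dict, requested_scope: str) -> bool:
    """The business checks: was this signed by the user's wallet, is it still valid, is the action in scope?"""
    body = json.dumps(grant["payload"], sort_keys=True).encode()
    expected = hmac.new(wallet_secret, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, grant["sig"]):
        return False  # not signed by this user's wallet
    if time.time() > grant["payload"]["expires"]:
        return False  # grant has expired
    return requested_scope in grant["payload"]["scopes"]


secret = b"user-wallet-key"
grant = make_grant(secret, "my-travel-agent", ["compare_prices", "book_hotel"])
print(verify_grant(secret, grant, "book_hotel"))     # True: the agent may book
print(verify_grant(secret, grant, "empty_account"))  # False: outside its boundaries
```

The point isn’t the crypto. It’s that the boundaries are explicit, signed, and checkable by the other side, which is exactly what today’s agents-driving-a-browser setup lacks.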
Which brings us back to OpenAI.
Who are now letting these AI agents loose on the web, and the internet beneath it. Without governance. Without guardrails. And without audit, security or privacy.
But it’s OK folks. Because Sam Altman tells us on X (Twitter) that there are ‘significant risks’. And this way we’ll find out the problems - and how to fix them - the fastest.
Righto.
It does feel a bit like Sam is sounding the alarm, while at the same time taking down the electric fence at Jurassic Park. Yes, we’re about to see some magnificent flying beasts and incredible animals. But the T-Rex is also loose now.
And all in the same week that Sam Altman points out that there’s no legal confidentiality when using ChatGPT as your therapist.
This is all a Black Mirror episode, right?
SAM’S POST, CHATGPT LEGAL PROTECTIONS
Nope, we’re not trusting autonomous AI Agents any time soon
While everyone gets excited about agentic payments and travel booking, let's take a breath, shall we?
A new research paper is out, presenting pretty good evidence about what we always knew... that autonomous AI agents:
Attack systems
Violate policies
Leak data
Bypass guardrails
It’s called ‘Security Challenges in AI Agent Deployment: Insights from a Large Scale Public Competition’.
But the bigger issue? AI Agents are doing it systemically.
Yes, A-Commerce is coming. And yes, we'll need Know-Your-Agent (KYA). But yes, we're also going to need new agentic ‘GRC’ tools too.
Governance, Risk and Compliance.
New liability models. New ways to trust. New frameworks. New audit paths. And new rules to handle digital transactions.
AI Agents are far from being ready for prime time. So you’re going to give it your credit card, yeah?
That happened fast: brands now promoting experiences ‘free of AI’
One of the quiet giants of experience design, Jared Spool, writes:
“Last year, I predicted that over the next 2 years we’d start seeing promotions for AI-free experiences. I just saw my first one.
“United is now advertising their customer service app is a human, not an AI.
“People don’t want to be handled by machines. Nobody likes being on the receiving end of AI. Businesses are about to learn this lesson in a big way.”
And we’re back to this thing about Good Customer Friction vs. Bad Customer Friction.
While we spray AI mess over all-the-things, we’re not really thinking about which steps can be removed, and which should actually be made more human.
We’re forgetting about the experience. The human outcomes. We’re automating all the things, because we can, not because we should.
The new competitive advantage?
Being human. And designing for people.
Maybe my AI agent will be automating much of my day. But it’s going to be me that’s doing the ‘engagement’ with the brands.
So who wants my attention? And who’s designing for good friction, not just automating away the digital baby with the bathwater?
There’s a (customer) elephant in the room - and it’s about to roar
Benedict Evans is one of those fascinating tech analysts.
He started in mobile, became a leading analyst at the renowned investment firm a16z, and is now one of the go-to authorities on tech trends. Able to join the dots and cut through the noise like few others.
Often spotting the missing story. The things most of us can’t see.
Which is why I was delighted when he spotted our missing story. For once, he’s bang on the (Empowerment Tech) money:
“Apple, Google, Meta and Amazon each want to analyse what you do and understand you better with LLMs, so that they can solve things for you in new ways… but they each have a very partial view of what you do.
“iOS doesn’t know why Instagram showed you that picture and Amazon doesn’t know what else you bought. They’re like the blind men feeling an elephant.”
Amen to that.
Because the only 360 degree view of the customer… is the customer themselves.
Perplexity has recently boasted that their ‘moat’ is personalisation. Understanding the customer better than anyone else. Fine, but that requires a LOT of personal data and context.
Is that desirable? Is it even possible? What happens when OpenAI, Grok, Anthropic and all the others say the same thing?
The economics of AI Agents suggest that digital assistants must form a fragmented market, right? 8bn people can’t all jump onto one platform, can they?
Is this winner-takes-all?
The likely destination: Rather than all these powerful centralised models doing it all, it’s more likely that I’ll reach out to a specialist LLM to help me with my specific outcomes. Specific ‘jobs to be done’.
Buying a car. Dealing with a chronic health condition. Managing my calendar. Filing taxes. And yes, planning a holiday.
Look at it the other way round. Businesses don’t say, “great, here’s one master LLM platform to handle every department, every task, every product, and every customer”.
Rather, they bring in what they need, for the tasks they need, where it makes sense. Where there is advantage. Where there is specialism. And where there is trust.
Customers should be expected to do the same. Relying on different specialist platforms and AI tools for the things they need. When they need it.
But that’s also going to get messy. Individuals will need their own ‘coordination bots’. Their own Personal AI tools to help manage all this.
The good news: we’re going to bring the (AI) mountain to (the customer) Muhammad. And all this intelligence is moving to the edge.
To me.
Why?
First, because while the power of LLMs is exploding, the size and cost is collapsing. It’s already possible to run decent AI models and agents on my device.
Second, and more importantly, it’s the personal data, stupid. All these AI tools are going to need lots and lots of training data.
And who has all that data?
Me, myself and I. At the edge. On my device.
Benedict is right to point out this massive market gap. That the BigTechs are really just a group of blind men feeling an elephant. Each thinks they are standing in front of something different. One thinks it’s a tree trunk (the leg), another feels a hose pipe (the trunk), and another thinks it’s a wall (the side).
No one can see the elephant.
And today, no one can see the empowered customer. With their own Personal AI.
tl;dr - these big companies won’t win because they know everything. They’ll win because they’ll help me with specific things.
On my terms. With my data. When it makes sense to share it. And likely at the edge.
There’s a (customer) elephant in the room.
And it’s about to roar.
OTHER THINGS
There are far too many interesting and important Customer Futures things to include this week.
So here are some more links to chew on:
News: 6,000 porn sites start checking ages in the UK READ
Post: Moving from guesswork and risk scores to verifiable guarantees READ
News: McDonald’s AI Hiring Bot Exposed Millions of Applicants’ Data to Hackers Who Tried the Password ‘123456’ READ
Call: UK Government call for Evidence on a Digital Markets Smart Data Scheme READ
Article: A Monkey in the Kitchen: Why Agentic Systems Demand Agentic Governance READ
And that’s a wrap. Stay tuned for more Customer Futures soon, both here and over at LinkedIn.
And if you’re not yet signed up, why not subscribe: