Customer Futures Stories: What’s in a digital wallet, the Great AI Divide and spying on kids
Plus: Getting intimate with computers, a new fake reality and the advertising data crisis
Hi everyone, thanks for coming back to Customer Futures. Each week I unpack the ever-increasing and disruptive shifts around digital customer relationships, personal data and customer engagement.
This is the weekly STORIES edition. Covering the important, and sometimes less obvious, updates from the market. If you’re reading this and haven’t yet signed up, why not join hundreds of executives, entrepreneurs, designers, regulators and other digital leaders by clicking below. To the regular subscribers, thank you.
STORIES THIS WEEK
Another packed week in the world of being a digital customer. While everyone navigates the firehose of AI news, I’ve pulled together a few stories that will impact the future of digital customer engagement.
Things I thought might take years are happening in months…
In this week’s edition:
From fake news to fake reality
What’s in (and what’s out?) of a digital wallet?
The Great AI Divide
What if the issue with personal AI isn’t privacy - it’s intimacy?
… plus other links from around the web you don’t want to miss
Let’s go.
From fake news to fake reality
For a while now I’ve wanted to make a prediction. How long before:
A video clip goes viral of someone famous, perhaps a politician, doing or saying something fake… and it triggers some crisis or other. But then…
It takes hours, even days, to confirm whether it really happened.
I now believe this will occur by the end of 2023, for two reasons:
This week I watched a clip of CNBC anchor Brian Sullivan interviewing an AI version of himself. And it’s pretty good. What happens when he interviews a fake celebrity and we can’t tell?
Yesterday I was pointed to the latest from AI company Hour One. They provide on-demand animated humans who can, in a language of your choice (60 of them!), present whatever script you want.
Mind-blowing? Yes. Still imperfect and detectable? Yes.
But improvements will come. And those changes will compound weekly. So how long until it can be personalised - to someone’s face, to your face? To your voice?
Just point the AI to an existing video of you (which is already everywhere), and give it the words to speak.
It just happened to Michael Schumacher. So why not a president announcing a coup? When (not if) this happens, it will be a Big Deal.
Once more, for those at the back: In the age of AI we’re going to need a new way to trust what we’re seeing and hearing.
Why will this matter to digital customer relationships? Because computers aren’t people. And we’ll need to differentiate.
Because Fraud. Because Hackers. Because Trust.
When finally I’m able to point my Personal AI at a business to get things done, the company will need to know that it’s permissioned and legit, and not a fake bot.
My take: From now on, anything that’s not face-to-face should be Digitally Guilty Until Proven Innocent.
What if all digital objects - about people, places, events, things or organisations - could have a trusted digital watermark?
Could it become a simple way to check what or who we’re dealing with?
Tech people call this a ‘digital signature’. And we put these watermarks/signatures somewhere trusted, so we know that the watermark itself hasn’t been faked. It could be on your device, or even in your (own trusted) cloud.
For those paying attention - that’s precisely what digital wallets do. They hold digital signatures.
Assets. Passes. Proofs. Verifiable Credentials.
We’ll be able to watermark - cryptographically ‘sign’ - objects using a digital wallet.
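To make the idea concrete, here’s a minimal sketch of what ‘signing’ and verifying a piece of content could look like, using Python’s cryptography library. The key pair, the clip bytes and the wallet framing are illustrative assumptions, not any particular wallet’s implementation:

```python
# A minimal sketch of cryptographically 'watermarking' content.
# Assumes the `cryptography` library (pip install cryptography).
# Keys and content are illustrative, not a real wallet's API.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The wallet holds the private key; the public key is published somewhere trusted.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# 'Watermark' a clip by signing a digest of its bytes.
clip = b"...raw bytes of the video..."
signature = private_key.sign(hashlib.sha256(clip).digest())

# Anyone holding the public key can check the clip wasn't faked or altered.
try:
    public_key.verify(signature, hashlib.sha256(clip).digest())
    print("Signed by the claimed source, content intact")
except InvalidSignature:
    print("Signature check failed, treat as fake")
```

If even one byte of the clip changes, the digest changes and the check fails. That property is what makes a signature work as a watermark.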
It turns out that digital watermarks might just help us navigate this disruptive customer shift: from fake news, to fake content… to a new fake reality.
Put simply, we’re going to need digital wallets and other Customer Digital Tools to handle the inevitable explosion of fake digital everything.
Which leads us to ask…
What’s in (and what’s out?) of a digital wallet?
Some wallets will be for Web3 assets, like holding ‘crypto’ (Bitcoin and other tokens). Others will be built as Web5 wallets holding Verifiable Credentials. A number of central banks also plan to offer new digital wallets to hold their Central Bank Digital Currency (CBDC).
Each of these opportunities is absolutely massive, and book-worthy in its own right.
But critically, and to be trusted for certain transactions, many wallets will need to hold an ‘anchor’ identity credential. Something to prove that it’s you, and that it’s your digital wallet.
This could be sourced from a government document, or issued by another trusted organisation or group like your employer.
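For a sense of what such an ‘anchor’ credential might look like, here’s a sketch shaped after the W3C Verifiable Credentials data model, written as a Python dict. The issuer, DIDs and claim values are made up for illustration:

```python
# Sketch of an 'anchor' identity credential, shaped after the
# W3C Verifiable Credentials data model. Issuer, DIDs and values are invented.
anchor_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "IdentityCredential"],
    "issuer": "did:example:government-issuer",    # the trusted issuer
    "issuanceDate": "2023-05-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder-wallet",        # binds the claims to *your* wallet
        "givenName": "Alice",
        "birthDate": "1990-01-01",
    },
    "proof": {                                    # the issuer's digital signature
        "type": "Ed25519Signature2020",
        "verificationMethod": "did:example:government-issuer#key-1",
        "proofValue": "z3FXQ...",                 # signature value, shortened here
    },
}
```

The ‘proof’ block is the watermark from the story above: anyone can check it against the issuer’s published key, without ever calling the issuer.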
But there’s another, new digital identity game in town: a government-issued identity credential, that lives directly in a government-approved digital wallet.
The most high-profile example of this approach is the proposed EU digital identity wallet (to be made available to the region’s 450M citizens). The scope is still in flux, though this video is an excellent summary of what’s likely to be in it.
Exciting times. Yet the introduction of new tech always brings some level of push-back. Some level of concern.
A bit like the early opposition to the automobile (dangerous! costly! unregulated!). This week, a very large group of EU banks and payment providers started lobbying to remove payments from the scope of the EU ID wallet.
Why? They argue that identity wallets will introduce new cost burdens and liabilities for banks, and new compliance issues for merchants.
They may be right. But these adoption challenges will need to be weighed against the huge economic opportunities that promise to be unleashed.
The case for wallets will shape-shift over time.
Perhaps - and given the story above about a new fake reality - wallets may also prove to be a new digital necessity. Like wearing seatbelts: an inconvenience at first, but we get used to them because they make life demonstrably safer.
A more interesting question, then, becomes: will these digital wallets end up creating new walled gardens, potentially worse than today’s ‘web2’ platforms?
Enter stage left, the Open Wallet Foundation, formed precisely to foster digital wallet interoperability and standards. An attempt to collaboratively manage the new ‘wallet wars’ (or more precisely, the coming ‘ecosystem wars’).
Here’s a bonus thought: why stop with verifying data and content? From the consumer’s point of view, what if we could watermark our digital relationships too?
We could not only verify the data we receive, but also the trusted connections we receive it from. We could instantly know who is calling, for example.
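As a thought experiment, here’s a sketch of how a wallet might verify ‘who is calling’: when a connection is first made, the wallet stores the contact’s public key; on each call, the caller must sign a fresh challenge with the matching private key. The names and wallet logic here are hypothetical:

```python
# Sketch: verifying a 'watermarked' connection via challenge-response.
# Hypothetical wallet logic; key exchange and transport are hand-waved.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# When 'My Bank' first connected, the wallet stored its public key.
# Here we simulate that with a freshly generated key pair.
bank_key = Ed25519PrivateKey.generate()
trusted_connections = {"My Bank": bank_key.public_key()}

def verify_caller(claimed_name: str) -> bool:
    """Accept a call only if the caller can sign our one-time challenge
    with the key we stored for that trusted connection."""
    challenge = os.urandom(32)
    # In reality the challenge travels over the wire; here the 'caller'
    # (the bank) signs it directly.
    response = bank_key.sign(challenge)
    try:
        trusted_connections[claimed_name].verify(response, challenge)
        return True
    except (KeyError, InvalidSignature):
        return False

print(verify_caller("My Bank"))  # True: a verified, trusted connection
```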
This idea of digital watermarking and verified connections is why I recently argued that digital wallets will become the new ‘customer account’. And why they will soon open up a new 5th channel for customer engagement.
Digital wallets (and their customer engagement sibling ‘smart agents’) are going to move from being a nerdy (crypto) sideshow, to becoming centre stage of the new digital economy.
What’s in and out of these personal digital wallets is going to matter a great deal.
The Great AI Divide
I often talk about the ugly underbelly of the ‘digital divide’: the ‘data divide’. Separating the data-haves from the data-have-nots.
Dividing those with vast digital footprints and a world of digital convenience, seamless experiences and lower prices… from those without digital devices or rich digital histories. Endlessly locked out from critical parts of the economy like democratic representation, credit scoring, lending or even life-saving digital health interventions.
It’s a significant problem, and getting worse. Worth following the ever-excellent Ada Lovelace Institute on this, which did some important work on the Data Divide during the pandemic.
This all came up in a recent interview with Ex-Meta & Ex-Google privacy exec Rob Leathern. He explores how this will all impact the effectiveness of AI, and asks important questions about AI ethics and equity of access.
Because fast forward and we’ll soon be wrestling with the AI divide: between the AI-haves and AI-have-nots.
This will be an uncomfortable split between those with AI-powered devices and enough money to pay for smart personal agents… and a new AI underclass, humans without representation, without an (AI) voice.
And to make matters worse, we already know that generative AI is biased. Against certain cultures, religions and skin colours. (It’s worth checking out that link for a real wake-up call).
Put these together and you’ll see that our Personal AIs will have both an equity problem and a bias problem. By being clear-eyed about these issues we can begin to fight back. To course correct.
But when the tech moves faster than our attention can keep up, it will be like fighting with one hand tied behind our backs.
What if the issue with personal AI isn’t privacy - it’s intimacy?
Now is a good time to go back and watch the 2013 movie ‘Her’. Set in the future, it tells the story of a man who develops a relationship with Samantha, an AI virtual assistant.
No spoilers here, but it raises interesting questions about what ‘intimacy’ with a computer means. And how computers can - and will - shape our emotions.
If you want a peek at this future, watch this recent short video on how we’re well on our way to that virtual assistant reality.
Once computers become our trusted assistants, with recommendations and helpful suggestions, we’ll soon trust them unthinkingly.
Just like we trust the directions we get from Google Maps today.
Those recommendations will only become more valuable when they account for our moods. How we are feeling. And how that changes what we want and why.
Personal AIs will become part of our emotional lives faster than we think.
We’ll soon care as much about ‘AI intimacy’ as we care about AI privacy.
Social networks have known this for a long time (and Apple too). I’ll bet you’ve already experienced the ‘here’s a showreel of old photos’ feature. It’s sometimes lovely. Sometimes sad.
But without thinking about it, we are being steered, easily and invisibly, into different emotional states.
Behavioural economics (and ‘nudge theory’) has become an important part of designing products and systems: how people make decisions, and how humans are predictably irrational.
What happens when you put these two things together: AI intimacy + emotional steering?
What happens when a personal AI becomes capable of eliciting a target emotion… and what if that target emotion is fear, uncertainty, sadness, or feelings of isolation?
Transparency of our AI tools - especially those closest to us - has never been more important.
OTHER THINGS
As ever, there are far too many interesting and important Customer Future things to track. Here are some of the things I’m following:
The online advertising data crisis by Johnny Ryan WATCH
Will we rent our human identities to AI artists? READ
Dame Wendy Hall on the future of data control, AI ethics and protecting the digital planet WATCH
How To Make a Child-Safe TikTok… have you tried not spying on kids? READ
When your digital society’s entire basis is built on identity, you will not have privacy READ ($)
Cheqd releases a new way to establish a trusted, verifiable, digital reputation called ‘Creds’ READ
Thanks for reading this week’s edition! If you’ve enjoyed it, and want to learn more about the future of digital customer relationships, personal data and digital engagement, then why not subscribe: