Customer Futures: Stories This Week
Hi everyone, thanks for coming back to the Customer Futures Newsletter. Each week I unpack the fundamental shifts around digital customer relationships, personal data and customer engagement.
Thanks for all the feedback on the new format. This time I’m splitting out STORIES THIS WEEK (this post, covering the latest market developments) from IDEAS (a weekly perspective, coming soon). Do let me know what you think!
If you’re reading this and haven’t yet signed up, why not join hundreds of executives, entrepreneurs, designers, regulators and other digital leaders by clicking below. To the regular subscribers, thank you.
STORIES THIS WEEK
Do The AI Genies Work For Us, Or For Them?
This post by John Battelle is a must-read. It’s an excellent breakdown of what I believe is going to happen around ‘personal AI’. John calls them Genies. For years I’ve said they’ll be a kind of new Digital Customer Tool. Yes, the language is clumsy, but it’s early in this new market, and early naming usually is. The ideas - and the implications - are what matter.
“If generative AI really does mark a moment similar to the dawn of the Internet, or hell, as one senior Big Tech executive recently quipped to me privately, “a moment as big as the introduction of the Gutenberg printing press” – well damn, folks, maybe we should think about taking advantage of that fact, instead of crossing our fingers and hoping the next Google or Apple turns out better than the last one?”
ChatGPT Has a Big Privacy Problem
Everyone calling out the ‘privacy’ issues with AI is bang on. But they’re only half right. Because this is also about the huge data protection challenges with training models. The difference between privacy and data protection matters, and many experts will point out that the GDPR text doesn’t actually mention ‘privacy’.
Existing data protection laws are going to apply to AI, just like any new technology. AR and VR will also soon be walking the data protection tightrope. Consent in AI interfaces, for example, is going to be sticky when we don’t know what data has been processed, about whom, where and how. Then consider that the training data may include “publicly available personal information”, as the ChatGPT privacy policy reminds us. Good luck with the right to be forgotten. Yes, Italian regulators have pulled the OpenAI plug early, but it looks like this is the thin end of the data protection wedge. More to come from other countries, and it’s going to get complicated.
Authorities are now scrambling to provide a coherent policy response: how to protect the vast amounts of citizen data now leaking into AI training sets, while trying not to dampen data innovation (and the implicit country-level competitiveness). The struggle, of course, is that many regulators are just trying to protect yesterday from last Tuesday.
How We Think About Copyright and AI
The EFF has an important take on how today’s legal frameworks, including copyright, apply to AI. Starting with generative AI: who owns the rights to what? Look deeper, and you’ll see that everything becomes about training data. Never mind new regulations, we’re going to need a new set of questions to ask.
If an AI is trained on my personal data (biometrics!?), who has the right to access the insights produced, and how? Soon we’ll be able to reconstruct images from our brains. So could someone force us to share what we’re thinking? What happens when this becomes training data? If I create my own AI application with my own personal data, but using OpenAI, who owns the copyright? What starts with generative AI art may well become precedent for future legal battles around the use of our personal data.
CBDCs Will Be a ‘Trojan Horse’ for Wallet Adoption and Privacy
Many are getting excited about Central Bank Digital Currencies (CBDCs). Narratives (can we make digital monetary policy work with a blockchain?!) and counter-narratives (what about privacy?!) are hotting up, including conspiracy theories about government control. Much more likely, CBDCs will become the midwife to citizen-scale digital wallets, and a new wave of demands for digital privacy. If I can move money around privately, why can’t I move my data the same way?
Apparently $5 trillion worth of CBDCs could be circulating in major economies across the world by 2030, with half tied to distributed ledger technology. Digital wallets, zero-knowledge proofs and other digital privacy tools are going to have their breakout moment once CBDCs arrive on the scene.
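To make the ‘prove it without revealing it’ idea concrete, here’s a minimal sketch of a Schnorr-style zero-knowledge proof, the kind of primitive behind many digital-wallet privacy tools. It’s a toy with deliberately tiny, insecure parameters, and the wallet/credential framing is my own illustration, not anything from a real CBDC design.

```python
# Toy Schnorr-style zero-knowledge proof: the prover shows it knows a secret x
# with y = g^x mod p, without ever revealing x. Parameters are tiny and NOT
# secure; real wallets would use standardised groups and audited libraries.
import secrets

p, q, g = 23, 11, 2            # g generates the order-q subgroup of Z_p*

x = secrets.randbelow(q)       # prover's secret (think: a wallet credential)
y = pow(g, x, p)               # public value registered with the verifier

# Prover: commit to a fresh random nonce
r = secrets.randbelow(q)
t = pow(g, r, p)

# Verifier: issue a random challenge
c = secrets.randbelow(q)

# Prover: respond, blending the nonce and the secret (x itself never leaves)
s = (r + c * x) % q

# Verifier: accept if g^s == t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("Proof accepted: the prover knows x, and x was never revealed")
```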
Meta using generative AI to create ads - and we’re back to coal mining
Meta is turning its hand to AI-based advertising. It’s only the first innings for the public and commercial applications of AI, so perhaps this was predictable. But this is like using the steam engine down the mine to replace the mules. When any new tech is first discovered, we put it in old places. Meta's ads AI is a donkey trying to move some coal. Perhaps it’s worse though, because it can only end one way: hyper-personalised ads, with rocket-fuelled persuasion techniques. Frightening.
We must be thoughtful here. Especially when we’ll soon be applying AI to our own data. Technology must do things for us and with us, not to us.
When AI collides with digital trust: Mayor starts legal bid over false bribery claim
ChatGPT has already been shown to make stuff up, like a fake sex offence against a teacher. So we’re now seeing the inevitable backlash, as someone impacted by a so-called ‘AI hallucination’ starts a legal bid over a false bribery claim.
We’re reaching a digital trust tipping point. It’s going to get harder to verify sources. The result? Fake audio and video everywhere, just like we saw this week in Venezuela, with fake pro-government news videos circulated widely. Brace yourselves, it’s going to get messy when AI collides with digital trust.
AI Steve Jobs meets AI Elon Musk
This is a super interesting example of what happens when AIs - unprompted and unscripted - interact with each other. Or more interestingly, what happens when those AIs are trained on real humans (Jobs and Musk), and do the talking so we can hear what’s going on.
Fun demo, sure. But as Aza Raskin and Tristan Harris say in the AI Dilemma video, “this is the year when all content-based verification breaks”. To be clear, this isn’t about worrying about fake product reviews anymore. This is about synthetic humans who look and sound like you turning up at your virtual bank branch and emptying your account. When we can’t tell real from fake, it’s game over. The customer future is going to need new digital trust plumbing.
OpenAI releases AI safety paper - and there’s a massive gap
OpenAI has just released a new paper on its approach to AI Safety. Protecting children, respecting privacy, improving factual accuracy. These are all critically important goals, but there's a huge gap. OpenAI, Bard and all the others can't predict how the AI will behave, nor how it will be used. But once it's deployed and we can see the impacts, it'll be too late.
This is the double-bind problem with AI (it's called the Collingridge Dilemma):
We can't easily predict the impact of a technology until it is extensively developed and widely used
We can't easily control or change the technology once it has already become entrenched
OpenAI says that they "…work to remove personal information from the training dataset where feasible… and minimize the possibility that our models might generate responses that include the personal information of private individuals." But there's much more to be done, especially when many of the big tech firms are laying off their AI ethics teams (looking at you Meta, MSFT, Google). I’d argue that it's even worse than that, because we can't even see the problem.
RESOURCES
Some other Interesting Things related to the future of digital customer relationships:
IDTech Product Market Map READ
eIDAS 2.0 & the EU Digital Identity LISTEN
Building SSI Products: A Guide for Product Managers WATCH
New South Wales Government appoints MATTR as technology partner for Digital ID READ
The AI backlash is here… It’s focused on the wrong things READ
25 ‘Generative AI Agents’ have an unprompted Valentine’s Day party READ
Thanks for reading this week’s edition! If you’ve enjoyed it, and want to learn more about the future of digital customer relationships, personal data and digital engagement, then why not subscribe: