Customer Futures Stories: Google reimagines search, hacking your voice and privacy on the chopping block
Plus: You won't need to read T&Cs, why digital wallets will be 'more than money' and can we trust AI
Hi everyone, thanks for coming back to Customer Futures. Each week I unpack the ever-increasing and disruptive shifts around Personal AI, digital customer relationships and customer engagement.
This is the weekly STORIES edition. Covering the important, and sometimes less obvious, updates from the market. If you’re reading this and haven’t yet signed up, why not join the growing number of senior executives, entrepreneurs, designers, regulators and other digital leaders by clicking below. To the regular subscribers, thank you.
STORIES THIS WEEK
The impacts of Personal AI will be felt everywhere. From ‘simple’ things like reading terms and conditions to more complex things like online search, fraud, payments, travel and health.
However, as exciting as AI gets, none of it will reach its full potential, or even be trusted, until we can solve the thorny issues around AI ethics and regulation, digital privacy, digital identity and the sharing of personal data.
Or to be honest, even proving we’re dealing with a verified human, let alone one that’s the right age.
Some of this will be about regulation. Some will be about shifting cultural norms, as happened when the personal camera was introduced into society. But much will be about earning ‘digital trust’. Or rather becoming verifiably trustworthy.
It’s an increasingly important theme in the future of being a digital customer. From new business models and ‘deep tech’ to new digital customer experiences, it’s all up for grabs as AI accelerates us into a new digital age of faster, smarter, cheaper and ever-more connected.
But better? More valuable? More personal?
It’s all moving quickly. But boy is it interesting. Welcome back to the Customer Futures newsletter.
In this week’s edition:
Can we trust AI?
Online age verification is coming, and privacy is on the chopping block
Google reimagines search - but what does it mean?
On Wallets & Wallies
Meta has a big problem - and it’s not the $1.3 Billion fine for violating E.U. data privacy rules
The best way to govern AI is to emulate it
What if our AIs can read T&Cs for us?
Digital Euro seeks ID scheme
… plus other links about the future of digital customers you don’t want to miss
Let’s go.
Can we trust AI?
From the relentlessly sharp Nate Kinch on trusting AI:
“You cannot control trust. Trust is something someone else does, regardless of the nuance and specifics of the cause and effect. What you can do is operate with the very clear intent to design your algorithms, and the organisations that create them, to be verifiably trustworthy:
They intentionally work in the public’s best interest
There’s an integrity to every deliberation, conversation, trade-off, action and reaction
They are open for interrogation and critical inquiry
They are developed based on meaningful engagement processes that actively include a representative sample of directly and indirectly impacted parties in the design, training and deployment process
The meaningful engagement process you enact helps enshrine justice. It leads to what we might consider fairness
There’s a fundamental respect for all life within the biosphere
The organisation and algorithm actually delivers the value it proposes consistently (competence)
All of the qualities interact with one another in various ways. They are greater than the sum of their parts.” READ
Online age verification is coming, and privacy is on the chopping block
Age verification has become a poster child for digital verifiable credentials. And specifically, for the idea of ‘zero-knowledge proofs’. Because businesses often don’t really need my date of birth. They don’t even need my age. They just need to know if I was born before, or after, a certain date.
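That ‘born before a certain date’ idea boils down to a single yes/no predicate. A minimal sketch of that predicate (an illustration of the data-minimisation principle only: a real zero-knowledge proof lets a wallet prove the answer cryptographically without ever revealing the date of birth, whereas this toy function still sees it):

```python
from datetime import date

def is_over_18(dob: date, today: date) -> bool:
    """The only question the business needs answered: is this
    person 18 or older? The date of birth itself never needs
    to be disclosed to, or stored by, the business."""
    # Count whole years, correcting for whether this year's
    # birthday has happened yet (also safe for 29 February).
    years = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return years >= 18

assert is_over_18(date(2000, 1, 1), date(2024, 6, 1))
assert not is_over_18(date(2010, 1, 1), date(2024, 6, 1))
```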
But new ‘age estimation’ technologies push this further. Could a business just scan my face and estimate my age range? If they then delete the image, it’s totally private, right?
There are many, many questions here about digital privacy, about social norms, and about how we deal with adult-only spaces online. As a raft of new regulations sweep the digital economy, age verification may just become the lightning rod for those difficult conversations. As The Verge describes it:
“A spate of child safety rules might make going online in a few years very different, and not just for kids. In 2022 and 2023, numerous states and countries are exploring age verification requirements for the internet, either as an implicit demand or a formal rule. The laws are positioned as a way to protect children on a dangerous internet. But the price of that protection might be high: nothing less than the privacy of, well, everyone.” READ
Google reimagines search - but what does it mean?
Nobody really knows what will happen to online search as it collides with AI. So Newspage asked a group of digital marketers and AI experts how they see things turning out. And what it means for digital marketing. Much is predictable, but the responses contain some very interesting points. As one puts it:
“The irony of AI is that it could make search more human. For too long, search has been optimised for bots and not humans. We had to sift through long keyword-rich content at the top of Google when all we really wanted was short, snappy and relevant answers. AI is about relevance and relevance is about knowing the human condition and reading between the lines.
Complicated search algorithms are going to very quickly look out of date compared to AI, and may mean that companies now need to focus more on the quality and relevance of their product rather than on advertising spend.
The more AI knows what we like, the more it will sift out what we don’t like. This could be fantastic news for genuinely useful products and services with a good price point but not good news for products and services playing the numbers game and throwing money at PPC and hoping something will stick. Let’s hope this will give us more of our most valuable resource, time. And less digital noise.”
The critical question here, of course, is ‘How will the AI know what I like?’. Enter the new category of ‘Personal AI’, and the inevitable land-grab “to know customers better than they know themselves”. NEWSPAGE RESPONSES, PERSONAL AI
On Wallets & Wallies
The brilliant “Identity is the New Money” Dave Birch agrees that digital wallets will become way more than crypto tokens and digital money.
As I’ve been describing in this newsletter for some while, digital wallets will also become peer-to-peer tools for identity. And ultimately the main way we’ll communicate with businesses and each other. Because they will be more portable, more private, more secure, and more useful than other digital channels.
Here’s a snippet of Dave’s latest excellent take on wallets:
“My physical wallet is deaf and dumb. But my digital wallet can communicate locally with the software agents that will actually be making most payments (because payments are either too boring or too complicated for people to want to get involved) and remotely with other wallets.
Why would my wallet want to communicate with another wallet? Well, to transfer central bank digital currency (CBDC) offline is a future use case, but even before then think about how payments, identity and credentials will need to work in practice: back again to my point in previous articles of what the “ceremony” is that consumers will accept and expect?
To put it simply: How will I check that the verifiable credentials in your digital wallet are valid? Using my digital wallet, of course! Yes future digital wallets will exchange money, but long before CBDC reaches population scale, digital wallets will be exchanging identity.” READ
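The issue-then-verify ‘ceremony’ Dave describes can be sketched in miniature. A loud caveat: real verifiable credentials use asymmetric signatures (an issuer signs with a private key, and any wallet verifies against the issuer’s public key, typically resolved via a decentralised identifier), not the shared-secret HMAC used here. This toy sketch, with a made-up issuer key, only illustrates the tamper-evidence at the heart of wallet-to-wallet exchange:

```python
import hashlib
import hmac
import json

# Stand-in secret; a real issuer would sign with a private key instead.
ISSUER_KEY = b"demo-issuer-key"

def issue_credential(claim: dict) -> dict:
    """Issuer's wallet: attach a tamper-evident signature to a claim."""
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """Verifier's wallet: recompute the signature and compare."""
    payload = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

cred = issue_credential({"holder": "alice", "over_18": True})
assert verify_credential(cred)

# Changing the claim without re-signing it breaks verification.
tampered = {"claim": {"holder": "alice", "over_18": False}, "sig": cred["sig"]}
assert not verify_credential(tampered)
```

The point of the ceremony is the last four lines: my wallet doesn’t have to trust your wallet, only the issuer’s signature on what your wallet presents.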
Meta has a big problem - and it’s not the $1.3 Billion fine for violating E.U. data privacy rules
There’s good evidence that a fine is really just a price. Meaning that people will ‘cost-in’ any fines they get as part of doing something that’s against the rules.
It’s precisely what’s happened at Meta, where the latest billion-dollar parking fine is really just pocket change. Their public policy team might even be celebrating that it’s not the $2Bn fine they perhaps ‘costed-in’.
But there’s more to this when you consider the after-effects of complying with the EU data privacy rules:
“Without a deal, the ruling against Meta shows the legal risks that companies face in continuing to move data between the European Union and United States.
Meta faces the prospect of having to delete vast amounts of data about Facebook users in the European Union, said Johnny Ryan, senior fellow at the Irish Council for Civil Liberties. That would present technical difficulties given the interconnected nature of internet companies.
“It is hard to imagine how it can comply with this order,” said Mr. Ryan, who has pushed for stronger data-protection policies.” READ
The best way to govern AI? Emulate it
Scientists at the University of Texas have used AI to decode people’s thoughts from a brain scan. The new system, called a ‘semantic decoder’, can translate the brain activity of a person listening to a story, or silently imagining telling one, into a continuous stream of text.
It’s mind-blowing. And frightening to think about what’s possible when this becomes more widely available, and in the wrong hands. Elizabeth Renieris unpicks how we need to think about the regulation of these breathtaking technical advancements:
“In general, we can respond to new technologies in three ways. We can apply the letter of existing laws “as is,” effectively ensuring that new technologies remain out of scope (the “law of the horse” problem). We can start from scratch and craft new, technology-specific laws each time, which would effectively require an unlimited supply of resources.
Or, we can apply existing laws in new ways, adapting them to fit the spirit of the law and its objectives. Despite the fact that many existing laws, including the GDPR, are meant to be technology-neutral — meaning that the rules should apply equally irrespective of how a technology works or operates — we often forget in practice to adhere to this principle.
Generative AI tools are built on the shoulders of giants — specifically, we the people. This seemingly powerful or even magical technology was built not from scratch but by harvesting a vast corpus of human-generated data from the historical web and continually retraining and fine-tuning it over time. It is able to advance quickly by continually building on what is already there. We could stand to learn from this when it comes to the governance of AI or any other technology.” GOVERNING AI, BRAIN DECODER
What if our AIs can read T&Cs for us?
At nearly 7,500 words, TikTok’s terms of service would take over half an hour to read. Microsoft’s, at over 15,000 words, would take more than an hour. No one reads these things, of course. But they matter.
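Those reading-time figures are simple arithmetic, assuming an average adult silent-reading speed of roughly 238 words per minute (the exact rate is an assumption; published estimates vary):

```python
def reading_minutes(word_count: int, wpm: int = 238) -> float:
    """Estimated minutes to read a document at an assumed average
    adult silent-reading speed (~238 words per minute)."""
    return word_count / wpm

print(round(reading_minutes(7500)))   # TikTok's ToS: ~32 minutes
print(round(reading_minutes(15000)))  # Microsoft's: ~63 minutes
```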
What if AI can help? It’s already excellent at condensing down large, dense texts into more easily digestible bites of information. Here’s an example cookie policy, written by an AI, but for children:
“These cookie crumbs can track you as you move around the internet. When you visit different websites, they can see the crumbs left by other sites and learn a little bit about what you're doing online. For example, they might know that you like cats because you visited a cat website, or that you're interested in sports because you looked at some sports articles.
Now, don't worry! These cookie crumbs are not personal information like your name, address, or secrets. They are more like clues about your interests or the things you do online. Websites use this information to make your experience better. They might show you ads or recommend things that you might be interested in based on what they learn from the cookie crumbs.”
If you game this out, then my Personal AI will just read these terms on my behalf. And then decide - based on the preferences I feed it - if I even want to do business with this company, and steer me accordingly.
Go further - and honestly, with today’s pace of AI innovation this might not take long - then individuals will soon be able to set their own T&Cs. Terms that businesses can automatically read and respond to.
Crazy? Well, before CRM came along, the idea that companies could automatically deal with, and even personalise things for, millions of customers all at once was ludicrous. So why not the other way round? Have businesses automatically deal with millions of customers’ own terms?
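What might machine-readable personal terms look like in practice? A hedged sketch, with entirely made-up field names (no such standard exists yet; this only illustrates the automatic-matching idea):

```python
# A customer publishes their personal terms once; a Personal AI
# checks each business's offer against them automatically.
# All field names here are hypothetical, for illustration only.
my_terms = {
    "tracking_allowed": False,
    "data_retention_days_max": 30,
    "third_party_sharing": False,
}

def offer_acceptable(offer: dict, terms: dict) -> bool:
    """The check a Personal AI might run before doing business."""
    if offer["tracking"] and not terms["tracking_allowed"]:
        return False
    if offer["retention_days"] > terms["data_retention_days_max"]:
        return False
    if offer["shares_with_third_parties"] and not terms["third_party_sharing"]:
        return False
    return True

# A typical ad-tech-heavy offer fails my terms...
assert not offer_acceptable(
    {"tracking": True, "retention_days": 365, "shares_with_third_parties": True},
    my_terms,
)
# ...while a privacy-respecting one passes.
assert offer_acceptable(
    {"tracking": False, "retention_days": 14, "shares_with_third_parties": False},
    my_terms,
)
```

The interesting shift is the direction of the check: today the customer clicks ‘agree’ to the business’s terms; here the business’s offer is evaluated against the customer’s.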
Zoom out, and you’ll see that most people will actually want the same sorts of things, and share common terms. And standards for ‘Personal Terms’ will emerge quickly. Doc Searls and the ‘Vendor Relationship Management’ team at Harvard have worked on this for years. But with Personal AI now on the scene, it might just happen sooner than you think.
Businesses negotiate T&Cs with each other all the time. But of course it takes weeks, and expensive lawyers. Soon customers will be able to do this too, this time represented by their own Personal AI (likely bundled as part of their monthly AI subscription fee).
It’s time for online terms to work both ways. Unlike the unilateral T&Cs that most online companies offer today. (And which they frankly bury in long complicated documents that people aren’t supposed to read anyway.) T&Cs today are written by lawyers for lawyers. Soon it will be My Personal Terms negotiated by AI, with AI. LONG TERMS, T&C WRITTEN FOR KIDS, VRM PERSONAL TERMS
Digital Euro seeks ID scheme
A call for candidates to participate in the Digital Euro ‘identification and authentication’ workstream. It’s a little odd that they aren’t simply pointing to the EU’s own long-planned digital identity scheme, eIDAS.
“The European Central Bank (ECB) is inviting relevant experts to contribute to the identification and authentication workstream for the digital euro scheme rulebook. […] The main objective of the workstream is to suggest identification and authentication requirements for the digital euro. The requirements should be guided by the aspiration to offer best-in-class user experience and security, supported by an impact analysis of existing identification and authentication approaches.” DIGITAL EURO CALL, EIDAS
OTHER THINGS
There are far too many interesting and important Customer Future things to include in this edition, so here are some more links to chew on:
Digital Trust in the Age of Generative AI LISTEN
AI machines aren’t ‘hallucinating’. But their makers are READ
Ethical hacker scams 60 Minutes staffer to show how easy digital theft is WATCH
Do I have to take responsibility for what my deepfakes do? READ
The secret list of websites that make ChatGPT smart READ
How verifiable credential formats are different - the Credential Comparison Matrix READ
Jack Dorsey-backed ‘TBD’ Launches New Web5 Toolkit to Decentralize the Internet READ
Thanks for reading this week’s edition! If you’ve enjoyed it, and want to learn more about the future of Personal AI, digital customer relationships and personal data, then why not subscribe: