Customer Futures Stories: The risks of adopting verifiable credentials, how AI will make customer experiences worse and the road to Ubiquitous Digital Identity
Plus: Big Tech isn’t prepared for AI’s next chapter, how Consumer Data Rights are driving innovation and the sweeping implications of the EU’s AI Act
Hi everyone, thanks for coming back to Customer Futures.
Each week I unpack the ever-increasing and disruptive shifts around Personal AI, digital customer relationships and customer engagement.
This is the weekly STORIES edition. Covering the important, and sometimes less obvious, updates from the market. If you’re reading this and haven’t yet signed up, why not join the growing number of senior executives, entrepreneurs, designers, regulators and other digital leaders by clicking below.
To the regular subscribers, thank you. 🙏
📖 STORIES THIS WEEK
Customer engagement isn’t the problem. Customer disengagement is.
When interactions with a business are designed to collect as much information as possible, customers start avoiding them. Or providing fake information.
Garbage in, garbage out. It all messes up the CRM systems and customer insight platforms. The very tools businesses need to serve their customers in the first place.
And people can tell when a company is trying to ‘lock them in’, or to keep them on the page, or to keep scrolling. When the business measures are ‘engagement’ and ‘session length’.
It breeds distrust. It breeds resentment. It’s driving customer disengagement.
And it’s all because businesses are being designed inside out. They are optimising for transactions. Prioritising the sale - the basket size, the profit per customer - over everything else.
They are forgetting the customer and her context. They are forgetting to listen. Ultimately they are forgetting about the customer relationship. The ‘R’ in CRM.
But.
Designing for trust. Data portability. Smart agents. User-centric experiences and technologies. Decentralised Identity. Privacy by design. Transparency and control over user data. Personal data regulations.
These are all movements gathering pace. Technologies maturing. And shifts happening over on the customer side.
Together with web3 - itself triggering a global conversation about ownership of data - these shifts are all coming together at the same time. Emerging as new ways to work with and for the individual. Rather than do things to them.
Regular readers will know I call this the ‘customer stack’.
Once you pay attention to customers having their own tools, you’ll see the vast opportunity - and billion-dollar market - for reimagining customer engagement.
This isn’t (only) about customer control of data. It isn’t (only) about privacy and security. It’s about organising information around people themselves. Not just businesses.
Making the customer the point of integration.
Because once individuals can share verified data about themselves, anywhere, in any context… then that stream of authentic, real and trusted customer information becomes a new customer channel.
In fact, it becomes the most valuable way to interact with customers full stop.
This perspective changes how we can think about data security and hacks. It changes how we think about privacy. And how we can explore entirely new business models and revenue streams.
And it only requires a 2 mm shift. To look differently at the same problem. To start on the customer side.
As they say, “You need to fall in love with the customer problem, not the product.”
With all the market noise around AI, digital wallets, NFTs, digital transformation and the swirling fallout of Web2, it’s never been more important. To start listening to customers again. To move from push to pull.
It’s finally time to pay attention to the customer side. It’s not only better for your customers, but it’s better business.
Welcome to the future of being a digital customer. And welcome back to the Customer Futures newsletter.
In this week’s edition:
Plugging Generative AI solutions onto weak foundations will make a lot of customer experiences even worse
Ask customers questions so you can give them the right experience… and build more meaningful relationships
The Road to Ubiquitous Digital Identity
It is not enough to design products and services to be delightful… they need to be trustworthy too
Big Tech Isn’t Prepared for A.I.’s Next Chapter, but Open source is changing everything
European Parliament vote pushes the AI Act a significant step forward
… plus other links about the future of digital customers you don’t want to miss
Let’s go.
Plugging Generative AI solutions onto weak foundations will make a lot of customer experiences even worse
The last 15 years have gifted us some spectacularly bad examples of digital transformation. Well-meaning business leaders deciding to ‘add digital’ to existing products or processes.
We’ve all experienced it. Broken customer journeys. Websites that bounce you to the company app. The app suggesting you call in. Or visit the FAQ pages. And log in. Again.
Where password reset has become the new login.
My favourite: companies requesting the same information over and over again. It’s as if the whole company has collective amnesia about its very own customers.
Sadly it’s becoming clear we are about to see the same thing happen with generative AI. Clumsy implementations across business departments, trying to keep up with the latest shiny innovation.
Like so many digital transformation projects before it, ‘adding AI’ may make things worse for the customer, not better.
Take customer service, for example. Unless there’s a concrete foundation of understanding the customer journey - of understanding customer context - it may very well end up in a new customer experience disaster.
Here’s the thing. Building a solid customer service foundation isn’t rocket science. As Alex Mead, customer experience guru, points out:
“Brands that simply post a phone number or email address on their 'Help / Contact Us' pages are so out of touch with modern day customer service experience designs.
“And others that simply think FAQ's and a Chatbot are the answer are just as bad... Take a look at the Help pages for some very big brands, and they are nearly all woeful...
“If you're managing your company's Help / Contact Us journeys, then please do the following:
1. Ask the Customer to 'sign-in' first - and make that sign-in process incredibly easy (unless of course they are not yet a registered customer)...
2. Show them their orders, quotations, deliveries, enquiries, complaints, even recent product browsing history...
3. Let them click the 'thing' from the above list they need help with...
4. Share with them any proactive updates (delivery status, flight departure time status, order activation status, complaint status etc) and any potentially helpful updates / insight on the 'thing' they need help with...
5. If they still need help, then let them choose the channel they want to use... (Customers are not stupid, they will not choose to call if it's not urgent)
6. Route their need to the best agent, with the right skills, to help them... Make sure all the data they have just provided is immediately available to the agent that interacts with them...
7. Capture a summary of the support conversation in a CRM record, summarising any actions taken / still needed, and share it with the customer in the list of their 'things' in step 2 above...
8. Continuously listen and analyse why customers need to get in touch, and improve the processes that cause any avoidable need for customer service support...
“Simply plugging Generative AI solutions onto weak foundations will probably make a lot of experiences even worse...”
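To make the shape of that checklist concrete, here’s a minimal sketch in TypeScript. Every name in it (`HelpItem`, `routeToAgent` and so on) is hypothetical, invented purely for illustration - it mirrors the ordering of Alex’s steps, not any real vendor’s API.

```typescript
// A hypothetical help-journey sketch. All names are illustrative only.

type Channel = "chat" | "email" | "phone";

interface HelpItem {
  id: string;
  kind: "order" | "delivery" | "complaint" | "enquiry";
  proactiveUpdate: string; // status shared before the customer has to ask
}

interface Agent {
  id: string;
  skills: HelpItem["kind"][];
}

// Steps 2-4: once signed in, show the customer their 'things',
// each with a proactive status update already attached.
function listHelpItems(customerId: string): HelpItem[] {
  // Stubbed data; a real system would pull orders, deliveries, complaints...
  return [
    { id: "ord-1", kind: "delivery", proactiveUpdate: "Out for delivery, due today" },
  ];
}

// Steps 5-6: the customer picks an item and a channel; route to an agent
// whose skills match, so the customer never has to repeat themselves.
function routeToAgent(item: HelpItem, channel: Channel, agents: Agent[]): Agent | undefined {
  return agents.find((a) => a.skills.includes(item.kind));
}

// Step 7: capture the outcome and share it back into the customer's list.
function recordOutcome(customerId: string, item: HelpItem, summary: string) {
  return { customerId, itemId: item.id, summary, actionsOutstanding: [] as string[] };
}

// The flow reads in the same order as the checklist above.
const items = listHelpItems("cust-42");
const agent = routeToAgent(items[0], "chat", [{ id: "agent-7", skills: ["delivery"] }]);
const record = recordOutcome("cust-42", items[0], "Confirmed new delivery window");
```

The point of the sketch is the ordering: identity first, context second, channel choice third, routing and record-keeping last. Generative AI only becomes useful once those steps exist.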
Ask customers questions so you can give them the right experience… and build more meaningful relationships
Today’s businesses are stuck in an endless spiral of trying to hyper-personalise customer touch-points… by guessing. And attempting to collect as much personal data as possible in order to do so. All, in theory, to anticipate a customer’s every need.
Sadly most businesses screw this up in two ways.
First, they stumble over the privacy line. Asking for too much personal data. Or freaking people out by knowing too much (“how on earth did they find that out about me?”)
But second, due to incompetence, they trip over the clumsy line too. Trying to sell me things I already have. Or things that are related to previous purchases, but which I actually now hate.
Wouldn’t it be easier, faster, and perhaps more trustworthy, just to ask customers directly?
“Faster horses!” I hear you cry. And “Steve Jobs told us that customers don’t know what they want, you just have to show them!”
Yes. But both Henry Ford and Steve Jobs were visionaries. Disrupting the market with once-in-a-generation products.
If you are genuinely inventing a new category, then knock yourself out. Don’t listen to customers. For everyone else, it’s about going back to basics and truly getting under the skin of the customer problem.
Because early economists described it as ‘demand and supply’ well before we called it ‘supply and demand’. It’s really customers first. Needs first.
The idea of ‘intent-casting’ has been around for years. Enabling customers to signal what they want rather than relying on the marketing and product teams to guess.
It was made popular in Doc Searls’ book ‘The Intention Economy’. (A must-read, by the way, for anyone interested in the future of being a digital customer… which I expect you might be…).
Perhaps with the arrival of Personal AI, it’s an idea whose time has come. It requires thinking about the customer side first, rather than just deploying AI to optimise business operations and sales.
This excellent piece by Stuart Dunlop sums it up well:
“…you don’t go into a restaurant and give the waiter the silent treatment when they ask for your order. You either choose what you want from the menu or you might ask for a recommendation. There’s a human interaction. And sometimes, if you get a good waiter, you might even build a rapport and end up having a more meaningful and memorable experience.
“Imagine if Uber Eats didn’t ask you questions, it would just be a button, “Feed me!”, and then some random meal would show up at your door (which might actually be fun now that I think about it!).
“In his ‘…2nd order effects of generative AI…’ article, Scott Brinker talks about how as AI-generated content exponentially grows, so will spam. As a result, we will see a world where customers will start to block out even hyper-personalised experiences and we will move to a model where buyers will “pull” content.
“This for me hits the nail on the head. This is the vision marketing strategists should be setting for themselves. And even more so in the world of customer value management. Too many companies are still hooked on casting the net with mass emails and performance marketing campaigns that require big budgets to ensure effectiveness.
“I’m not saying stop top-of-the-funnel marketing, simply let's get better at understanding what customers' intents are. In other words, don’t be afraid to ask.”
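To make intent-casting concrete, here’s a toy TypeScript sketch of what a customer-side signal might look like. None of it is a real protocol - the shape is invented for illustration - but it captures the reversal: the customer states the terms, and suppliers filter themselves against them.

```typescript
// An illustrative intent-cast: the customer broadcasts what they want,
// and the market responds to the signal instead of guessing at it.

interface IntentCast {
  want: string;            // "family hatchback, electric"
  constraints: string[];   // the customer's terms, not the seller's
  budget?: { max: number; currency: string };
  respondBy?: string;      // ISO date: the customer sets the clock
}

interface Offer {
  supplier: string;
  price: number;
  note: string;
}

// Suppliers qualify themselves against the customer's stated terms.
function matchOffers(cast: IntentCast, offers: Offer[]): Offer[] {
  const max = cast.budget?.max ?? Infinity;
  return offers.filter((o) => o.price <= max);
}

// Usage: the customer pulls, the market responds.
const cast: IntentCast = {
  want: "family hatchback, electric",
  constraints: ["no marketing emails", "price includes delivery"],
  budget: { max: 25_000, currency: "GBP" },
};

const offers = matchOffers(cast, [
  { supplier: "DealerA", price: 23_950, note: "In stock" },
  { supplier: "DealerB", price: 26_400, note: "8-week wait" },
]);
// Only DealerA survives: the customer's budget did the filtering.
```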
The Road to Ubiquitous Digital Identity
For nearly two decades, the Internet Identity Workshop (IIW) has run twice a year in Mountain View, California. So it was about time that a sister ‘unconference’ was run in Europe.
It finally happened last week in Zürich, branded the Decentralised Identity unConference Europe (DICE).
One of the smartest voices in European identity and payments, Douwe Lycklama, was there. He came away believing that the path to widespread adoption and ubiquitous use of ‘user-controlled’ digital identity is far from clear:
“My main message is this: the digital identity world is rich with technical prowess, but we risk side-lining some major non-technical requirements. It appears to me that mass adoption, interoperability on a service level (the combination of legal, functional, technical, operational, etc.), user propositions, governance and effective communication are yet to be addressed with the requisite clarity and priority.
“The representation at the conference was as diverse as it could get, with government officials from the Swiss eID team, UK, Austria, Bhutan, and the USA (Department of Homeland Security) alongside a host of researchers and start-ups. However, despite the broad representation, there were little to no structured discussions on addressing these essential topics in an organised, collaborative manner.
“In the SSI space, there are three principal roles: issuers, holders, and verifiers. Issuers distribute tokens, holders store them in their digital wallets, and verifiers decide what requirements they impose on the tokens presented by holders. The Verifiable Data Registry is instrumental in matching the credential needs of verifiers with available wallets and their contents.
“But several pertinent questions seem unanswered such as:
1. How can holders know in advance whether their wallet fulfils the needs of a verifier from whom they would like to receive a digital service?
2. How can a verifier determine which wallets contain credentials that satisfy their requirements?
3. Who determines who comes on ‘trusted lists’ of issuers, verifiers, and wallets?
“This lack of clarity may introduce unnecessary complexities for users (i.e., issuers, verifiers, and wallets), which could stifle the mass adoption of decentralised digital identity.”
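If it helps to see that triangle in miniature, here’s a deliberately simplified TypeScript sketch. Real verifiable credentials involve cryptographic signatures, DIDs and revocation registries; this toy version models only the roles, and every name in it is invented for illustration.

```typescript
// Issuer / holder / verifier, stripped to the bone. No crypto, just roles.

interface Credential {
  issuer: string;
  subject: string;
  type: string; // e.g. "ProofOfAge"
  claims: Record<string, string>;
}

// Issuer: hands a credential to the holder.
function issue(issuer: string, subject: string, type: string, claims: Record<string, string>): Credential {
  return { issuer, subject, type, claims };
}

// Holder: stores credentials in a wallet.
class Wallet {
  private credentials: Credential[] = [];

  store(credential: Credential): void {
    this.credentials.push(credential);
  }

  // Douwe's question 1 in miniature: can this wallet satisfy a verifier's ask?
  find(type: string, trustedIssuers: string[]): Credential | undefined {
    return this.credentials.find(
      (c) => c.type === type && trustedIssuers.includes(c.issuer),
    );
  }
}

// Verifier: sets its own requirements, including which issuers it trusts.
// Question 3 above is precisely who maintains that trusted list.
function verify(wallet: Wallet, requiredType: string, trustedIssuers: string[]): boolean {
  return wallet.find(requiredType, trustedIssuers) !== undefined;
}

// Usage
const wallet = new Wallet();
wallet.store(issue("gov.example", "alice", "ProofOfAge", { over18: "true" }));
verify(wallet, "ProofOfAge", ["gov.example"]);  // true
verify(wallet, "ProofOfAge", ["bank.example"]); // false: issuer not trusted
```

Even in the toy, the hard parts are the strings: who gets to appear in the trusted-issuer list, and how the holder discovers what a verifier will accept before turning up with the wrong wallet.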
It is not enough to design products and services to be delightful… they need to be trustworthy too
Few understand digital trust as well as Sarah Gold. Not just about the theory, but about what it takes to build trustworthy digital products from the ground up. Her latest point of view is a must-read piece on how to fix the digital trust gap:
“We increasingly rely on, delegate to and depend on technology-enabled services. They are setting new expectations for what is possible. And those changing expectations impact the kinds of services being created and the complexity of the underlying technology being used.
“And with every new technology, new trust issues are introduced. All the while, services don’t yet help us understand their trustworthiness. We don’t know in whose interests they operate, or how to build appropriate guardrails.
“This creates a trust gap. The trust gap manifests every day from the inherent distrust you feel when checking a consent box. It is an automated fine that you are helpless to correct, or an app that doesn't work for you because its algorithm is discriminatory.
“For business, the gap shows up in a myriad of critical risks from algorithmic bias, to regulatory readiness, to being left behind by the next technology wave. For product teams, it's the blocker of wanting to do the right thing but being faced with research papers not relevant to revenue drivers in a product. It's also hard to prioritize trust when your backlog is full.
“Meanwhile, the world is speeding up. It has become more automated, connected, and complex. As a result, the trust gap widens. Unless we act, trust will collapse. There is no time to waste when it comes to closing the trust gap.
“The importance of your trust cannot be overstated.”
Big Tech Isn’t Prepared for A.I.’s Next Chapter, but Open source is changing everything
I’ve long written about the difference between personal and personalised. Something ‘personalised’ belongs to someone else. By definition, it must be ‘other’. Whereas ‘personal’ is almost always used to mean ‘mine’.
Most of the AI platforms being developed right now are ‘personalised’. For sure they are putting amazing tools in the hands of people. Enabling incredible insights. Enormously helpful every day.
But are those tools really mine? Who else has access to the data? Who else can see the outputs? What datasets were used to help create the model? Is it aligned with my values, and does it really understand my context?
Personal AI needs to be personal. Because we can’t trust it unless we can control it. To use it for whatever we need in our own lives.
Now, most of the market hype around AI is about what these new smart tools can do. But we need to look harder at why they do it, and how. Because the issues around digital trust are not yet cutting through.
I suspect that these questions are a digital trust bomb waiting to go off.
A symptom of all this - that our AIs aren’t really personal - is that many are fretting about the AI dominance of Big Tech. That these models are not under our control. Not really.
But the tables might be turning. In a recent piece for Slate, Bruce Schneier - one of the grandfathers of internet security - sums it up well:
“The large tech monopolies that have been developing and fielding LLMs—Google, Microsoft, and Meta—are not ready for this. A few weeks ago, a Google employee leaked a memo in which an engineer tried to explain to his superiors what an open-source LLM means for their own proprietary tech. The memo concluded that the open-source community has lapped the major corporations and has an overwhelming lead on them.
“This isn’t the first time companies have ignored the power of the open-source community. Sun never understood Linux. Netscape never understood the Apache web server. Open source isn’t very good at original innovations, but once an innovation is seen and picked up, the community can be a pretty overwhelming thing. The large companies may respond by trying to retrench and pulling their models back from the open-source community.
“But it’s too late. We have entered an era of LLM democratization. By showing that smaller models can be highly effective, enabling easy experimentation, diversifying control, and providing incentives that are not profit motivated, open-source initiatives are moving us into a more dynamic and inclusive A.I. landscape.
“This doesn’t mean that some of these models won’t be biased, or wrong, or used to generate disinformation or abuse. But it does mean that controlling this technology is going to take an entirely different approach than regulating the large players.”
European Parliament vote pushes the AI Act a significant step forward
Most people agree that we need to regulate AI, and fast. But it’s a rapidly changing situation. And it will need to span a patchwork of countries, data jurisdictions, existing frameworks and politics.
Some are comparing the challenge to the regulation of nuclear weapons in the 1940s. But with AI, it’s different. It will be much harder.
First, even if they wanted to, a teenager couldn’t build a nuclear reactor in their bedroom. But today they can develop their own AI platform. It’s now easy and fast, thanks to the open-source AI models freely available.
And those bedroom AIs won’t have the same kinds of security, privacy or societal guardrails in place… the same guardrails we’re now demanding of the big tech players.
The toothpaste is out of the tube.
But second - and perhaps more importantly - nuclear weapons can’t themselves make other nuclear weapons. But AIs can write code… and can (and already do) generate more, even smarter, versions of themselves.
Most regulations protect yesterday from last Tuesday. But with AI, putting proactive rules and regulations in place is more important than ever.
Now, this newsletter is focussed on the future of being a customer. How we can build a better digital tomorrow based on trust, transparency and control. A new digital economy and society organised around people, not (just) organisations.
It’s clear that artificial intelligence - and its regulation - will be central to the future of the digital economy. And so will be central to the future of being a digital customer.
It’s worth digging into the EU’s position on all this. From the International Association of Privacy Professionals (IAPP):
“The EU’s AI Act takes a risk-based, tiered approach to regulating AI and includes outright bans of high-risk AI applications. Parliament's text prohibits the use of real-time biometric identification systems — a point of contention, particularly during the last week, among the European People's Party during Parliament's amendment process.
“Other bans include biometric categorization systems that use sensitive attributes, such as gender, race, ethnicity, religion and political affiliation. Predictive policing technology that uses profiling and geolocation, for example, would be banned, as well as so-called emotion recognition systems in the workplace, schools and law enforcement. Massive data scraping of facial images from the internet or closed-circuit television for facial recognition databases would also be prohibited.”
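For those who want the structure spelled out: the Act is widely described as a four-tier pyramid, with obligations scaling by risk. Here’s a toy TypeScript sketch of that gradient; the example mappings are my own illustrative readings of reported examples, not legal advice.

```typescript
// The AI Act's widely reported four risk tiers, as a toy classification.
// Example mappings are illustrative readings, not legal advice.

type RiskTier = "unacceptable" | "high" | "limited" | "minimal";

const obligations: Record<RiskTier, string> = {
  unacceptable: "prohibited outright",
  high: "conformity assessment, risk management, logging",
  limited: "transparency duties (tell people they're talking to AI)",
  minimal: "largely untouched; voluntary codes of conduct",
};

const examples: Record<string, RiskTier> = {
  "real-time biometric identification in public spaces": "unacceptable",
  "predictive policing via profiling and geolocation": "unacceptable",
  "CV screening for recruitment": "high",
  "customer-service chatbot": "limited",
  "spam filtering": "minimal",
};

// Obligations scale with the tier: the regulatory gradient in one loop.
for (const [useCase, tier] of Object.entries(examples)) {
  console.log(`${useCase} → ${tier}: ${obligations[tier]}`);
}
```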
Looks like this new EU regulation will be in place by the end of the year. And it will have sweeping implications for the AI market, including the new category of ‘Personal AI’ platforms. Pay close attention to this one.
📌 OTHER THINGS
There are far too many interesting and important Customer Future things to include in this edition. So here are some more links to chew on:
The difference between a use case and a business case: Timothy Ruff WATCH
MEPs endorse blanket ban on live facial recognition in public spaces, rejecting targeted exemptions: Euronews READ
Digital Credentials and Digital Identity Are Not The Same: Andy Tobin READ
Selective Disclosure for Verifiable Credentials: WaltID WATCH
Mapping the Non-Technical Problems for Digital Identity: Lissi READ
Consumer Data Rights are driving innovation and opening up benefits for consumers: Australian Competition and Consumer Commission READ
The risks of adopting verifiable credentials across the private sector: SSI Orbit LISTEN
How to enable a web2 experience with web3 infrastructure using multi-party digital wallets: Coindesk READ
And that’s a wrap. Stay tuned for more Customer Futures soon, both here and over at Twitter and LinkedIn.
A final question for you: Was today’s newsletter valuable, or nope? Let me know by clicking below or ignoring: