Customer Futures Stories: How much consumers will pay for privacy, Google releases their ID wallet and spotting AI content using icons
Plus: Ranking Digital Identity projects, paying to remove your personal data from AI platforms and GDPR’s double-edged impact on the ad market
Hi everyone, thanks for coming back to Customer Futures. Each week I unpack the ever-increasing and disruptive shifts around Personal AI, digital customer relationships and customer engagement.
This is the weekly STORIES edition. Covering the important, and sometimes less obvious, updates from the market. If you’re reading this and haven’t yet signed up, why not join the growing number of senior executives, entrepreneurs, designers, regulators and other digital leaders by clicking below. To the regular subscribers, thank you.
STORIES THIS WEEK
First, a note about the future of Personal AI. There are three videos you must watch, if you haven’t already:
Yuval Noah Harari on The Future Of Humanity
Mo Gawdat, Ex-Google Officer, on The Dangers Of AI
Tristan Harris on The AI Dilemma
What these talks have in common is that they each outline a clear, coherent view of how it’s going to feel when - not if - computers get smarter than humans.
Not just a little bit smarter, but 100x or 1000x smarter. And each expert believes this will happen over months, not years.
The story about Personal AI has so far been about clever little helpers that work for you. To get things done. To give you recommendations. Perhaps all on your device.
But it’s becoming clear that the AI tools being developed today are like playing with Lego. The AIs coming down the pipe - significantly more powerful - will be more like building a city.
Personal AIs will quickly become our sidekicks, our assistants, our advisors. And we’ll very likely start trusting them a great deal. Because they will prove that value to us daily.
I have written for a long time about digital trust, transparency, and control. But in this new AI-first world, either you own the customer interface or you become training data for someone else.
So what does digital trust mean in that context?
Every interaction you have with these intelligent tools - regardless of who runs them - will be training data on you. The more you personally interact with them, the better they get to know you.
They’ll learn how you respond in different contexts. And without being directed, they’ll also model how you’ll behave in future situations. And then work out how to push you in one direction or another.
And THAT is the clincher. Fantastic if these Personal AIs are aligned with your own interests. But:
How do you know what’s best for you? And how will you be able to describe or codify that and tell the AI?
How would you even know if the Personal AI’s nudges, their recommendations, are aligned with those interests you’ve shared?
It’s time to redefine and set up new models for digital trust.
I don’t just mean more transparency, or clearer T&Cs. I mean new ways to interact. New principles to ensure these exponentially smart platforms will work with us and for us. And not just do things to us.
Ultimately, very few humans care about digital wallets. Most of us don’t care about our personal data. And we certainly don’t want to manage our identity.
Instead, most of us want to get rid of our daily hassles. Our stresses. Our burdens. We want autonomy, to be self-directed and creative. To live freely and privately.
AI is going to help with much of that. But our digital agency - and the tools we use every day - are at stake.
This Customer Futures newsletter is about learning in public. Distilling the customer signal from the digital noise. But it’s fast becoming about how our digital tools need to start on the customer side - to empower individuals in an increasingly AI-first world.
Yes, it’s about redefining customer experience. Yes it’s about how digital wallets will empower people to connect in new ways. But the stories about Personal AI are really about reclaiming agency and human connection.
Welcome to the future of being a digital customer. And welcome back to the Customer Futures newsletter.
In this week’s edition:
Google Wallet is getting custom cards and state IDs
Five years in, the GDPR has had a double-edged impact on the ad market
How much will consumers pay for privacy - and for what types of data?
Students are at risk when using tech from the US education sector
New icons that make it easy to spot AI-generated content
… plus other links about the future of digital customers you don’t want to miss
Let’s go.
Google Wallet is getting custom cards and state IDs
The wallet wars are upon us.
Google’s latest move is much bigger than an identity play. Covering travel, health, tax and the workplace, it’s a huge stake in the ground from one of the world’s largest tech companies.
A statement about where and why you’ll use your Android-based digital wallet beyond payments.
The next few years are going to see an explosion of these types of ‘credentials’ products for the real world.
We’ll need to be careful not to see a repeat of the early years of the internet... when access was controlled by providers like AOL or CompuServe. Back then, you could go anywhere you wanted online… as long as the network allowed you to. You were locked in.
By the late 1990s, the early web standards were agreed, and the rest (of Web1) is history. The breakthrough was to build interoperable networks, and we could finally join up the ‘islands of information’.
The same will be true of digital wallets and credentials. Are we going to end up with 'disconnected islands of data' and a mess of digital wallets?
I'm hoping this won't be a war at all. Where the players fight over a fixed land, a fixed asset. A zero-sum game.
Instead, portable digital credentials can be more like a new frontier. Discovering a new land full of opportunity. Because if we get this right, it'll be a win for all of us. And not a war at all.
“Google is also working with health insurance company Humana to develop a digital version of the provider’s health insurance card that will allow Humana members to access their insurance information directly from the Wallet app.
“UK residents will also be able to save their National Insurance number (a British equivalent of social security numbers) to their Wallet from the HMRC app. These cards and passes have some additional security compared to things like travel tickets and require users to verify their identity using methods like fingerprint scans or PINs every time they’re added, viewed, or used.
“Health insurance cards and passes that similarly contain sensitive information will be labeled as a “Private Pass” within Google Wallet.
“Finally, users in Germany can now save a Deutschlandticket — a monthly subscription ticket that can be used across all local public transportation — to their Google Wallet. Google has also teased that it will start introducing corporate badges in Google Wallet later this year, allowing employees to securely access their workplace without a physical staff pass.”
Five years in, the GDPR has had a double-edged impact on the ad market
They say that Europe’s best export is regulation. And perhaps GDPR’s main achievement has been highlighting how personal data is used around the world. And the need to bring transparency into the digital economy.
But what of the tangible impacts? For the worst offenders, fines have barely felt like a parking ticket. And we’re left with a confusopoly of consent buttons.
Below the surface though, it’s worth digging into the impacts on the ad market:
“Meta will probably appeal. Then there’s the possibility that lawmakers in Europe and the U.S. can agree on a mechanism known as the Data Privacy Framework that will let Meta and other companies legally transfer the data of EU individuals to the U.S. In the meantime, any company which needs to transfer personal data to the U.S. will remain utterly confused.
“This is the GDPR in a nutshell: a delicate dance where every step forward feels like three steps back. The wide deviation from the anticipated outcomes for advertising starts to make more sense.
“Facebook, media agencies, programmatic advertising were all meant to be among the biggest losers in the fallout, and yet they came through it relatively unscathed. Even dodgy cookie consent, which was a big bugbear of regulators in the run up to the GDPR, is in rude health. Advertisers still don’t know how cookies — the mechanism that houses the data they use to power programmatic advertising — are obtained. It turns out pretty sneakily on occasion.
“…In many ways, the fracas over ‘TCF’ is symptomatic of how much the ad industry, especially the buy-side, has adapted to the GDPR. Where possible those stakeholders have tried to replace or even rewrite cornerstones of how personal data is sourced, processed and stored but rarely have they tried to rewrite them entirely. That’s changing now, to be fair, but that’s more due to second order effects of the GDPR than a direct causation of it.”
How much will consumers pay for privacy - and for what types of data?
In news that surprises absolutely no one, it turns out that people care where their data goes. Not about the data per se, but about what Bad Things might happen.
Study after study shouts about the ‘privacy paradox’. That consumers say they want privacy, but don’t behave that way.
Now a UK university has just released a modest study on how consumers actually feel about all this, and if they’d pay for privacy. Including how much…
“Results showed that overall 96% of individuals were willing to pay to avoid sharing their personal data in at least one of the data sharing environments. Banking transactions was the data type people considered the most important to protect, with over 95% of people keeping their banking data private.
“This was followed by medical records data, with at least 79% of people willing to pay to protect it, followed by 72% for mobile phone GPS, 43% for online browsing history and 39.8% to protect social media data. Data collected via loyalty cards, electricity use, and physical activity was seen as less valuable as fewer participants were willing to pay to protect it.
“The researchers emphasize they are not advocating for a market for personal data in which service users are forced to pay for online privacy but to highlight how much value consumers place on the privacy of their data.”
It turns out that consumers might just become a key revenue stream for privacy-enhancing tools. The price of protecting bank transaction and health data could be as high as £28 per month. Browsing history is closer to £12, and loyalty cards £5.
It raises difficult questions about the privacy divide, with only the rich being able to afford it. How this plays out in terms of paying for Personal AI tools is going to be interesting. Especially when the scope of ‘personal data’ will be enormous.
Students are at risk when using tech from the US education sector
Discussions about digital age verification are usually vigorous debates about how we can make online adult spaces safer.
But below the surface, huge numbers of apps aimed at children already lack even basic protections when it comes to privacy and personal data.
What data is actually collected and why? Who can access it? Where does it go? How is it protected? The Internet Society Foundation has been doing some digging, and the results are problematic:
“An alarming 96% of educational apps share children’s personal information with third parties. Out of these, 78% of the time, the information is shared with advertising and data analytics entities, often without the users’ or schools’ knowledge or consent.
“Approximately 28% of the apps tested were non-education specific, such as The New York Times, YouTube, or Spotify. These apps lack necessary limits and guardrails for children.
“23% of school apps expose children to digital ads. This risks personal student data being sent into advertising networks without public oversight. Furthermore, 13% of these apps use retargeting ads, utilizing cookies, search history, and site history to deliver targeted advertising, further compromising student data privacy.”
New icons that make it easy to spot AI-generated content
With the incoming explosion of ‘fake digital everything’, we’re going to need better tools to help us trust digital content. To determine if something is a human-only created work… if it was developed with an AI assistant… or if it was generated by an AI on its own.
I’ve been writing about using verifiable credentials (VCs) to help with this for some time now. VCs are portable, instantly-verifiable digital watermarks.
They mean you can tell 1) where a digital object came from, 2) who the custodian is (who looks after, or ‘holds’, it right now), 3) if it’s been tampered with, and 4) if it’s been revoked or removed by the source.
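To make that concrete, here’s a minimal sketch of what such a credential might look like, loosely following the W3C Verifiable Credentials data model. The top-level field names (issuer, credentialSubject, proof, credentialStatus) come from that model; the content-specific fields, identifiers and helper function are illustrative assumptions, not a real implementation.

```typescript
// A minimal sketch of a W3C-style Verifiable Credential describing a piece
// of digital content. Top-level field names follow the W3C VC Data Model;
// the identifiers and content-specific fields are made up for illustration.
interface VerifiableCredential {
  "@context": string[];
  type: string[];
  issuer: string;                     // 1) where the digital object came from
  credentialSubject: {
    id: string;                       // 2) the current custodian ('holder')
    contentHash: string;              // fingerprint of the digital object
    generator: "human" | "ai-assisted" | "ai-generated";
  };
  proof: {                            // 3) tamper-evidence: a signature over the credential
    type: string;
    verificationMethod: string;
    proofValue: string;
  };
  credentialStatus?: {                // 4) where to check if it's been revoked
    id: string;                       // e.g. a status list published by the issuer
    type: string;
  };
}

// The four checks from the paragraph above, shown as stubs. A real verifier
// would resolve the identifiers and check the signature with a crypto library.
function describeChecks(vc: VerifiableCredential): void {
  console.log(`1) Origin:     issued by ${vc.issuer}`);
  console.log(`2) Custodian:  currently held by ${vc.credentialSubject.id}`);
  console.log(`3) Integrity:  verify the ${vc.proof.type} signature against the issuer's key`);
  if (vc.credentialStatus) {
    console.log(`4) Revocation: check status at ${vc.credentialStatus.id}`);
  }
}
```

Each of those four properties maps onto one part of the structure: the issuer is the source, the subject is the holder, the proof is the tamper-evidence, and the status entry is where revocation gets checked.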
But this is all about handling digital objects. Things that are not necessarily ‘human’ readable. And so we now enter a critical - and fascinating - phase of AI development: the interface. How we interact with digital objects.
Many experts (including the three must-watch talks at the top of this newsletter) are calling for urgent new regulations that require companies to reveal who, or what, we are interacting with.
Ahead of those new rules, a Seattle-based design studio has developed a set of icons and design principles to help us visually trust digital content:
“Visual stunts like the swagged-out Pope have already introduced a new vernacular of images “cocreated” with AI. But what’s funny in a viral meme seems less so in political attack ads.
“If the hypesters are right, and AI is coming for every kind of information we encounter—from our work emails and Slack messages to the ads we see on our commutes and the entertainment we consume when we get home—how important is it that we know exactly how much of that information was created with AI? And how can designers make it easier for us to trust the information we’re getting?
“Artefact believes that building trust around generative AI boils down to three principles: transparency, integrity, and agency. The first two—showing who you are (transparency) and meaning what you say (integrity)—are familiar enough concepts from human relationships and institutions. But agency is important precisely because AI isn’t human.”
OTHER THINGS
There are far too many interesting and important Customer Future things to include in this edition, so here are some more links to chew on:
Privacy Lost: A short film about augmented reality futures, risks to human privacy and the threat of AI manipulation: WATCH
A Photographer Tried to Get His Photos Removed from an AI Dataset. He Got an Invoice Instead READ
An excellent high-level summary of how different jurisdictions are approaching AI regulation READ
AI-driven synthetic media: Too real for comfort? WATCH
Exploring Approaches to Digital Wallets READ
OpenAI receives a warning from the Japanese privacy watchdog READ
Verifiable Credentials for the Modern Identity Practitioner WATCH
A ranked list of awesome Digital Identity open source projects READ
And that’s a wrap. Stay tuned for more Customer Futures soon, both here and over at Twitter and LinkedIn.
A final question for you: Was today’s newsletter valuable, or nope? Let me know by clicking below or ignoring: