Customer Futures Stories: GDPR's crisis point, the future of (fintech) trust and why login is so broken
Plus: The screenless Personal AI, banking's consent fatigue and Apple's planned AI-Powered Health Coach
Hi everyone, thanks for coming back to Customer Futures. Each week I unpack the ever-increasing and disruptive shifts around digital customer relationships, personal data and customer engagement.
This is the weekly STORIES edition. Covering the important, and sometimes less obvious, updates from the market. If you’re reading this and haven’t yet signed up, why not join hundreds of executives, entrepreneurs, designers, regulators and other digital leaders by clicking below. To the regular subscribers, thank you.
STORIES THIS WEEK
As the noise around Personal AI grows, many are asking questions about engagement. What’s the smartest user interface? Do we even need screens? What about terms and conditions - and will people read them anyway? AI interfaces are going to matter. As text-based chat seems to be catching fire, are there alternatives?
More Personal AI announcements of course, but this week we also dive into digital ceremonies - what are they, and do they work properly? Next week it’s five years since the GDPR came into force. Has it really made the dent in data protection practices that everyone expected, and many hoped for?
Welcome back to the Customer Futures newsletter.
In this week’s edition:
The Future of (Fintech) Trust
On Digital Ceremonies - and Why Login is Broken
A (Personal AI) Financial Expert In Your Pocket
AI’s Biggest Risk Isn’t ‘Consciousness’… it’s the Corporations That Control Them
Intelligent Interfaces Of The Future
Canada’s Opportunity To Collaborate On Verifiable Credentials
AI Has A Privacy Problem, And The Solution is Privacy Tech, Not More Red Tape
GDPR's Crisis Point
… plus other links from around the web you don’t want to miss
Let’s go.
The Future of (Fintech) Trust
Simon Taylor is probably one of the most important voices in Web3 and FinTech. He writes weekly about the disruption and future of finance in his excellent Brainfood newsletter (I recommend subscribing). 30,000 of the most important people in finance hang on his weekly ‘rants’. Here’s his latest take on digital trust:
“The psychology of trust is critical to understand in finance and money because money is vital to our basic motivators and requirements. Shelter, safety, food, and water have a cost associated in many places. To survive and thrive, we need these things.
The ultimate rug pull is when trust is shattered about money. We place our money in banks and take for granted that it will still be there when needed. But behind the curtain, money moves around the banking system like ocean currents. It seems permanent, but it's also a complex system that impacts everything on the planet.
Banking crises, bipartisan politics, inflation, and instant 24/7 social media have changed the nature of trust in ways we're only beginning to grasp. Today trust isn't just about large institutions with marble halls sounding official. It's not just about the opposite of that and only trusting a TikTok influencer. It's halfway between the two.” READ
On Digital Ceremonies - and Why Login is Broken
Ian Glazer gave an excellent keynote talk at the recent EIC conference in Berlin. All about today’s digital and analogue ‘ceremonies’. What they are and why they matter. Critically, he asks why our real-life and digital-life experiences are so different.
Whilst his talk uses a bit of digital identity jargon, it’s worth watching to understand the login and ‘authentication’ mess we’re in… and what’s coming. Here’s a clip (emphasis mine):
“Our ceremonies in the analogue world are often really different to what we do in the digital world. In the analogue world, we do an ‘introduction’: I introduce Vlad to someone. Yet online it's ‘registration’. The other thing is in the analogue world we have a process of ‘recognition’: how one party recognises another. Yet in the digital world, we have ‘authentication’.
I believe that our existing ceremonies for authentication do two things simultaneously. First, they set a bar too high for all people. All the people about whom we want to interact with at high assurance. And second, it sets a bar too low to prevent adversaries from clearing it. This is simultaneously horrible…” WATCH
A (Personal AI) Financial Expert In Your Pocket
The latest astonishing example of Personal AI, this time with your money. Worth taking a look at Parthean AI: ChatGPT for Personal Finance.
“We’ve integrated the most cutting-edge AI tools with your personal financial information. This allows you to ask any question about your money and get answers that are personalized and immediate: Where is my money going? How much can I afford to spend on a home? Calculate the Sharpe ratio for my investment portfolio. When can I retire? Am I paying too much in rent?
But it can do so much more... like build immediate & personalized plans. Whether you’re paying off student loan debt, getting married, having a child, or just want to go on a trip with friends, Parthean AI will help you build a plan that stays within your budget.
Help me improve my credit score. I want to pay off my debt in 2 years, can you build me a plan to pay it off? How much can I afford to spend on my wedding next June? Financial plans used to cost hours of your time or thousands of your dollars. Parthean AI does it immediately.” READ
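As an aside, the ‘Sharpe ratio’ the demo mentions is a standard bit of finance maths - mean excess return divided by the volatility of returns. Here’s a minimal sketch in plain Python (this is the textbook formula, not Parthean’s actual implementation; the annualisation factor of 252 trading days is the usual convention):

```python
def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualised Sharpe ratio from a list of per-period returns.

    Textbook definition: mean excess return over the risk-free rate,
    divided by the standard deviation of those excess returns, then
    scaled by sqrt(periods per year) to annualise.
    """
    # Per-period excess return over the (annual) risk-free rate
    excess = [r - risk_free_rate / periods_per_year for r in returns]
    n = len(excess)
    mean = sum(excess) / n
    # Sample standard deviation (n - 1 denominator)
    variance = sum((r - mean) ** 2 for r in excess) / (n - 1)
    std = variance ** 0.5
    if std == 0:
        return 0.0  # no volatility: ratio is undefined, report zero
    return (mean / std) * periods_per_year ** 0.5
```

The point of the demo is that the AI runs this kind of calculation over your actual portfolio data, rather than you having to dig out the formula yourself.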
AI’s Biggest Risk Isn’t ‘Consciousness’… it’s the Corporations That Control Them
The brilliant Meredith Whittaker, former Googler and current Signal president, on why we need to worry about corporate control of AI.
“We need to dig into what is happening here, which is that, when faced with a system that presents itself as a listening, eager interlocutor that’s hearing us and responding to us, that we seem to fall into a kind of trance in relation to these systems, and almost counterfactually engage in some kind of wish fulfillment: thinking that they’re human, and there’s someone there listening to us. It’s like when you’re a kid, and you’re telling ghost stories, something with a lot of emotional weight, and suddenly everybody is terrified and reacting to it. And it becomes hard to disbelieve.”
It’s one of the reasons why we need to understand the difference between ‘Personal AI’ and ‘Personalised’ AI. AI RISKS, PERSONALISED AI
Intelligent Interfaces Of The Future
Great article from Christopher Reardon (ex-Meta, Responsible AI) on what product owners need to consider during the gold rush of AI.
“Having a well-crafted mission statement or posters of your principles and values on every office wall isn’t enough to ensure alignment nor impact — researchers and developers need explicit top-down direction reflected externally to maintain accountability. Governments are forming clearer opinions on how AI should work and setting progressively stricter policies to protect people and society. Employees, stakeholders, and shareholders should be on the same page about balancing making money while preserving and prioritizing people’s safety and agency over their data.” READ
Canada’s Opportunity To Collaborate On Verifiable Credentials
The Digital ID & Authentication Council of Canada (DIACC) has published a short report on Verifiable Credentials (VCs). Essentially saying that VCs hold tonnes of promise, and that organizations should work together to enable their broad adoption.
“Verifiable credentials can revolutionize how we store and share personal information. This technology promises greater privacy, security, and convenience in many contexts by enabling individuals to control their data. However, as with any new technology, there are significant challenges to be addressed by developers, implementers, and adopters if society is to realize its potential fully.
Despite the challenges identified in this paper, the adoption of verifiable credentials is gaining momentum, and there is increasing interest in their potential applications across various industries. As technology evolves, we will likely see further innovation and refinement. Verifiable credentials represent a significant opportunity for individuals, organizations, and society. By addressing the challenges and working collaboratively to build a robust and secure ecosystem, we can ensure that this technology delivers on its promise and enables us to create a more trustworthy, decentralized, and secure digital future.” READ
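For readers new to the format, here’s roughly what a Verifiable Credential looks like under the hood. The field names follow the public W3C VC Data Model 1.1 spec; the issuer DID, holder DID and claim values below are made up for illustration, and the cryptographic `proof` block that an issuer’s wallet or SDK would attach is omitted:

```python
import json

# Illustrative shape of a W3C Verifiable Credential (Data Model 1.1).
# The DIDs and claim values are hypothetical; a real credential also
# carries a `proof` block containing the issuer's digital signature.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer-123",       # hypothetical issuer DID
    "issuanceDate": "2023-05-22T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder-456",       # hypothetical holder DID
        "alumniOf": "Example University",     # the claim being attested
    },
    # "proof": {...}  # signature added by the issuer's signing key
}

print(json.dumps(credential, indent=2))
```

The key idea - and the reason DIACC is pushing for collaboration - is that any verifier can check the issuer’s signature without calling the issuer, so the individual holds and shares the credential directly.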
AI Has A Privacy Problem, And The Solution is Privacy Tech, Not More Red Tape
As AI becomes ever more pervasive and personal, many are looking to privacy-enhancing technologies like end-to-end encryption in our increasingly AI-first world. Fighting (tech) fire with (tech) fire:
“AI's privacy problem is a ticking time bomb. The common response to this is to call for more policies, more paperwork, more box-checking, and more bureaucracy. But let's be frank here: Do we really think a few more policies, papers, and checks will fix AI’s privacy problem? It's like using band-aids to cover a bullet wound. Red tape may give an illusion of control, but underneath it the real issues persist. Our personal data continues to flow unchecked, often ending up in the hands of advertising partners, data brokers, political campaign organizations, law enforcement, and other third parties whose names we wouldn’t recognize.”
PRIVACY TECH, E2E ENCRYPTION FOR AI
GDPR's Crisis Point
It’s been five years since the GDPR came into effect. “4% of global turnover if you don’t abide!” they yelled. Yet by the end of last year, over 60% of the 159 enforcement measures were simply ‘reprimands’. Johnny Ryan digs in:
“Almost five years after it was implemented, the GDPR is rarely enforced against Big Tech. Few major EU cases have resulted in serious enforcement measures. The European Commission must act.
Unlike any other country's enforcement authority, 75% of the Irish Data Protection Commission's GDPR investigation decisions in EU cases were overruled by majority vote of its European counterparts at the EDPB, who demand tougher enforcement action. EEA data protection authorities’ budgets are rising. Despite this 10 national DPAs still have budgets under €2 million.” READ
OTHER THINGS
There are far too many interesting and important Customer Future things to include in this edition, so here are some more links to chew on:
Cheqd: Making privacy-preserving digital credentials fun WATCH
‘Consent fatigue’: Banks warn of overreach in protecting data privacy READ
The Disappearing Computer: Humane’s Screenless Personal AI WATCH
Self-service TSA facial recognition software now available at select airports READ
The Digital Markets Act for Privacy Professionals READ
Apple Plans AI-Powered Health Coaching Service, Mood Tracker and Health App READ
Lessons Learned From the NHS Staff Passport LISTEN
Siri Founders give AI 2034 predictions WATCH
Thanks for reading this week’s edition! If you’ve enjoyed it, and want to learn more about the future of digital customer relationships, personal data and digital engagement, then why not subscribe: