Customer Futures Perspective: I have bad news... your Personal AI isn't really Personal
It's time to start asking a new set of questions about Personal AI. And looking at how it all fits into the emerging 'Customer Stack'.
Hi everyone, thanks for coming back to Customer Futures. Each week I unpack the fundamental shifts around digital customer relationships, personal data and customer engagement.
This is the PERSPECTIVE edition, a weekly view on the future of digital customer relationships. Do let me know what you think - just hit reply!
If you’re reading this and haven’t yet signed up, why not join hundreds of executives, entrepreneurs, designers, regulators and other digital leaders by clicking below. To the regular subscribers, thank you.
PERSPECTIVE
I have bad news… your Personal AI isn't really Personal
Today’s edition is slightly longer than usual, and dives into:
New Personal AI models
Are we asking the right questions about our data?
The new ‘Customer Stack’
Building digital trust
Are these AIs really personal?
So why not grab a coffee and a comfy chair.
Let’s go.
This week, the latest in the new crop of Personal AIs was released: Pi, built by Inflection AI, a company founded by co-founders of Google DeepMind and LinkedIn.
It looks pretty impressive, and is an exciting example of what a chat-based UI can be. I have become increasingly curious about privacy, security and bias in these personal AI tools, so I started by asking Pi about my data:
So Pi can’t share that information, and can’t delete any data I give it. Ok, so first we need to think carefully about what data we are sharing.
I’m told to ‘read the privacy policy’. No worse than a contact centre, I suppose. But will many people do that?
Let’s push on:
I’m now wondering how Pi’s owner, Inflection AI, makes its $225 million back.
Now this is better. Let’s get into values.
Back to the policy again.
Look, this is early in the AI cycle. It’s just the start of what’s possible. And I’m quite impressed with the interface. But I’m left with questions about… trust.
Is it a little… obstructive when asked? Transparent enough?
If the core value of these personal AI tools is about continuously learning from users, and from their data, surely trust should be central to the conversation, not just buried in a privacy policy?
Let’s imagine someone came to your door and offered a personal recommendations service. You’ve never met them before, but decide to give them a go.
You might ask some of the questions above. Including how they make money. If they pointed you back to a long policy document, would that make things more trusted, or less?
This matters because A LOT of people are about to share A LOT of personal data with A LOT of Personal AIs.
Let’s dig deeper.
Asking the right questions
We’re seeing an explosion of Personal AIs - I’m counting new ones every day.
It’s becoming clear that the next-generation digital economy is going to be powered by two things: personal data, and AI.
With the explosion of new opportunities will come enormous risks.
Of discrimination. Of bias. Of security. Of privacy. Of manipulation.
So we must be mindful, and deliberate. Because humans are going to need control over their (personal) data, and the rich and deep (personal) insights that come from it.
If we’re honest, we need to have a decent grasp of a core set of questions:
WHY are we using these AI tools? To create value? Help us avoid risk? Reduce suffering? Save us money?
WHAT data are we feeding these AI tools? WHAT data models are creating the insights?
WHO decides which data to feed the models? And WHO provides the data? WHO has access to it? WHO runs the analysis, and WHO gets to see the results?
WHERE is the data collected? WHERE is it processed? WHERE do the results go?
HOW do they handle personal data? HOW is it collected? HOW are the insights revealed? (This is particularly important when it’s health data, money data or relationship data. Is the ‘chat’ interface the right medium?)
You get the point.
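To make this concrete, here’s one way a Personal AI could answer those questions up front, in a machine-readable ‘data manifest’. This is purely a hypothetical sketch - every field name and value below is illustrative, not taken from any real product:

```python
# Hypothetical sketch: a machine-readable "data manifest" a Personal AI
# could publish, answering WHY / WHAT / WHO / WHERE / HOW up front
# instead of burying them in a privacy policy.
# All field names and values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DataManifest:
    purpose: str               # WHY the data is used
    data_collected: list[str]  # WHAT is fed to the model
    data_controller: str       # WHO decides and has access
    processing_location: str   # WHERE processing happens
    collection_method: str     # HOW data is collected
    deletion_supported: bool   # can the user delete their data?

pi_style_manifest = DataManifest(
    purpose="conversational recommendations",
    data_collected=["chat history", "stated preferences"],
    data_controller="the AI vendor, not the user",  # the crux of this article
    processing_location="vendor cloud",
    collection_method="free-text chat",
    deletion_supported=False,
)

print(pi_style_manifest)
```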
I’m not being pedantic. These questions matter.
Should we think of these new AIs as general-purpose ‘advisors’? We would ask these questions of a human doctor. We would ask them of a new digital service offering pension advice. Wouldn’t we?
These chatbots make no claims about accuracy. They offer no warranties. They take no responsibility. (Just read the policy.) So it’s a fun chat tool, right?
On privacy and data protection, they say that nothing can be foolproof, that they do their best to keep things private, and that they meet GDPR requirements.
But they don’t know… and can’t completely know, can they?
A ‘Customer Stack’ is emerging
Zoom out and you can see that this list of questions points to a set of needs that live on the customer side. To a set of technologies and capabilities that can work with and for the individual.
Together let’s call it ‘The Customer Stack’.
The customer stack helps people get things done, make decisions, and address different ‘jobs-to-be-done’.
So what are these customer capabilities?
It’s things like personal data storage. Proofs of status. Identity. Automation. Reputation. Personal applications for specific use cases or customer journeys. Insights. Data sharing. Assets. Payments. User experience. Integrations with other services.
It goes on.
I’ve written a lot over the years about ‘digital wallets’. But with this customer stack lens, you can see that wallets will be just one of the capabilities.
And if you look closer at each customer capability, you’ll see different solutions and different options:
‘Proofs of status’ becomes possible on the customer side with portable Verifiable Credentials, zero-knowledge proofs (ZKPs), and different data formats and watermarks to verify data.
Customer-side storage becomes possible with Solid ‘Pods’, Decentralised Web Nodes or Personal Data Stores. Each offers different choices about individual ‘sovereignty’ and privacy.
Reputation can be addressed on the customer side with digital wallets, more verifiable credentials and decentralised identifiers (DIDs).
The data sharing layer is enabled by new ‘smart agents’: helping me move my data around, setting rules about what goes where and who can ask for what, and keeping track of who has been sent what, when and how (see the sketch after this list).
The user experience layer can be a browser, an app, or a chat interface like Pi. Some are even going for a no-phone, no-screen option. This will be a particularly exciting - and important - part of the customer toolkit.
And of course many more customer capabilities will emerge over time.
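To make that data sharing layer a little more tangible, here’s a toy sketch of the kind of rules engine a customer-side smart agent might run. Everything here - names, structure, fields - is an assumption for illustration; no such standard API exists yet:

```python
# Toy sketch of customer-side data sharing rules: which party may
# request which data, plus an audit trail of what was actually sent.
# Purely illustrative - names and structure are assumptions, not a standard.

from datetime import datetime, timezone

# Rules the individual sets: requester -> data categories they may see
SHARING_RULES = {
    "my-bank": {"income", "address"},
    "my-doctor": {"health", "address"},
}

audit_log = []  # who was sent what, and when

def request_data(requester: str, category: str) -> bool:
    """Grant or deny a request, and record every grant."""
    allowed = category in SHARING_RULES.get(requester, set())
    if allowed:
        audit_log.append({
            "requester": requester,
            "category": category,
            "sent_at": datetime.now(timezone.utc).isoformat(),
        })
    return allowed

print(request_data("my-bank", "income"))  # True: matches my rule
print(request_data("my-bank", "health"))  # False: the bank never sees health data
print(audit_log)
```

The design point is where the logic lives: the rules and the audit trail sit with the individual, not with the service asking for the data.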
The ‘customer stack’ becomes a really important way to think about building and designing digital tools for customers.
Especially when it comes to AI. And how things like digital wallets, agents and customer AI all come together to serve the individual.
Therefore it’s the Customer Stack that’s going to need to be able to answer the most important question of all:
Will it be trusted?
Think of your personal data, your storage, your reputation and personal insights - the whole customer stack - as if it’s your home.
Would you let just anyone in? Different people would get different rules, right?
Strangers? Nope.
A plumber? Yes, but only to do One Specific Thing.
An estate agent? They only get to look at things and maybe take pictures. Don’t touch anything.
A cleaner? Now that becomes more personal. They get to move things around. But also follow rules about what not to do.
A friend? Maybe they won’t randomly walk into the bedroom, but they can help themselves to things in the fridge. Mi casa es su casa.
The point is that each relationship matters. Each has context. Does your Personal AI get the same rights as a close friend? If not, where are the boundaries?
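In software terms, those house rules are just context-based access control. A minimal sketch, with every role and permission invented for illustration:

```python
# The house-rules analogy as a tiny access-control table.
# Roles and permissions are invented for illustration only.

PERMISSIONS = {
    "stranger":     set(),                          # nope
    "plumber":      {"fix_sink"},                   # one specific thing
    "estate_agent": {"view_rooms", "take_photos"},  # look, don't touch
    "cleaner":      {"view_rooms", "move_items"},   # more personal, still bounded
    "friend":       {"view_rooms", "open_fridge"},  # mi casa es su casa
}

def can(role: str, action: str) -> bool:
    return action in PERMISSIONS.get(role, set())

# So which role does a Personal AI get? That's the open question.
print(can("friend", "open_fridge"))   # True
print(can("plumber", "open_fridge"))  # False
```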
So will personal AI be trusted?
Personal or not?
It comes down to this difficult word ‘Personal’.
Here’s my simple test to see if it’s really on the customer side.
Ask: is it ‘PERSONAL’ or ‘PERSONALISED’?
Here’s the difference:
Personal means mine. My things. My places. My receipts. My needs. My context. My data.
Personal means intimate. Knowing a lot about me. Maybe even detecting things about me that I can’t see (or choose to ignore). Just like your best friends can spot your change in mood before you do.
Personal means trusted-by-default. Rachel Botsman describes trust as “a confident relationship with the unknown”. If something is ‘personal’, by definition it means already known. It’s already trusted. It’s implicit.
Whereas…
Personalised means someone else’s thing. A standard version of a product or service, somehow tailored to me based on some preferences. And those preferences are usually inferred rather than stated: they look at my past behaviour and purchases rather than asking me.
Personalised means other. The thing comes from someone else, somewhere else. We need to take steps to decide if it is trustworthy. We have to ask questions about trust.
Personalised means it’s also serving someone else. In whose interests is the service acting? On what basis is it making a recommendation? How does it make money? Does it have enough experience or expertise to make a solid judgement?
Someone else’s AI
Let’s look at AI again. Strip it back and it all comes down to data. Specifically, personal data. My Data.
People joke that “The Cloud just means someone else’s computer.” It reminds us that these cloud platforms are elsewhere. Cloud providers run a service, and there are new requirements for trust.
Most of the fuss about data transfers outside the EU - lugging personal data over to the USA for processing, for example - has been a fight about such digital (mis)trust. As Max Schrems famously pointed out, when the data is transferred somewhere else we don’t know how that data is being protected (or not). Whether the data remains private (or not). Whether the same citizen and consumer protections are afforded ‘over there’.
So let’s look at Personal AI in the same light.
Are these tools really ‘personal’? Or are they really just ‘personalised’?
Are they yours or someone else’s?
If ‘Cloud means someone else’s computer’, then perhaps “Personalised AI means someone else’s LLM.”
Looking back at the ‘Pi’ platform launched this week, it’s very clearly Personalised. And done incredibly well. But it’s theirs. It’s not mine.
These new personalised AIs are going to become breathtakingly intimate with us. As Yuval Noah Harari has predicted since at least 2017, these AIs are going to know us better than we know ourselves.
So if an AI is really run by someone else, should we have some boundaries?
What’s stopping an AI from really being ‘mine’? Is it physics (it’s too slow, too complicated to run the AI on your local phone or laptop)? Or economics (it’s cheaper to run the AI centrally)?
Or is it something else, like the skills of the people using it? Will it be too technical to run? They said that about software applications once. Now just look at the App Stores.
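For what it’s worth, running a small model locally is already possible today. Here’s a minimal sketch using the open-source llama-cpp-python bindings, assuming you’ve downloaded a quantised model file to your own machine (the file path and parameters below are placeholders):

```python
# Minimal local inference sketch with llama-cpp-python: the model runs
# on your own laptop, so prompts and answers never leave your machine.
# The model path is a placeholder - any GGUF-format model file works.

from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="./models/local-model.gguf",  # placeholder path
    n_ctx=2048,                              # context window size
)

response = llm(
    "What do you know about me?",
    max_tokens=128,
)
print(response["choices"][0]["text"])
```

Whether a local model can ever match the centrally-run ones is the economics question. But the direction of travel matters for what counts as ‘mine’.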
So we are going to need to differentiate ‘Personal AI’ (mine) from ‘Personalised AI’ (theirs).
Just like in the movie ‘Her’. At the end of the film you find out that the AI is really personalised, not personal. (If you have seen the film, you’ll know what I mean.)
So this is the bad news I brought up at the start. Your Personal AI isn't really Personal.
It’s personalised.
So are you going to trust it?
As the tech on the customer’s side - this new Customer Stack - evolves, and as AI plays an increasingly important role, we need to pay attention.
Not just to the value created, but HOW it works and WHY.
While regulators continue to make the point that AI isn’t a new special category - that existing laws can help keep these tools in check - we need to start asking different questions.
About trust.
About personalisation.
And about intimacy.
Because it’s a new dawn for really understanding customers.
But this time, it’s personal.
Thanks for reading this week’s edition! If you’ve enjoyed it, and want to learn more about the future of digital customer relationships, personal data and digital engagement, then why not subscribe: