'Real fakes' and 'fake reals': We're going to need a new way to trust online
Recently I came across a very short TikTok video (posted on LinkedIn).
It shows a clip of how a real-time AI filter can completely change how you look. Worth a watch before reading on.
The video starts with a face filter ON, then it’s removed. The woman comments: “These are fun, but are going to be harmful to society.”
You bet they are.
It’s short, but packs a punch. This kind of thing - fake faces, fake voices, fake content, fake overlays - is going to be pervasive *everywhere*.
Computers are speeding up. The capabilities of AI are compounding. Mobile devices are becoming sensory factories.
And a tsunami of content continues to wash over us online.
Perhaps it’s the end of the beginning - the coming explosion of:
Synthetic customers (e.g. personal data, profiles, networks, bots)
Synthetic content (e.g. fake stories/news/art/music, fake narratives, fake policies)
Synthetic contexts (e.g. made up places, histories, events)
We're going to need new thinking. To counter these 'real fakes' and 'fake reals'.
Just watch this Twitter thread of TikToks from older people, quite emotional, watching their younger selves in real time. It’s amazing, and so powerful.
We’ve reached a tipping point. Fun and provocative videos today… but a fundamental risk to the digital economy and businesses tomorrow.
'Verifying' things as we do right now - looking at government documents, checking banking history, sniffing device IDs, mapping metadata - will neither be good enough nor scale.
“We need more personal data! We need more sources to correlate!” the fraud-fighters cry.
But do we?
Here’s another recent example: voice AI used to clone a customer’s voice and fraudulently access a UK bank account, defeating the famous ‘my voice is my password’ check.
Fraud checks today are backwards: playing whack-a-mole with ever more data sources, trying to make sense of the risk amid the customer-data chaos.
Plaid now even calculates the risk of you being fake if they CAN'T find your personal data in a breach - absence from the breach record is treated as a sign that you don't exist, or are fraudulent. It’s mad.
A new approach?
We talk about fraud in business. We talk about fake bots on social media. Fake news in politics. But we don’t yet talk about pervasive fake everything, fake societies and a fake digital economy.
We're going to need new approaches. And we need to start now.
This might come from a new Me2B economy. Or perhaps Ultra Apps. And the ideas of Web7 will help fight some of it.
But frankly it’s going to have to come from us.
We, the people. We, the customers. We, the citizens. With our own new digital tools - sharing our real data, our verifiable preferences, our stated needs.
Ultimately we’re going to need to be provably real.
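What could "provably real" look like in practice? One direction (my own illustration, not something the article specifies) is a signed attestation: a trusted issuer vouches for a claim about you, and anyone can verify the claim hasn't been forged or tampered with. Real schemes such as W3C Verifiable Credentials use public-key signatures; the sketch below stands in with a shared-secret HMAC purely to stay standard-library-only, and all names (`issue_claim`, `verify_claim`, the demo secret) are hypothetical.

```python
import hmac
import hashlib
import json

# Hypothetical issuer key. A real system would use an asymmetric keypair
# so verifiers never hold the signing secret; HMAC is a stdlib-only stand-in.
ISSUER_SECRET = b"demo-secret"

def issue_claim(subject: str, claim: dict) -> dict:
    """Issuer attests to a claim about a subject and signs the payload."""
    payload = json.dumps({"subject": subject, "claim": claim}, sort_keys=True)
    sig = hmac.new(ISSUER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_claim(credential: dict) -> bool:
    """Verifier recomputes the signature; any tampering breaks the match."""
    expected = hmac.new(ISSUER_SECRET, credential["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

# An intact credential verifies...
cred = issue_claim("alice", {"is_human": True})
print(verify_claim(cred))  # True

# ...but an altered claim fails, even though it looks plausible.
tampered = dict(cred, payload=cred["payload"].replace("alice", "bot-7"))
print(verify_claim(tampered))  # False
```

The point of the sketch is the shift in trust model: instead of correlating ever more personal data to guess whether someone is real, a verifier checks one cryptographic attestation and needs nothing else about you.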
I’m hopeful about that. But it’s just the beginning, and this is a journey towards digital trust online. Digital trust across our whole lives.
It’s not going to end on TikTok, but it might just start there.
If you enjoyed this and want to know more about the future of personal data and customer relationships, why not sign up: