Trust Is All That Matters In This New Digital Era
It's 2035, and I've just come back from work. It's been weird navigating the world lately, but one thing that keeps me sane is seeing what my friends are up to and chatting with them.
HumeApp is my drug of choice, but there are more apps just like it out there, each with its own niche (echoes of the social media wars of the 2010s).
I still find it weird having to log in to social media with my face ID. I know why it's required: it's how they keep bots out of the new area of the internet, the Real Web. The face ID is compared to the government ID you submit to get an account. Any suspicious behaviour, such as an IP change after verification or an attempted forgery, automatically flags your account and forces you to reverify.
I'm not 100% thrilled that someone out there has access to my government ID, but it does mean all the conversations I have are with real people. Every account is linked to a government ID, and the Real Web runs on a democratic trust-based system, where good behaviour makes you look more trustworthy and bad behaviour lowers your trust score.
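If you want a feel for how such a trust system might hang together, here's a toy sketch in Python. To be clear, everything in it is invented for illustration: the class name, the score values, and the flagging rules are my own assumptions, not a description of any real system.

```python
# A toy sketch of the fictional Real Web trust system described above.
# All names, thresholds, and rules here are hypothetical.

from dataclasses import dataclass


@dataclass
class RealWebAccount:
    gov_id_hash: str          # a hash of the verified government ID, never the ID itself
    verified_ip: str          # the IP recorded at verification time
    trust_score: float = 50.0 # start neutral; behaviour moves it up or down
    flagged: bool = False     # flagged accounts must reverify before posting

    def record_behaviour(self, good: bool) -> None:
        """Nudge the trust score, clamped to the range [0, 100]."""
        delta = 1.0 if good else -5.0  # bad behaviour hurts more than good helps
        self.trust_score = min(100.0, max(0.0, self.trust_score + delta))

    def check_login(self, current_ip: str, face_matches_gov_id: bool) -> bool:
        """Flag the account on an IP change or a failed face match, as in the story."""
        if current_ip != self.verified_ip or not face_matches_gov_id:
            self.flagged = True
        return not self.flagged


account = RealWebAccount(gov_id_hash="hypothetical-hash", verified_ip="203.0.113.7")
account.record_behaviour(good=True)   # trust_score rises from 50.0 to 51.0
account.check_login("198.51.100.9", face_matches_gov_id=True)  # IP changed: flagged
```

The asymmetric deltas (small reward, large penalty) are just one design choice; a real scheme would need far more nuance, which is rather the point of the essay.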
The Surface Web (or the regular internet, as non-technical people know it) is still used quite frequently, but when you need 100% authenticity, the Real Web is the place to go. I miss the days when I didn't need to question whether a high-quality video was real. That is no longer a reality.
Trust Will Become The Most Important Thing In The Digital World
Ok, maybe the story I've portrayed above is a bit dystopian and unlikely to happen. But the underlying message is nonetheless valid.
Trust is becoming increasingly important. AI has been the catalyst for that, as it is rapidly becoming better at generating videos, images, voice, and text.
In this world, how do we know what to trust and what not to trust?
The other day, I was scrolling through Instagram and came across a kid (who is most likely not real) who deepfakes himself as celebrities (typically women) and then sells the technique to degenerates wanting to copy this get-rich-quick scheme, or to use it for their own fantasies.
I really tried my best to point out some flaws in the video, but there were hardly any.
After the attempted assassination of President Trump earlier this week, a video surfaced of a man running through a police blockade in the Hilton Hotel. At first, I thought it was very AI-like; something didn't feel right, and the fact that it was grainy could definitely help cover that up, but alas, it was real.
I've only recently started doubting my skill at spotting AI-generated content, and I work in tech; others don't have the same luxury of knowing what can and can't be AI-generated, or how to spot it. Yet.
I mean, without trust, much of our modern way of living falls apart. For example, imagine you are a kid looking for reputable sources to keep up with the news. You grew up on YouTube, so you look for a YouTuber, and you get swamped with AI-generated news channels. How can you trust them if they have nothing to lose?
Or imagine you are scrolling through Instagram, and you are looking for a friend's account, and you instead follow someone who is impersonating them.
In these scenarios, you can always fall back on going to mainstream media channels or ones that have been built before AI really became popular. Regarding your friend's page, next time you see them in person, just ask which account theirs is.
But what about videos coming out of wars? During wartime, independent sources become the most vital communicators, with no horse in the race. Yet how can you trust independent news sources if someone could just create a very realistic video of an army committing war crimes? Independent news outlets are threatened by this deterioration in trust.
Video evidence in courts of law is already under pressure from this, yet it has held strong so far. But that was against older generations of video and image models; what about the new ones?
Even in business, how can you be sure the AI-generated content you’re using for work is correct? The easy solution would be to keep a human in the loop to verify quality, but AI companies will hate this, because it shows humans still don’t fully trust AI. I wouldn’t trust it with my whole heart unless I’d double-checked the work; would you?
I don’t have a single, clean solution to this trust problem, but I know it’s real and it’s going to matter more and more. It will be interesting to see what comes out of it as we move forward.
A bit of a gloomy one today, but one that has been really bugging me recently. Hope you enjoyed the read.
Authentically Written By Lucas Bernardo.