Welcome to the information apocalypse
“What knowledge have we of anything, save through our own minds? All happenings are in the mind. Whatever happens in all minds, truly happens.” - George Orwell, 1984
The AI technology that shouldn’t exist…
I’ve made it a point throughout my time writing Future of Product to avoid fear-mongering, and (as much as I can) to provide an unbiased lens on both the positive and negative aspects of AI technologies. I’ve spoken with over a dozen AI experts and founders at this point for the podcast (shameless plug), and gotten a glimpse of the miraculous things that AI could deliver for our future. But while I’ve touched on the negative externalities of AI with my guests, I’ve always tried to view the technology they work on through their eyes - which undoubtedly leads to an unintentional positive bias.
But, as I sat down to research a certain type of AI product making the headlines this week, I was blindsided not only by the sci-fi possibilities it presents for the future, but by the horrifying reality of what it’s already being used for today. I’m talking about deepfakes...
Remember The Shining? I still remember watching Jim Carrey utter those famous lines for the first time: “heeeere’s Johnny!”... or wait, was that Jack Nicholson? When memory fails, we all turn to Google to augment our knowledge, to illuminate the cobwebbed corners hiding factoids and memories that have gone dark in our own minds. But what happens when a sizable percentage of the information on Google is not only wrong, but intentionally and purposefully misleading?
Many people were awoken to the ever-growing issue of mis- and disinformation back in 2016, when malicious actors utilized social networks to stoke divisions in the US and incite the chaos that led in part to Trump’s election. Since then, not only have few solutions materialized, the problem has only grown. That same year, Oxford Dictionaries selected “post-truth” as its word of the year, and I would argue that this choice will in retrospect become symbolic of the end of the era of a unified, shared reality tangible to all humans.
It’s obviously rather naive to think that misinformation wasn’t part of our daily diet as consumers long before such a word existed. But a world in which the average piece of information a citizen consumes is more likely than not to be “synthetic” has never existed outside of totalitarian regimes like the Soviet Union - until now.
The problem here is undeniably AI. The AI algorithms that relentlessly push misinformation out to the masses, the non-existent or non-functional AI tasked with gauging the veracity of that information, and now, the AI that enables malicious actors to create incredibly convincing misinformation at warp speed. Swirl these three AI problems together and you get a recipe for an entirely new era of human history... the era of “post-truth”. An era that began thanks to AI algorithms, and one that will soon be sent into overdrive thanks to the rapid evolution of generative AI.
Recently, Europol estimated that 90% of the content online will be synthetically generated by 2026 (for those who aren’t math inclined, that’s less than 3 years away). Of course, this shift will be driven by many different “AI” technologies, including those that I’m in favor of - and while I’m certainly conflicted by this duality, I personally see virtually nothing positive coming of the shift to a synthetic internet.
But what does this all really mean? What does a post-“information apocalypse” world even look like? Let’s focus on deepfakes - assuming that effective regulation isn’t forthcoming, what role will they play in the information apocalypse?
Remember the Access Hollywood tapes that dinged Trump’s numbers late in the presidential race against Hillary Clinton? Imagine a world in which, to a politician’s followers, any gaffe, faux pas, or admission of guilt in a crime is automatically and unquestionably written off as a deepfake. In such a world, what levers are left to hold the politician accountable for their documented actions? How much longer will criminal proceedings take in a post-truth world, where due diligence requires stringently verifying that each piece of digital evidence is actually real? Politicians and public figures could (and let’s be real, will) deny any allegations by simply branding the incriminating evidence as a deepfake. This could severely compromise the integrity of our legal systems and the ability to hold people (especially powerful people) accountable.
Imagine a world where every audio-visual testimony could be refuted as a deepfake. With the proliferation of this technology, we're on the brink of an epoch where not just digital media, but any media depiction of reality once seen as irrefutable, is questioned with an unprecedented degree of skepticism as a default, if not entirely dismissed from the jump. Furthermore, in a world in which media is segmented and personalized to each individual on both an algorithmic and content level, why would someone ever question their own beliefs? If no counter-evidence is to be believed, people will default to their pre-existing worldviews ten times out of ten.
Beyond the courtroom, deepfakes could pose a serious threat to the very concept of personal identity. The post-truth era is the golden age of the scammer. From fraudulent bank transactions to impersonating loved ones, deepfakes are already making us question even our most personal interactions - and believe me, this is only the beginning. In a world where truth belongs to the highest (or most unethical) bidder, trust in our own senses could be eroded, leading to a deep-seated paranoia and the type of depersonalized society depicted in sci-fi dystopias. In such a world, the concept of 'seeing is believing' would be outdated. We would require sophisticated tools, expertise, or third-party verification to trust any form of media. This will inevitably widen the gap between those who have the resources to discern truth from falsehood and those who don't.
The advent of a predominantly synthetic online landscape will drastically alter our relationship with information. In the face of constant misinformation, we’re likely to retreat further into our own bubbles, favoring info that aligns with our worldview and rejecting anything that challenges it. Anyone who has lived through the last decade of American society likely has an idea of what the repercussions could be.
Now, for the sake of balance, it's worth noting that deepfake technology optimists do see the potential for positive uses that extend beyond improved CGI or pretending that you’re Tom Cruise on TikTok. They argue that deepfakes could be used to create engaging historical content to augment education. In the entertainment industry, creators could use deepfakes to revive deceased actors for new performances or create convincing special effects (notably, this is kinda the very issue that has led the actors’ and writers’ unions to go on strike, so...). The technology could also offer new tools for artists, filmmakers, and others working in creative fields, enabling them to create more expressive and engaging content. However, in my opinion, these are all incredibly weak justifications for a technology with near limitless downside.
Unfortunately, the magic of swapping a person’s face with that of another only lasts for about as long as it takes you to think about how the technology will realistically be used. The most alarming application of deepfake technology - and to this point the main driver of both its progression and its revenue - is and has been the creation of non-consensual explicit content, often called "deepfake porn" or "revenge porn." This is a disturbing phenomenon in which the faces of individuals, almost always women, are superimposed onto explicit images or videos without their consent. Let’s be clear: this is not only unbelievably horrifying, it is the main reason why deepfake technology has progressed so quickly in the first place. Sound like hyperbole? It’s not...
Sensity AI conducted research in 2018 showing that of all the deepfake videos on the internet, a staggering 90-95% are nonconsensual deepfake pornography, and in over 90% of those cases the victims are women. It’s not at all an overstatement to say that the purveyors of these technologies have quite literally invented a new way to abuse and mistreat women. This degenerate use of deepfakes contributes to the ongoing culture of harassment, sexual exploitation, and sexism which has characterized the internet since its inception, and it cannot be overstated how devastating the psychological ramifications are for the victims. And it's not just celebrities or public figures who are at risk; anyone's image could be appropriated and misused in this way, leading to a gross invasion of personal privacy and potentially causing significant harm to both their personal and professional lives. If there’s even a single photo of you on the internet, you’re a potential victim.
In conclusion, while I can still see the cost-to-benefit analysis winning out for other forms of AI technology like LLMs and text-to-image generators, I fail to see how a technology with as many drawbacks as deepfakes can be justified by better CGI and the novelty of wearing someone else’s face in a TikTok video. This technology is built on the back of the mass-scale exploitation and abuse of women. It’s a technology which enables malicious actors to impersonate, vilify, and socially and psychologically devastate their enemies, which empowers corrupt politicians to avoid culpability for their actions, and which erodes the public’s trust in what they see with their own eyes, plunging us deeper into dystopia. Personally, while I’m certainly not one to jump to regulation as the answer to tech-based problems, I simply cannot understand how this technology can be allowed to continue existing in its current form.
Got thoughts? Let me know down in the comments.
3 unique AI tools I’ve been digging into...
🔎 Zelta AI
The product - Zelta sifts through Gong call transcripts, support chats, and survey data, extracting key customer insights that would take human analysts hours to find. Zelta offers a comprehensive dashboard to identify customer trends and make data-driven roadmap decisions. You can even use its AI-powered chatbot to pull out specific insights with a simple conversational query. And the best part? You can directly pull data from platforms like Gong, Zoom, Fireflies, Google Drive, Intercom, and more.
The use case - Product teams seeking a deeper understanding of customer requests, challenges, and concerns will find Zelta AI indispensable. By segmenting data based on customer type, teams can cater their product to different needs and roadmap for different cohorts. And if the dashboard doesn't provide the answers? Simply ask their AI chatbot.
Listen to my interview with co-founder & CEO Pierce Healy here
⚙️ Alectio
The product - Alectio is the first full-stack MLOps platform for data-centric AI. It streamlines the process of preparing, curating, annotating, managing, and visualizing your training data. Using Alectio, you can identify the most informationally-dense records for efficient model training and save up to 95% of time and resources. It even offers an auto-labeling solution in combination with a marketplace of responsive labeling companies.
The use case - Alectio is perfect for data scientists and AI developers looking to cut down on labeling costs, reduce model training times, and uncover the data their model really needs for peak performance. By finding and using the most useful data, they can train better models faster and more efficiently. Plus, with the Auto-Pilot Labeling, they can turn around every row of data they label faster than ever.
My interview with the founder, Dr. Jennifer Prendki is dropping next Tuesday, so stay tuned!
🚀 Remyx
The product - Remyx is your personal assistant for crafting tailored AI for your applications. With its AutoMLOps, complex AI customization and deployment become simple. Its user-friendly chat interface allows anyone to create custom vision models, no code or data required. You can train and download a new model in just a few clicks or lines of code. Once done, your model is ready for swift integration into your app.
The use case - For developers and AI enthusiasts looking to personalize AI for their applications without getting lost in code and data, Remyx is an ideal solution. It simplifies the process, giving you a model that's ready to download and use, saving you time and effort.
Expect to see my interview with the co-founders soon!
Chronicles of the circuit circus
Scammers use AI to mimic voices of loved ones in distress - by Carter Evans & Analisa Novak. The big pull quote:
“Pete Nicoletti, a cyber security expert at Check Point Software Technologies, said common software can recreate a person's voice after just 10 minutes of learning it.
To protect against voice cloning scams, Nicoletti recommends families adopt a "code word" system and always call a person back to verify the authenticity of the call. Additionally, he advises setting social media accounts to private, as publicly available information can be easily used against individuals.”
Tech Leaders Warn the U.S. Military Is Falling Behind China on AI - by Vera Bergengruen for TIME. The big pull quote:
“The country that is able to most rapidly and effectively integrate new technology into war-fighting wins,” Alexandr Wang, the CEO of Scale AI, told lawmakers on a House Armed Services subcommittee. China is spending three times more than the U.S. on developing AI tools, Wang noted. “The Chinese Communist Party deeply understands the potential for AI to disrupt warfare, and is investing heavily to capitalize,” he said. “AI is China’s Apollo project.”
What future for journalism in the age of AI? - by Khaled Diab for Al Jazeera. The big pull quote:
“Just how fast AI is progressing and how deeply it is infiltrating the media was put on stark display recently when Germany’s largest tabloid, Bild, announced that it was laying off a third of its staff and migrating their functions to machines. This follows BuzzFeed’s decision in January to use AI to generate quizzes and its quiet experimentation with AI-generated content ever since, especially SEO-primed travel guides.
“The functions of editor-in-chief, layout artist, proofreader, publisher and photo editor will no longer exist in the future as we know them today,” Bild’s editor-in-chief said in an email to staff.”
Signoff
Thanks so much for joining me on another Future of Product! I know this week’s topic was a little less upbeat than usual, but I feel passionately that deepfake technology is a major problem and only becoming more so. As always, if you disagree or want to share your own take, feel free to drop your thoughts down in the comments - I’m always open to discussion.
I look forward to seeing you next week!