Deepfakes are becoming increasingly realistic, and they pose a challenge that will continue to affect politics, media, and other industries. Deepfakes are videos or images generated by artificial intelligence that mimic real people, and reports indicate that their number has grown from roughly half a million in 2023 to 8 million in 2025. In 2026, they are fooling experts and everyday users alike, fuelling fears of a post-truth world where nothing is verifiable. This post explores the impact deepfakes will have on our shared reality.
Why Are Deepfakes Increasingly Common?
The rapid expansion of AI, particularly video generation models, has made deepfakes easy and cheap to create, even for people without professional-grade software. Earlier iterations of the technology struggled to keep an image consistent between frames, but now the person being mimicked looks the same throughout the video, making fakes much harder to detect. The amount of input material needed has also shrunk, meaning that almost anyone with an online presence can be cloned. One short YouTube video, for example, or a brief phone call with a scammer harvesting samples of your voice, is enough to generate a reasonably convincing deepfake.
The Problem Deepfakes Solve and the New Ones They Create
Deepfakes promise new creative opportunities, such as resurrecting deceased celebrities for new works of art or producing comedy. Another potentially beneficial application is using a celebrity's likeness in public information campaigns, which has been shown to increase trust in a message in some countries. Proponents have also suggested using the technology to build mental health support avatars that can act as a therapist for a person in need.
Despite this potential, one of the most common uses of deepfakes appears to be fraud, as perpetrators can clone voices from just seconds of audio. In one case in Hong Kong, a scammer cloned the likeness of an executive and fooled a finance worker into wiring $25.6m after a single video call. Retailers have reported receiving over a thousand AI scam calls a day, many of them extremely realistic.
Another issue with deepfakes is their potential to erode epistemic trust, which matters particularly in journalism and politics. Indeed, mere awareness of deepfakes has already eroded trust by creating a liar’s dividend, where leaders can dismiss real scandals as fakes. A survey across eight countries found that prior exposure to deepfakes increases the audience’s belief in unrelated misinformation, especially on social media.
Real-World Harm
Deepfake fraud hit record levels in 2025, with deepfake-as-a-service offerings allowing anyone to create a fake persona that can then be used to target victims. In politics, deepfakes influenced the 2024 U.S. elections through robocalls mimicking then-President Joe Biden, prompting some states, such as Montana, to mandate disclosures. Experimental studies have also shown that deepfakes of infrastructure failures, such as a collapsing bridge, increase distrust in the U.S. government. This suggests that the mere existence of deepfakes can lead people to question the news media even when a story is accurate, which in turn reduces trust in government and in our own communities. These effects will likely weigh on elections around the world as the technology becomes more common and easier to use.
Another issue is the rise of non-consensual deepfakes: an estimated 90-95% of deepfakes are pornographic, and the majority target women. Grok AI, developed as part of the social network X, recently said it would stop letting users generate pornographic deepfakes after a scandal in which it produced such images even in jurisdictions where they are illegal. The problem persists, however, and Grok AI still appears to be creating extremely graphic imagery in its standalone application regardless of the legal status of those images. UNESCO has noted that this type of deepfake can seriously harm victims and cause reputational damage even when they can prove the images are fake. Once again, it is the existence of deepfakes in general that erodes trust in whether someone is telling the truth, even when that person has been significantly harmed by a deepfake.
Who Wins, Who Loses?
One of the challenges of deepfakes is that they are incredibly hard to police, particularly as AI tools become cheaper and easier to access. Power has shifted towards creators, who may be bad actors armed with inexpensive tools like OpenAI’s Sora 2 or Google’s Veo 3. Well-resourced entities such as banks can detect deepfakes using multi-factor checks, but individuals rarely have access to such tools and risk becoming victims of fraud. Governments and Big Tech retain significant power, especially compared with excluded voices such as the victims of intimate deepfakes, many of whom are still awaiting justice as the tools evolve faster than the legal system.
A Future We Can Change
Generally speaking, the vast majority of the public think deepfakes pose more risks to society than benefits, citing fraud and growing distrust in institutions, yet responses to the issue remain uneven. There are ongoing questions about what we can do differently, including teaching critical media consumption in schools so that more people know how to spot deepfakes. The truth has not been destroyed yet, but we need to prioritise verification approaches before deepfakes become fully woven into everyday life.