In a recent study, researchers at University College Cork recruited 440 participants to watch deepfake clips of reimagined movies: The Matrix starring Will Smith, Indiana Jones starring Chris Pratt, and The Shining starring Brad Pitt and Angelina Jolie. Participants also watched clips from real remakes of movies such as Charlie and the Chocolate Factory, Total Recall, and Carrie, and some were additionally given written descriptions of fake remakes.
After the experiment, the researchers found that almost 50% of participants falsely remembered the deepfake remakes as having actually been released in cinemas. Many even rated these fabricated movies as better than the originals. This supports the argument that people are more likely to accept something such as a deepfake at face value when it aligns with their existing beliefs, and it points to a larger challenge in countering the spread of false information and the potential misuse of deepfake technology.
While educating individuals about the existence and potential impact of deepfakes is certainly important, it's clear that relying solely on education may not be sufficient. Developing tools and technologies that can effectively detect and flag deepfake content is essential.
DuckDuckGoose believes that collaboration among tech companies, researchers, policymakers, and the wider public is key to tackling the challenges posed by deepfakes and misinformation. By combining efforts to raise awareness, develop detection tools, and encourage responsible use of technology, we can strive to create a digital landscape where individuals can navigate content with greater confidence and discernment.
A group of researchers at University College London recently played audio samples to 529 participants to determine whether they could distinguish real speech from synthetic speech. Participants correctly identified the synthetic speech only 73% of the time, and even after being taught to recognize telltale aspects of synthetic speech, their accuracy improved only slightly.
What this tells us is that even with training, we as humans cannot consistently detect whether audio is synthetically generated. This is especially concerning given that synthetic speech technology is still in its early stages; as it matures, detecting synthetic speech will only become more challenging.
At DuckDuckGoose, we understand the severity of this threat and the growing potential for impersonation and misinformation that AI-generated voices create. That's why we've developed our Generative AI Detection, which can distinguish between real recordings and synthetically generated AI audio. It's designed to combat the fraudulent use of AI voices, providing an essential layer of security in today's digital landscape. You can learn more about it here.