Four things you can do to fight against deepfakes
Video and audio content synthesised by artificial intelligence is no longer a thing of the future, and it may well be one of the major online threats of the coming years. Nevertheless, there are steps internet users can take to help counter this emerging technology.
Earlier this year, the FBI released a warning stating that malicious actors “almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12 to 18 months.” Most of us know synthetic content under the term “deepfake” (a blend of the words “deep learning” and “fake”). Deepfakes are a form of synthetic media generated by artificial intelligence (AI) that first emerged on internet forums in late 2017 and have since found their place predominantly in entertainment.
Nowadays, you’ll mostly come into contact with deepfakes created and shared for comedic purposes — many of us have, at some point, downloaded one of the many apps on the market that swap the face of our favourite movie stars or superheroes for our own. And even though the results can sometimes be eerily realistic, we share them light-heartedly with our friends.
Although some deepfakes are easily spotted and not malicious in intent, the technology is advancing to the point where certain deepfakes cannot be distinguished from authentic videos with the naked eye. According to the FBI report, users are currently more likely to be deceived by information whose context has merely been altered (in other words, fake news) than by information that has been completely synthesised.
This, however, is bound to change as AI and machine learning (ML) technologies become more sophisticated and convenient.
Four troublesome heads
Deepfakes are essentially a new type of fake news: content created to deliberately misinform or deceive. Fake news itself is nothing new; the difference is that AI-based tools now accelerate and facilitate the sharing of information online, so false stories can spread faster, at a larger scale, and consequently do more harm.
As with fake news, the manipulation of images and videos has a long history: Stalin notoriously had his political enemies erased from photographs, and special effects in film date back more than a century. In his famous 1898 silent film The Four Troublesome Heads, Georges Méliès used multiple exposures against a black background to create the illusion of detached heads.
Today, the tools to create deepfakes fit right in your pocket: a wide range of free apps lets users swap faces with celebrities and movie characters. And although most of us could spot the 1898 special effects as easily as we recognise a friend in a Maleficent Halloween costume, professional deepfakes are made to be as convincing as possible, at times very successfully. For example, a number of videos impersonating the American actor Tom Cruise emerged this year, and the results are strikingly convincing.
The emerging world of deepfakes
What is the technology behind this emerging trend? Companies working with deepfakes typically use a machine learning technique called Generative Adversarial Networks, or GANs. The method sets two neural networks in competition with each other: a generator and a discriminator. The generator learns to produce deepfakes, while the discriminator learns to tell whether an image is real or fake, like a game of cops and robbers in which both sides continuously improve against each other.
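The adversarial loop can be sketched in a few lines of code. The toy example below is purely illustrative (it is not Cogniware's or any production system): a one-parameter "generator" learns to imitate one-dimensional data while a logistic-regression "discriminator" tries to tell real samples from fake ones. Real GANs use deep networks and images, but the alternating updates follow the same idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_samples(n):
    # "Real" data the generator must learn to imitate: N(3.0, 0.5)
    return rng.normal(3.0, 0.5, size=n)

# Generator g(z) = w*z + b turns noise z ~ N(0,1) into fake samples.
w, b = 0.1, 0.0
# Discriminator D(x) = sigmoid(a*x + c) scores how "real" x looks.
a, c = 0.0, 0.0

lr, n = 0.05, 64
for step in range(2000):
    z = rng.normal(size=n)
    fake = w * z + b
    real = real_samples(n)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        grad_logit = sigmoid(a * x + c) - label   # dBCE/dlogit
        a -= lr * np.mean(grad_logit * x)
        c -= lr * np.mean(grad_logit)

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    fake = w * z + b
    grad_logit = sigmoid(a * fake + c) - 1.0
    w -= lr * np.mean(grad_logit * a * z)
    b -= lr * np.mean(grad_logit * a)

# The generator's offset b should have drifted toward the real mean (3.0),
# because matching the data is the only way to keep fooling the discriminator.
print(round(b, 2))
```

In a full GAN, `w, b` and `a, c` are replaced by millions of network weights and the gradients come from backpropagation, but the tug-of-war between the two updates is exactly the "cops and robbers" dynamic described above.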
“After millions of attempts, the discriminator becomes better and better at recognising deepfakes and is able to reach unexpected results,” shares Martin Černý, head of the global partner network at Cogniware, a Czech company that has been developing specialised software called Insights for national security forces to assist in criminal investigations of fake news. Their focus is text-based misinformation, but they have recently worked on a bespoke project to develop a solution for identifying deepfake videos.
If you want to know how convincing the technology can be, watch the demonstration in an educational segment by Czech Television, which features an impressive imitation of the TV host Daniel Stach.
Deepfakes are not limited to video, either: audio deepfakes are becoming increasingly frequent and dangerous. In whichever form, they have the potential to be misused by foreign and criminal cyber actors, as the FBI warned in March 2021. Their motivation? Spear phishing, ransomware, fake social media profiles, or the spread of malicious narratives.
Protect your data, protect yourself
To acquaint Czech society with these new threats, a local startup, DataSentics, collaborated with Czech Television on Futurerto, an educational series for children illuminating the dangers of fake news and deepfakes. Such efforts are instrumental in helping people protect themselves, as Jakub Trusina, a data scientist at DataSentics, told us: “It is important to raise awareness of these technologies and to remind ourselves that we have to be cautious of believing media content. This includes videos, no matter how realistic they may seem. We always have to look at it in a wider context.”
We can also protect ourselves by consciously minimising the amount of personal content we share online, points out Martin Černý of Cogniware. “The more photos and videos we share online, the more data we provide malicious actors to work with to make accurate deepfakes — and this applies equally to other forms of fake news,” he said.
This does not mean that you have to delete your favourite face-swap app immediately. But it does mean that all media consumers need to become more aware of the potential dangers that AI and ML-generated content can bring both to the individual and society.
Many companies worldwide are working to address the challenges posed by these technologies, whether by developing ever-better detection algorithms that help us spot fake content accurately, or by producing educational content and raising awareness. Although most deepfake videos are still not convincing enough to fool a careful viewer, there are plenty of realistic examples pointing to what this technology might look like in the near future.
How to behave online?
Navigating an online world full of hoaxes may seem daunting, but remember that there was once a time when analysts recommended abandoning e-mail altogether as the only way to end spam. Although spam remains a problem, we have learned to reduce its risks to a minimum through smart algorithms and widespread awareness.
So, how exactly can I protect myself against deepfakes?
Restrict the content that you post online
To make a realistic deepfake, someone has to train the algorithm on hours and hours of real video or audio footage. Therefore, the best thing you can do to prevent someone from making a deepfake of you is to minimise the number of videos and photos you post on your social media accounts and to tighten your privacy settings.
Maintaining awareness of recent technological advances goes a long way, as does the understanding of the challenges these technologies pose.
Verify your sources
Make sure you use high-quality sources and, if in doubt, verify the information across several reliable platforms. Healthy scepticism is warranted when it comes to content of unknown origin.
Equip yourself with reverse image search
Reverse image search can potentially help you spot a fake video or social media profile without having to get too technical. If you’re lucky, you may find similar images online, revealing the true context of the content you are investigating.
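Under the hood, reverse image search engines compare compact "fingerprints" of images rather than raw pixels. The sketch below is a simplified illustration of one such fingerprint, an average hash, and is not the algorithm of any particular search engine: two visually similar images get nearly identical hashes, so a small Hamming distance suggests a re-used or lightly edited picture.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Perceptual 'average hash' of a 2-D grayscale image array:
    downscale to hash_size x hash_size tiles, then threshold at the mean."""
    h, w = img.shape
    img = img[:h - h % hash_size, :w - w % hash_size]  # crop to tile multiple
    tiles = img.reshape(hash_size, img.shape[0] // hash_size,
                        hash_size, img.shape[1] // hash_size)
    small = tiles.mean(axis=(1, 3))                    # one value per tile
    return (small > small.mean()).ravel()              # boolean "bit" vector

def hamming(h1, h2):
    # Number of differing bits: 0 means near-duplicates,
    # around half the bits for unrelated images.
    return int(np.sum(h1 != h2))

# A synthetic grayscale "photo" (vertical gradient), a brightened re-upload
# of the same picture, and an unrelated noise image.
original = np.outer(np.arange(64.0), np.ones(64))
brighter = original + 25.0
unrelated = np.random.default_rng(1).uniform(0, 255, size=(64, 64))

print(hamming(average_hash(original), average_hash(brighter)))   # 0: same picture
print(hamming(average_hash(original), average_hash(unrelated)))  # large
```

A brightness change shifts every pixel equally, so the threshold-at-the-mean step produces the identical hash; this is why such fingerprints survive the light edits that typically accompany stolen profile photos.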