When I heard about deepfakes for the first time, I was fascinated by the idea of an artificial intelligence recreating a person's facial expressions on video. But it soon became clear that the technology could be dangerous: deepfakes have already been used for political manipulation, for example. So is it possible to tell when a video is fake?
The problem becomes more serious when we consider that there are several types of deepfakes. The best-known kind uses a person's image to create a video in which they make a statement that was never actually made. But others create fake profiles on social networks or simulate voices (so-called deepvoices), for example.
Regardless of the type, the risk of the technique being used to spread false content or fake news is high. In the next few lines, find out what you can do to guard against these pitfalls.
Let's start with a quick definition: deepfake is a technique that uses deep learning (a branch of artificial intelligence) to manipulate a person's facial expressions or statements in photos and videos.
In most cases, the manipulation is so well done that it is difficult to identify a deepfake. That is dangerous. If, on the one hand, it is fun to imagine the Mona Lisa (from Leonardo da Vinci's painting) speaking in a video made with the technique, on the other hand, deepfakes can be used to spread fake news on social media, among other problems.
There's a good reason for this: the mechanisms that generate deepfakes have evolved rapidly. To give you an idea, neural networks of the GAN type (explained later in this text) have proven very effective at this kind of task.
Combined with voice-cloning techniques, video manipulations based on deepfakes have become increasingly common on social networks and in services such as WhatsApp and Telegram. Many of these fake videos aim to embarrass people or attribute false statements to political figures.
Here’s a deepfake example showing the president of Ukraine surrendering to Russia. Another, in the image below, puts actor Tom Cruise’s face on someone else.

Tips for spotting deepfakes without being an expert
Fernando Rodrigues de Oliveira, better known as Fernando 3D, created faceFactory alongside Bruno Sartori. The company applies artificial intelligence to "good deepfake" projects, such as film dubbing. So much so that they prefer to call the technique "synthetic media".
Regardless of the technical term, the fact is that Fernando knows the subject. In our conversation, he explained that deepfakes "learn" over time; after all, they are based on artificial intelligence. That is why it is increasingly difficult to identify a video manipulated with the technique.
Watch the eyes and mouth
Even so, there are some tricks you can use to spot a deepfake, says Fernando. One of them is to observe the gaze of the person in the video. "The eye can get a little lost," he explains. It is as if the person were not focusing on any object or on the camera.
Also according to Fernando, watching the movements of the mouth, or of the face itself, helps. In the process, you may spot unnatural movements, such as the lips opening exaggeratedly to pronounce a certain word.
Put that way, it sounds easy. It isn't. Fernando warns that today's deepfakes are much more accurate than those created three or four years ago because of the evolving nature of artificial intelligence. That's why our best defense today is paying attention to the details.
Pay attention to details to identify a deepfake
To summarize, here is a list of features that Kaspersky points to as telltale signs of fake videos (note that some of them overlap with Fernando's tips); a rough code sketch of one of these checks follows the list:
- sudden movements
- scene lighting that changes from frame to frame
- change in skin tone
- strange blinking, or no blinking at all
- lips poorly synchronized with speech
- digital elements (artifacts) in the image
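
If you are comfortable with a little code, one of these cues can be roughly checked automatically. The sketch below, assuming Python with OpenCV installed and a hypothetical file name (suspect.mp4), flags abrupt frame-to-frame changes in average brightness, the lighting inconsistency mentioned in the list. It is only an illustration, not a reliable deepfake detector.

```python
# Rough heuristic sketch: flag sudden frame-to-frame brightness jumps in a video.
# "suspect.mp4" is a placeholder path; the threshold of 15 is arbitrary.
import cv2

cap = cv2.VideoCapture("suspect.mp4")
prev_mean = None
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mean_brightness = float(gray.mean())
    if prev_mean is not None and abs(mean_brightness - prev_mean) > 15:
        print(f"Frame {frame_idx}: sudden lighting change "
              f"({prev_mean:.1f} -> {mean_brightness:.1f})")
    prev_mean = mean_brightness
    frame_idx += 1

cap.release()
```

A real scene cut will also trigger this check, so any flagged frame is just a hint to look more closely, not proof of manipulation.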

Is there software that detects deepfakes?
It would be great, but not yet, at least not software that works well. There is already work in this direction, however. Fernando cites, as an example, Facebook research into detecting deepfakes via reverse engineering. But that project is not ready for use yet.
One of the reasons for this difficulty lies in so-called GANs (Generative Adversarial Networks). In this approach, one system "duels" another to generate the most convincing deepfake possible.
Briefly, a GAN works like this: one neural network (the generator) produces thousands or even millions of variations of a face image and submits them to a second network (the discriminator), which tries to tell whether they are fake. When the discriminator can no longer tell them apart, the deepfake is ready.
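
To make that "duel" concrete, here is a minimal, hypothetical sketch of a GAN training loop in Python with PyTorch, using toy one-dimensional data rather than faces. It is not the tooling deepfake creators actually use, but the generator-versus-discriminator dynamic is the same.

```python
# Minimal GAN sketch (PyTorch): a generator learns to fool a discriminator.
# Toy example: generate samples that look like draws from N(4, 1.5).
import torch
import torch.nn as nn

latent_dim = 8

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),                      # produces a fake "sample"
)
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),        # probability the sample is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0        # "real" data
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)                      # "fake" data

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to make the discriminator label its fakes as real.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(generator(torch.randn(5, latent_dim)).detach())  # samples near 4.0 if training worked
```

Note that the generator never sees the real data directly; it only improves by learning what fools the discriminator, which is exactly why the final output ends up so convincing.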
Because this back-and-forth between the two networks is fast and involves many attempts, identifying a "digital clone" is an increasingly complicated task. But not an impossible one. If you follow the guidelines above, you will have a good chance of success.
Meanwhile, researchers are looking for ways not only to identify fake videos but also to prevent their creation through validation techniques. One study even considers using blockchain for this purpose, since the technology works in a decentralized way and is quite resistant to tampering.
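
Schemes like this generally rely on registering a fingerprint of the original file somewhere tamper-resistant, such as a blockchain. As a rough illustration only, the sketch below (Python, with placeholder names video.mp4 and original_hash) shows just that fingerprinting step using a SHA-256 hash; it is not part of any real validation system.

```python
# Illustrative sketch: hash a video file and compare it to a known fingerprint.
# "original_hash" would come from the video's legitimate source; "video.mp4" is a placeholder.
import hashlib

def file_sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MB chunks
            h.update(chunk)
    return h.hexdigest()

original_hash = "..."  # fingerprint registered by the original publisher
if file_sha256("video.mp4") == original_hash:
    print("File matches the registered original.")
else:
    print("File differs -- it may have been edited or re-encoded.")
```

Keep in mind that even a legitimate re-encode changes the hash, which is one reason the validation research mentioned above is more sophisticated than this simple comparison.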
For us mere humans, Fernando 3D offers an extra tip for identifying deepfakes: keep your distrust meter always on. If a video shows a person making a very strange statement, acting out of character, or being outrageous, it is best to seek more information before jumping to conclusions.
https://tecnoblog.net/responde/como-identificar-deepfake-para-fugir-de-informacoes-falsas/