The 2020 documentary Welcome to Chechnya addresses a difficult topic: the persecution of the LGBTQIAP+ population in Chechnya. There is no shortage of reports of torture, and even of concentration camps reserved for gay men, in the Russian republic.
In a context like this, guaranteeing the safety of the people who risked appearing in the film is of paramount importance, and part of that involves hiding their faces. The producers could have kept the interviewees in shadow and distorted their voices, or blurred the image to protect their sources. Instead, they opted for a different path: the deepfake.
The technology, closely associated with pornography and misinformation, found here a very peculiar use. And a noble one. The production recruited activists willing to lend their faces to the Chechen men and women at risk; when the interviewees appear on screen, it is behind this mask that their identities are preserved.
This is one of the positive examples of deepfake use. It helps show that the technology does not live on harmful applications alone. In fact, from its origins, deepfake techniques were conceived for perfectly ordinary purposes, far removed from the ones that earned them their bad reputation.
A brief history of deepfakes
There is no single creator behind deepfakes. The field evolved alongside academic research in artificial intelligence and machine learning, and the first solid example of what the technique could do is more than twenty years old: Video Rewrite, released in 1997.
The program, developed by researchers, could modify existing video based on new audio excerpts, creating versions in which the person on film said things different from what they had originally said. That is the basic principle of the deepfake, as we can see.
But what were the intentions behind this experiment? Dubbing and special effects are cited among the possibilities. In other words, perfectly reasonable goals, nothing like the videos in which famous actresses' faces are placed in pornographic contexts.
That would only become an issue years later, because Video Rewrite was not followed by a wide diffusion of the technology; the quality of the results was still mediocre, to say the least. So it remained until 2014, when GANs (Generative Adversarial Networks) emerged.

A GAN is a system in which two neural networks compete with each other. The first, the generator, creates variations from a starting point (typically random noise); the second, the discriminator, evaluates them and tries to certify that they are fake. This happens over thousands, sometimes millions, of iterations. When the second network can no longer spot the forgery, that is, the artificially created artifact in question, the winning object has been produced.
In the case of deepfakes, this object is a fake video or photo, sometimes so convincing that the human eye has trouble spotting it. It was the use of GANs that made the results ever more impressive, and it was their use in pornography and disinformation that began to draw attention to the technology's destructive potential.
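To make the adversarial dynamic concrete, here is a minimal, hypothetical sketch of a GAN in PyTorch (our library choice, not something mentioned in the article). Instead of images it uses a toy 2D distribution so the example stays self-contained, but the tug-of-war between the two networks is the same one that powers deepfakes.

```python
# Minimal GAN sketch: a generator learns to mimic a toy "real"
# distribution while a discriminator tries to tell its samples apart.
import torch
import torch.nn as nn

# Toy "real" data: points from a 2D Gaussian centered at (2, 2).
def real_batch(n):
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

# Network 1: the generator turns random noise into candidate samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
# Network 2: the discriminator scores how "real" a sample looks.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss = nn.BCEWithLogitsLoss()

for step in range(2000):
    # --- Train the discriminator: real -> 1, fake -> 0 ---
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()  # don't update G on this pass
    d_loss = loss(D(real), torch.ones(64, 1)) + \
             loss(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- Train the generator: try to fool D into answering "real" ---
    fake = G(torch.randn(64, 8))
    g_loss = loss(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, G's samples should cluster near (2, 2), well enough
# that D struggles to tell them apart from the real distribution.
print(G(torch.randn(5, 8)))
```

Swap the toy 2D points for image frames and the two small networks for deep convolutional ones, and you have, in essence, the machinery behind convincing face swaps.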
Eternalizing unforgettable voices and faces
But the deepfake is not stuck in the realm of the absurd. We have already mentioned dubbing as another possibility it opens up, but we can go further. Fernando Rodrigues de Oliveira, better known as Fernando 3D and one of the founders of FaceFactory, pointed out several paths in Tecnocast 268.
Cinema is one of the fields that can benefit most from deepfakes (or "synthetic media", the term 3D prefers). Reshooting a scene becomes possible even without the physical presence of the actor or actress, for example. And as the technology keeps evolving, demand for digitally rejuvenating certain characters is expected to grow; just look at the appearances of a young Luke Skywalker in the Star Wars series.
The application in cinema can reach levels close to science fiction. Recently, a rumor emerged that Bruce Willis had sold his image to a Ukrainian company, which would keep him "acting" via deepfake after his retirement. The news was later denied, but it is not absurd to imagine a (perhaps near) future in which cinema icons never disappear from the screens. According to Fernando, it is possible.
And the same goes for voices. James Earl Jones, the iconic actor who voiced Darth Vader, will have his voice reproduced in future movies and series thanks to artificial intelligence. Fernando 3D calls this technology deepvoice, and it could keep iconic voices among us.
![Star Wars is one of the franchises that can benefit from deepfake [Crítica & Fãs] / Disney+ / Disclosure](https://files.tecnoblog.net/wp-content/uploads/2020/11/Star-Wars-no-Disney-1-e1605826099663-1060x597.jpg)
This even includes voice actors. An example cited by Fernando is Isaac Bardavid, Wolverine's official voice in Brazil, who died this year. Using the many samples of his voice available in movies and cartoons, an AI could learn it and apply it over another voice actor's performance.
How can we bring Isaac back? A voice actor will do the voice part, which we call the guiding voice, (…) and then we will fit Isaac's voice onto the current voice actor's guiding voice.
Fernando 3D, co-founder of FaceFactory
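Conceptually, the guiding-voice workflow Fernando describes resembles what the research literature calls voice conversion: one model extracts what is being said from the guiding take, another captures the timbre of the target voice, and a decoder combines the two. The sketch below is a schematic illustration under those assumptions; the VoiceConverter class, its architecture, and its dimensions are hypothetical, not FaceFactory's actual pipeline.

```python
# Schematic voice-conversion sketch (hypothetical, for illustration):
# encode WHAT is said from the guiding voice, encode WHO is speaking
# from archive samples of the target voice, and decode the combination.
import torch
import torch.nn as nn

class VoiceConverter(nn.Module):
    def __init__(self, n_mels=80, content_dim=64, speaker_dim=32):
        super().__init__()
        # Content encoder: compresses the guide recording into
        # features meant to capture phonetics, not timbre.
        self.content = nn.GRU(n_mels, content_dim, batch_first=True)
        # Speaker encoder: averages target-voice frames into a single
        # embedding that captures timbre (the "Isaac" component).
        self.speaker = nn.Linear(n_mels, speaker_dim)
        # Decoder: re-synthesizes spectrogram frames from content + timbre.
        self.decoder = nn.GRU(content_dim + speaker_dim, n_mels,
                              batch_first=True)

    def forward(self, guide_mel, target_mel):
        content, _ = self.content(guide_mel)        # (B, T, content_dim)
        spk = self.speaker(target_mel).mean(dim=1)  # (B, speaker_dim)
        spk = spk.unsqueeze(1).expand(-1, content.size(1), -1)
        out, _ = self.decoder(torch.cat([content, spk], dim=-1))
        return out                                  # converted mel frames

# Usage with dummy mel-spectrograms: 1 clip, 80 mel bins per frame.
model = VoiceConverter()
guide = torch.randn(1, 200, 80)   # the new voice actor's guiding take
target = torch.randn(1, 500, 80)  # archive samples of the target voice
converted = model(guide, target)
print(converted.shape)            # torch.Size([1, 200, 80])
```

In a real system the decoder's output would still pass through a vocoder to become audio, and the model would be trained on hours of speech; the point here is only the separation between the guiding performance and the voice being fitted onto it.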
It remains to be seen whether the industry will embrace the idea. Maybe it’s simpler and more cost-effective to just find a talented new voice actor. But it’s not crazy to imagine a future where these remarkable voices remain among us. After all, nostalgia is an important asset in entertainment.
Ultimately, what these examples show is that there is room to use deepfakes in reasonable and creative ways. After all, no technology is irrevocably destined for evil; human creativity can also put it to work for good.