Anyone who has seen The Mandalorian is familiar with deepfakes. Its young Luke Skywalker, realistic almost to the point of being eerie, looks like Mark Hamill lifted straight from the 1970s. Though not yet perfect, the technology behind young Luke, and behind deepfakes as a whole, is developing quickly.
"Deepfake" is a term for a variety of computer-generated images, usually of people. They are made using a pair of neural networks, systems of machine learning that are "trained" on examples in order to learn a specific task; this pairing is known as a generative adversarial network, or GAN. In this case, both networks are trained on a variety of human faces of many different ages, races, and genders. One network, called the "generator", produces a group of synthetic faces. These faces are then sent to the other network, the "discriminator", which compares the generated faces to real human faces and judges which look realistic. The discriminator feeds this judgment back to the generator, which then produces more realistic output. Researchers found that over time, the discriminator becomes unable to distinguish between real and generated faces.
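The adversarial loop described above can be sketched in miniature. The toy below is an illustrative assumption, not any real face-generation system: the "faces" are just numbers drawn from a 1-D Gaussian, and both networks are single affine layers trained with plain gradient steps. The structure, however, is the real one: the discriminator is pushed to label real data 1 and generated data 0, while the generator is pushed to make the discriminator output 1 on its samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data: a 1-D Gaussian standing in for real face images.
    return rng.normal(loc=4.0, scale=0.5, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: maps random noise z to a sample (one affine layer).
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)
# Discriminator: maps a sample to a probability of being "real".
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

lr = 0.05
for step in range(2000):
    z = rng.normal(size=(32, 1))
    fake = z @ g_w + g_b
    real = real_batch(32)

    # Train the discriminator: push D(real) -> 1 and D(fake) -> 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(x @ d_w + d_b)
        grad_logit = p - label          # cross-entropy gradient at the logit
        d_w -= lr * (x.T @ grad_logit) / len(x)
        d_b -= lr * grad_logit.mean(0)

    # Train the generator: push D(fake) -> 1, i.e. fool the discriminator.
    fake = z @ g_w + g_b
    p = sigmoid(fake @ d_w + d_b)
    grad_fake = (p - 1.0) @ d_w.T       # backpropagate through D into the samples
    g_w -= lr * (z.T @ grad_fake) / len(z)
    g_b -= lr * grad_fake.mean(0)

print(fake.mean())  # the generated samples drift toward the real distribution
```

Real deepfake generators follow the same feedback pattern, only with deep convolutional networks producing full images instead of one-number "faces".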
Previously, deepfakes usually fell into the "uncanny valley": faces that are almost human, but not quite, and so come across as robotic and synthetic. As the technology has advanced, however, synthetic faces have become far more realistic, at times rivaling real people in their likeness.
A recent study details this, showing that AI-synthesized faces are not only indistinguishable from real ones but that people tend to find them more trustworthy. The study separated its participants into groups: one was asked to decide which of a pair of faces was real and which was synthetic; a second received training on how to identify synthetic faces and was then asked to do the same; and a third was asked to rate the trustworthiness of synthetic and real faces. The study found that people could distinguish real faces from synthetic ones only about 48 percent of the time, a figure that improved by a mere 10 percent after training. The third group, overall, rated the synthetic faces as slightly more trustworthy than the real ones.
The researchers pointed to the rapid advancement of neural-network technology as the primary reason for their results. As for trustworthiness, humans tend to see symmetrical faces as more trustworthy, and even more attractive, and AI can produce symmetry more consistently than our genetics can. The effect is strong enough to counteract the uncanny valley, fooling even people who have been trained to identify synthetic faces.
Scientists in the field are quick to point out the many problems that could arise as deepfake technology advances. Uses like The Mandalorian are harmless, but the same tools could easily be turned to darker purposes: replicating a prominent figure or politician for political misinformation, fraud, or blackmail. Humans are experts at facial recognition, wired to see faces everywhere, but if we can no longer tell real faces from machine-generated ones, many foreseeable problems follow.
As the technology progresses, more research is being done to ensure we have reliable ways of distinguishing real faces from synthetic ones. One such idea is to add watermarks to synthetic faces so that they can later be identified as such, addressing the problems this new technology may bring along with its benefits.
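To make the watermarking idea concrete, here is a deliberately simple sketch: hiding a short tag in the least significant bits of the first few pixels of an image. This is a hypothetical illustration only; the article does not describe any specific scheme, and real proposals use far more robust marks (often embedded by the generator network itself) that survive compression and editing, which this toy version would not.

```python
import numpy as np

# Hypothetical 8-bit tag marking an image as synthetic.
MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed(image: np.ndarray) -> np.ndarray:
    """Hide MARK in the lowest bit of the first 8 pixels (imperceptible change)."""
    out = image.copy()
    flat = out.reshape(-1)
    flat[:8] = (flat[:8] & 0xFE) | MARK  # clear the low bit, then write the tag
    return out

def is_marked(image: np.ndarray) -> bool:
    """Check whether the lowest bits of the first 8 pixels spell out MARK."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[:8] & 1, MARK))

# A tiny stand-in "face" image of 8-bit pixel values.
face = np.random.default_rng(1).integers(0, 256, size=(4, 4), dtype=np.uint8)
tagged = embed(face)
print(is_marked(tagged))
```

The appeal of the approach is that the mark changes each pixel by at most one intensity level, invisible to a viewer, while a detector that knows where to look can flag the image as synthetic.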