Even if you think you are good at analysing faces, research shows many people cannot reliably distinguish photos of real faces from images that have been computer-generated.
This is particularly problematic now that computer systems can create realistic-looking photos of people who do not exist.
In one recent example, a fake LinkedIn profile with a computer-generated profile picture made the news because it successfully connected with US officials and other influential individuals. Counter-intelligence experts even say that spies routinely create phantom profiles with such pictures to home in on foreign targets over social media.
These deepfakes are becoming widespread in everyday culture, which means people should be more aware of how they are being used in marketing, advertising and social media. The images are also being used for malicious purposes, such as political propaganda, espionage and information warfare.
Making them involves something called a deep neural network, a computer system that mimics the way the brain learns. This is "trained" by exposing it to increasingly large datasets of real faces.
In fact, two deep neural networks are set against each other, competing to produce the most realistic images. As a result, the end products are dubbed GAN images, where GAN stands for Generative Adversarial Networks. The process generates novel images that are statistically indistinguishable from the training images.
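The adversarial setup described above can be sketched in miniature. The following is an illustrative toy, not a face generator: a tiny affine "generator" learns to imitate samples from a 1D Gaussian standing in for the "real faces" dataset, while a logistic "discriminator" is simultaneously trained to tell real samples from generated ones. The distribution, parameters and learning rate are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    # Clip to avoid overflow in exp for large |u|
    return 1.0 / (1.0 + np.exp(-np.clip(u, -60.0, 60.0)))

def sample_real(n):
    # Toy "real data": samples from N(4, 1), standing in for real faces
    return rng.normal(4.0, 1.0, n)

# Generator: affine map of noise, g(z) = a*z + b
a, b = 1.0, 0.0
# Discriminator: logistic classifier, d(x) = sigmoid(w*x + c)
w, c = 0.1, 0.0

lr = 0.02
for step in range(5000):
    z = rng.normal(0.0, 1.0, 64)
    x_real = sample_real(64)
    x_fake = a * z + b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: push d(fake) toward 1, i.e. fool the discriminator
    d_fake = sigmoid(w * (a * z + b) + c)
    a -= lr * np.mean(-(1 - d_fake) * w * z)
    b -= lr * np.mean(-(1 - d_fake) * w)

# After training, generated samples should drift toward the real distribution
fake = a * rng.normal(0.0, 1.0, 10000) + b
print(f"fake mean ~= {fake.mean():.2f}, fake std ~= {fake.std():.2f}")
```

The competition is the key idea: neither network sees a definition of "realistic", yet the generator's output is pulled toward the real data purely because the discriminator keeps finding, and the generator keeps erasing, whatever differences remain. Real GANs replace the two affine maps with deep networks and the 1D samples with images.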
In our study published in iScience, we showed that the inability to detect these artificial faces has consequences for our online behaviour. Our research suggests that fake images can erode our trust in others and profoundly change the way we communicate online.
My colleagues and I found that people perceived GAN faces to be even more real-looking than genuine photographs of actual people's faces. While it is not yet clear why this is, the finding does highlight recent advances in the technology used to create artificial images.
We also found an interesting link to attractiveness: faces that were rated as less attractive were also rated as more real.
Less attractive faces might be considered more typical, and the typical face may serve as a reference against which all other faces are evaluated. These GAN faces may therefore look more real because they closely resemble the mental templates that people have built up from everyday life.
But perceiving these artificial faces as authentic may also affect the general levels of trust we extend to a circle of unfamiliar people, a concept known as "social trust".
We often read too much into the faces we see, and the first impressions we form guide our social interactions. In a second experiment that formed part of our latest study, we found that people were more likely to trust information conveyed by faces they had previously judged to be real, even if those faces were artificially generated.
It is perhaps unsurprising that people place more trust in faces they believe to be real. But we found that trust eroded once people were informed about the potential presence of artificial faces in online interactions. They then showed lower levels of trust overall, regardless of whether the faces were real or not.
This outcome could be considered useful in some ways, because it made people more suspicious in an environment where fake users may operate. From another perspective, however, it may gradually erode the very nature of how we communicate.
In general, we tend to operate on a default assumption that other people are basically honest and trustworthy. The growth of fake profiles and other artificial online content raises the question of how far their presence, and our knowledge of them, may alter this "truth default" state, eventually eroding social trust.
Shifting our defaults
The transition to a world where what is real is indistinguishable from what is not could also shift the cultural landscape from being primarily truthful to being primarily artificial and deceptive.
If we are regularly questioning the truthfulness of what we experience online, it might require us to redeploy our mental effort from processing the messages themselves to processing the identity of the messenger. In other words, the widespread use of highly realistic yet artificial online content could require us to think differently, in ways we had not expected.
In psychology, we use the term "reality monitoring" for how we correctly identify whether something comes from the external world or from within our own brains. The advance of technologies that can produce fake but highly realistic faces, images and videos means reality monitoring must be based on information other than our own judgments.
It also calls for a broader discussion about whether humankind can still afford to default to truth.
It is crucial for people to be more critical when evaluating digital faces. This can include using reverse image searches to check whether photos are genuine, being wary of social media profiles with little personal information or large numbers of followers, and being aware of the potential for deepfake technology to be used for nefarious purposes.
The next frontier for this area should be improved algorithms for detecting fake digital faces. These could then be embedded in social media platforms to help us distinguish the real from the fake when it comes to the faces of new connections.
Manos Tsakiris, Professor of Psychology and Director of the Centre for the Politics of Feelings, Royal Holloway University of London.
This article is republished from The Conversation under a Creative Commons license. Read the original article.