Video games are full of amazing characters, but creating them is an incredibly difficult task. Designers have to control a huge number of variables, and the smallest detail can destroy the natural look of a character. One of the main problems they face is the eerie feeling elicited when a realistic virtual human starts showing unexpected attributes. An unusually high-pitched voice or an exaggerated smile doesn’t just break the illusion of realism; it makes the user feel uneasy and, sometimes, scared. While important, the problem hasn’t paralyzed progress in the field. Designers have managed to control the effect fairly well and have even used it in the creation of horror movies and games. The subject has implications that go far beyond the entertainment industry, affecting the development of robots and prosthetic limbs. This is why scientists from different areas have started studying it, and they have come up with some very interesting findings.
In a series of studies, researchers from the University of Bolton examined whether different levels of audio-video asynchrony, as well as facial expressiveness, were capable of producing a feeling of eeriness in the user. They did this by creating models that were identical in every way except for one of these attributes, and then asking participants to rate how familiar (as opposed to strange) and human they looked.
First, they designed an experiment to see how the level of synchronization between a character’s mouth movement and voice affected the way it was perceived. 113 male undergraduate students watched video clips of a human or a virtual character (Barney from Half-Life 2) speaking. In each clip, the voice was either synchronized with the mouth movement or came 200 or 400 milliseconds before or after it. Participants then had to rate each video for familiarity and human-likeness. Results showed that, regardless of the character type (human or virtual), asynchrony had a significant negative effect on how familiar and human-like the character was perceived to be. The effect was significantly stronger when the voice came before the movement than when it came after (Tinwell, Grimshaw & Abdel Nabi, in press). One possible explanation is that asynchrony forced listeners to decode the speech from both the voice and the mouth movement at the same time, making it harder to understand. This could have made the character’s behavior look unnatural and, therefore, unsettling. It’s also possible that the interference was greater when the sound came before the video, increasing the effect on uncanniness. It’s important to note that the relationship was only present at ±200 milliseconds, disappearing at ±400 milliseconds. This suggests that asynchrony can affect the familiarity and human-likeness of a character, but only as long as it stays within a specific range (Tinwell, Grimshaw & Abdel Nabi, in press).
While the previous study suggests that asynchrony between voice and mouth movement could be used to make a character feel eerie, developers planning to use this technique should also remember how irritating desynchronized audio can be for the user. Still, more recent studies seem to have found more practical ways to provoke an uncanny feeling.
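To make the manipulation concrete, here is a minimal sketch of how such an offset could be applied to an audio track stored as a sample array. The function name, the NumPy representation, and the sign convention are assumptions for illustration, not the method used in the study:

```python
import numpy as np

def offset_audio(audio: np.ndarray, offset_ms: float,
                 sample_rate: int = 44100) -> np.ndarray:
    """Shift an audio track relative to its video.

    Hypothetical helper: a positive offset_ms delays the voice so it
    comes after the mouth movement; a negative value makes it lead.
    The array length is preserved by zero-padding.
    """
    shift = int(round(offset_ms / 1000.0 * sample_rate))
    out = np.zeros_like(audio)
    if shift > 0:          # delay: audio starts later
        out[shift:] = audio[:len(audio) - shift]
    elif shift < 0:        # lead: audio starts earlier
        out[:shift] = audio[-shift:]
    else:
        out[:] = audio
    return out
```

At 44,100 Hz, the ±200 ms offsets that produced the effect correspond to a shift of about 8,820 samples; the ±400 ms offsets, where the effect disappeared, to about 17,640.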
In 2011, the same team of researchers analyzed how exaggerated facial expressions affected the recognition and perception of different emotions. They did this by presenting 40 male university students with footage of either a real human or a virtual character (again, Barney) expressing different emotions: anger, disgust, fear, happiness, sadness, surprise or a neutral state. The virtual character showed either normal or exaggerated levels of expressiveness. After watching the clips, participants had to match each one to one of the listed emotions. They also had to rate the characters for familiarity and human-likeness. According to the results, anger was perceived as significantly more familiar when facial expressions were exaggerated. This shouldn’t come as a surprise, as more intense facial features simply increase the intensity of the anger expressed, making it easier to recognize. Happiness, on the other hand, was rated as significantly less familiar and human-like than all the other emotions, which suggests that it is a particularly difficult emotion to portray. In the case of the virtual character, this could have something to do with the limited number of polygons available, which reduces the system’s capacity to create the wrinkles and bulges that should appear around the eyes. The result is a character featuring a normal smile while the eyebrows remain almost still. One way to solve this problem could be to reduce the intensity of the smile until it matches the expressiveness of the upper facial region. This limitation could also explain why another emotion that relies heavily on the eyebrows, disgust, was perceived as significantly less human-like in the virtual character than in the human (Tinwell, Grimshaw & Abdel Nabi, 2011).
The study shows how the dependence of each emotion on different facial cues (like the eyebrows or the smile) explains why an increase in the intensity of an expression has a different effect in each case. More importantly, it teaches us that altering some of the facial features involved in the portrayal of an emotion could be enough to provoke an eerie feeling.
To further understand the relationship between facial expressiveness and uncanny characters, the team designed an experiment to study the effects of a reduction in facial movement. 129 male university students watched videos of either a real human or a virtual character expressing anger, fear, disgust, happiness, sadness, surprise or a neutral state. The virtual character was either fully animated or lacked movement in the upper part of its face (the eyebrows and eyelids remained still). After watching the videos, participants had to rate each one for familiarity and human-likeness. They were also asked to select which of the mentioned emotions matched the one portrayed by the model. Results showed that when eyebrows and eyelids remained still, characters expressing fear, sadness, disgust and surprise felt significantly less familiar and human-like (in other words, eerie). In the case of anger, however, the lack of upper facial movement produced a significant reduction only in familiarity (Tinwell, Grimshaw, Williams & Abdel Nabi, 2011). This means that, while the character looked non-human, it did not elicit a feeling of strangeness (like a friendly-looking puppet or robot). Happiness was the only emotion for which the lack of facial movement had no significant effect on either familiarity or human-likeness. This could suggest that neither the eyebrows nor the eyelids are particularly relevant for the expression of this emotion. Another explanation could be that the role of the smile is so important that it overshadows the influence of any other facial feature (Tinwell, Grimshaw, Williams & Abdel Nabi, 2011).
The study allows us to conclude that the absence of movement in specific parts of the face can, depending on the emotion portrayed, make the user experience an eerie feeling. Again, this seems to have something to do with how each emotion is associated with different facial cues. Overall, research indicates that changes in the prominence of the features involved in the expression of an emotion can, in some instances, provoke a feeling of uncanniness. But why does this happen?
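For a sense of how the still-eyebrows condition translates to a character rig, here is a minimal sketch that zeroes the upper-face animation channels of a frame. The channel names and the dictionary-of-weights representation are hypothetical; real rigs and blend-shape sets differ:

```python
# Hypothetical upper-face blend-shape channel names; real rigs
# use their own naming schemes.
UPPER_FACE = {"brow_raise_l", "brow_raise_r", "brow_furrow",
              "eyelid_close_l", "eyelid_close_r"}

def freeze_upper_face(frame_weights: dict) -> dict:
    """Return a copy of a frame's blend-shape weights with every
    upper-face channel zeroed, mimicking the condition in which
    eyebrows and eyelids remained still."""
    return {name: 0.0 if name in UPPER_FACE else weight
            for name, weight in frame_weights.items()}
```

Applied per frame, this leaves the mouth (and the smile) fully animated while the upper face stays frozen, which is exactly the asymmetry the study manipulated.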
Tinwell argues that a failure to properly assess the emotions of others prevents us from accurately predicting their behavior. This turns them into a potential threat, triggering the uncanny feeling (Tinwell, Grimshaw, Williams & Abdel Nabi, 2011). To test this hypothesis, the team designed an experiment focused on how the quality of a facial expression affected the detection of personality traits associated with unstable behavior. 205 male and female undergraduate students watched videos of a human (a man or a woman) or a virtual character (Barney or Alyx from Half-Life 2) screaming. The virtual characters showed either full animation or a lack of movement in the forehead, eyebrows and eyelids. Participants then had to rate the models for eeriness, nonhuman-likeness, repulsiveness, unattractiveness, un-likeability and unresponsiveness. They also had to indicate whether the models showed any of six personality traits associated with psychopathy (angry, cold personality, dominant, uncaring, unconcerned and untrustworthy) and six negative traits not associated with the disorder (anxiety, shame, depression, hopelessness, nervousness and self-consciousness). According to the results, male models were perceived as significantly more eerie than female ones (at least in the human and fully animated conditions). This is congruent with the fact that psychopathy is more frequent among men; it’s possible that participants shared this belief and saw men as a more probable source of danger (Tinwell, Abdel Nabi & Charlton, 2013). The absence of this effect for partially animated characters, however, could indicate that upper facial movement was necessary for the tendency to manifest. The most important result, nevertheless, was that the perceived presence of psychopathic traits made the characters feel significantly more eerie than negative traits not associated with the disorder.
In other words, facial expressions that made the character look more psychologically unstable promoted an uncanny feeling in the user (Tinwell, Abdel Nabi & Charlton, 2013).
These results support the hypothesis that a reduction in our capacity to predict someone’s behavior makes them look like a potential threat, provoking the feeling of uncanniness (Tinwell, Abdel Nabi & Charlton, 2013). But there could be another explanation. From an evolutionary perspective, identifying a threat is important; but recognizing when an apparently irrelevant stimulus is being used to hide a source of danger could be even more significant, as it would indicate that we have fallen victim to a trap (like a predator hiding behind moving foliage). This could explain why everyday objects or individuals that are partially recognized but fail to fulfill the requirements for complete identification tend to provoke an eerie feeling.
The reviewed studies help us understand how and why the experience of uncanniness occurs. Still, the limitations of these experiments must be addressed. The authors themselves, for example, have noted that the term “familiarity” (used by them as an antonym for strangeness) could be confused with “popularity”. Characters perceived as strange could thus still be rated as highly familiar, depending on how famous participants thought they were (Tinwell, Grimshaw & Abdel Nabi, 2011). Another issue could be the use of self-reports to measure the subjects’ experience. Although widely used, the technique depends heavily on individuals’ capacity to accurately understand and describe their own feelings. The problem becomes even more complicated when the emotion assessed is one as difficult to define as uncanniness.
Limitations aside, it’s undeniable how much more we now know about the psychology of uncanniness. Unpredictable behavior and hidden intentions appear to be among the most probable causes. In other words, signs of a threat lurking in the shadows seem to be a very reliable trigger for eeriness. This takes us to the following questions: Can this principle be applied to less human characters? Could we create uncanny dolls or pets for our fear-inducing movies and games? Can we manipulate music, sound and lighting to make the viewer or player feel even more uneasy? The obvious answer is that we need more research. The good news is that, as we have seen, there are groups of scientists willing to dedicate their time and expertise to this fascinating subject.
Tinwell, A., Abdel Nabi, D. & Charlton, J. (2013). Perception of psychopathy and the Uncanny Valley in virtual characters. Computers in Human Behavior, vol. 29, no. 4, pp. 1617–1625.
Tinwell, A., Grimshaw, M. & Abdel Nabi, D. (in press). The effect of onset asynchrony in audio-visual speech and the Uncanny Valley in virtual characters. International Journal of the Digital Human.
Tinwell, A., Grimshaw, M. & Abdel Nabi, D. (2011). Effect of emotion and articulation of speech on the Uncanny Valley in virtual characters. In: Proceedings of the Affective Computing and Intelligent Interaction 2011 Conference, Memphis, TN, USA, pp. 557–566.
Tinwell, A., Grimshaw, M. & Williams, A. (2011). The Uncanny Wall. International Journal of Arts and Technology, vol. 4, no. 3, pp. 326–341.
Tinwell, A., Grimshaw, M., Williams, A. & Abdel Nabi, D. (2011). Facial expression of emotion and perception of the Uncanny Valley in virtual characters. Computers in Human Behavior, vol. 27, no. 2, pp. 741–749.