Virtual Human Interaction Lab – Stanford University

{ Projects }

Using Avatars to Reduce Energy Use

Chopping down a virtual tree

There are many reasons why Virtual Reality will reduce energy use, such as the much-discussed proposition that virtual conferencing will reduce travel via airplanes and automobiles. However, we are taking a more active approach. A number of studies at VHIL are using avatars and virtual reality simulations to encourage people to reexamine their personal energy behavior. For example, VR can make the relationship between energy use and environmental consequences less abstract. One study showed that subjects who were forced to saw down virtual trees later (in the physical world) used less paper when cleaning up an accidental water spill, as depicted in this video. Similarly, we are using avatars to teach about climate change. The abstract nature of climate change (slow, gradual, and nonlinear consequences; gases that cannot be perceived; a lack of information about the impact of specific behaviors) can be uniquely portrayed by virtual simulations in which the invisible becomes visible (e.g., carbon molecules) and centuries can pass by quickly. In another line of studies, funded by the Department of Energy, we are using social cognitive theory techniques to reduce the amount of heat and water people use during showers. The "experiential" aspects of Virtual Reality offer a unique tool for teaching about the consequences of one's energy use. [publications]

Avatars and Behavioral Modeling

Avatar Creation

Virtual reality enables us to create a powerful and persuasive stimulus: the virtual self. Using digital photographs, we can create avatars that bear a striking resemblance to the self. We can then manipulate the virtual self in myriad ways that would be difficult or even impossible in the real world. The virtual self can modify its appearance or perform a behavior that the real self cannot, thus serving as a novel type of model. According to social cognitive theory, models can be valuable stimuli for encouraging the imitation of particular behaviors. Thus, we are investigating how using self-models and virtually manipulating social cognitive constructs such as identification, self-efficacy, and vicarious reinforcement can influence imitation, particularly in the context of health and consumer behaviors. Is seeing the virtual self engage in a healthful activity more or less effective than seeing a virtual other do so? When an avatar demonstrates the positive benefits of using a product in the third person, does the consumer then go out and buy that product? Can behaviors be encouraged by seeing the virtual self model health-related rewards and punishments such as weight loss and weight gain? [publications]

Digital Footprints: What Your Virtual Actions Reveal About Your Physical Self

Digital Footprints

Any time people use the Internet, they leave a digital record behind (think "cookies" on browsers). Similarly, but in much greater detail, any time people enter virtual reality, they leave a "digital footprint"—all the data the computer automatically collects. This can include speech, nonverbal behavior, and location. Footprints can be used (and, in fact, are being used) by the military, industry, educators, and other organizations to detect who you are, what you are doing, and even what you plan on doing later. We are using a variety of tracking devices to predict identity and behavior, such as cameras that capture facial expressions, videogame devices such as the Kinect that can capture body gestures, and online virtual worlds such as Second Life that archive all of your actions. For example, in our Second Life study we demonstrated that footprints can be used to predict personality. In our 'Driving Project,' we demonstrated that facial geometry features, especially features involving the eyes and mouth, can be used effectively as predictors of poor driving behavior and can identify accidents two seconds before they occur. These machine learning classifiers could be incorporated into advanced driver warning systems for improved vehicle safety. In our 'Online Shopping Project,' we demonstrated that the face can predict buyer intent, opening up possibilities for commercial applications. In our 'Monitoring Operator Fatigue' study, we demonstrated that facial movements can accurately predict operator errors, fatigue level, and learning rates during a repetitive motor task. In essence, while one can hide behind an avatar of a different name or appearance, the massive amount of data stored in the digital footprint can still reveal much information. Moreover, this data can be used to improve educational systems, commerce, and all forms of social interaction. [publications]
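To make the idea concrete, here is a minimal sketch of the kind of classifier such studies train: facial-feature vectors labeled with an operator state, classified by nearest centroid. The feature names, data, and method are illustrative assumptions, not the lab's actual pipeline.

```python
# Hypothetical sketch: predicting operator state from facial-feature
# vectors, in the spirit of "digital footprint" classifiers.
# Features, labels, and data below are invented for illustration.

def centroid(rows):
    """Mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fit_centroids(samples):
    """samples: dict mapping label -> list of feature vectors."""
    return {label: centroid(rows) for label, rows in samples.items()}

def predict(centroids, x):
    """Assign x the label of the closest centroid (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], x))

# Toy training data: [eye_openness, mouth_motion] per video frame.
training = {
    "alert":    [[0.90, 0.20], [0.80, 0.30], [0.85, 0.25]],
    "fatigued": [[0.40, 0.05], [0.30, 0.10], [0.35, 0.08]],
}
model = fit_centroids(training)
print(predict(model, [0.82, 0.28]))  # near the "alert" cluster
print(predict(model, [0.33, 0.07]))  # near the "fatigued" cluster
```

A production system would of course use richer features (full facial geometry over time) and a stronger learner, but the structure (feature extraction, training, frame-by-frame prediction) is the same.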

The Proteus Effect

Mirror World Screenshot

Cyberspace grants us great control over our self-representations. At the click of a button, we can alter our avatars' gender, age, attractiveness, and skin tone. But as we choose our avatars online, do our avatars change us in turn? In a series of studies, we've explored how putting people in avatars of a different age, race, gender, attractiveness, or height changes how they behave in a virtual environment and also in subsequent face-to-face interactions. [publications]

Transformed Social Interaction

Virtual Meeting Screenshot

In collaboration with the Research Center for Virtual Environments and Behavior, we are interested in the experience of social presence as well as task performance within collaborative virtual environments. We are utilizing virtual reality simulations in which people interact in real time within a collaborative virtual environment. Specifically, we seek to: 1) learn more about the behaviors that occur during collaboration, and 2) explore the idea of transforming social interaction by selectively augmenting and decrementing these behaviors in order to provide the interactants with novel tools during interaction. In other words, by selectively rendering behaviors that were not actually performed, or alternatively by not rendering behaviors that were in fact performed, immersive virtual environments allow for conversational strategies that are not possible in face-to-face interactions or videoconferencing. We are examining the effect of implementing these novel strategies and testing their influence on conversation in terms of task performance, learning, and persuasion. See our Wikipedia entry on TSI. [publications]

Avatar Identity

Bush/Kerry Morph Spectrum

What are the implications of having an avatar, that is, a digital model that represents you in virtual reality? We are studying the ties that individuals have to an avatar. Specifically, how much does an avatar need to resemble (both visually and behaviorally) its respective owner in order for person-specific influences to take effect? When does this digital representation stop being George Bush and start being John Kerry?

Using a variety of affective, behavioral, and cognitive measures, we are exploring the phenomenon of the virtual self and examining the implications of avatar representation. [publications]

Learn how Kerry could have won the election through facial identity capture.

Learning in Immersive VR

Virtual Classroom Screenshot

In collaboration with Berkeley's CITRIS lab, we are exploring how immersive virtual reality extends the benefits of video learning, allowing the user to enter the same world as the teacher. First, immersive settings allow users to see in full three dimensions, greatly increasing detail, presence (i.e., learners feel psychologically as if they are in the digital learning environment, as opposed to the physical space), and social presence (i.e., they feel as if the digital reconstruction of the instructor is a real person). Second, as opposed to stationary video, immersive virtual settings allow users to control how they view the environment by letting them change aspects such as camera position and orientation, even allowing a real-time disconnect between their own representation and their point of view. Third, video settings only allow users to watch the instructor; immersive virtual reality allows the user to interact with the instructor and the environment, as well as to perform novel functions such as sharing body space with the instructor during learning. In the first experiment completed using this paradigm, we demonstrated that people learned more in the immersive virtual reality system than in the 2D video system, as measured both by learners' subjective self-reports and by more objective measures involving expert coders rating the learners performing the tai chi moves later in physical space. [publications]

Avatars in Second Life

Second Life Screenshot

Despite the incredible popularity of the online virtual world Second Life, there is really no empirical data measuring a) what exactly people do inside of SL, and b) the effects of interacting via avatars over time. Our longitudinal (eight-week), large-sample (80 participants) study examined the influence of avatar appearance on virtual and offline behavior. At the start of the study, experimenters gave each participant an assigned avatar shape (tall, short, overweight, or opposite gender), L$1000, and a scripted object which would track online behavior such as chat content, animation use, and locations visited. Factors such as major, programming experience, and gender were split as evenly as possible between the four conditions. For six weeks, participants spent a minimum of six hours actively participating in Second Life activities. At the conclusion of each week, participants completed a web-based questionnaire which gathered information about real-world activities and attitudes as well as reactions to the past week's Second Life experiences. Our data shed light not only on the behaviors which occur in SL, but also on the effect those behaviors have on the users' "first life". Moreover, by tracking changes to their avatars' appearance, we can begin to answer the question of how avatar choice affects psychology and behavior. Researchers interested in accessing our large data set should click here. [publications]
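As a rough illustration of the kind of record such a scripted tracking object could produce, here is a hypothetical event log and a weekly aggregation over it. The field names and events are invented for illustration and are not the study's actual data format.

```python
# Hypothetical sketch of a participant's behavior log, as a scripted
# tracking object might record it. Fields and values are illustrative.

from collections import Counter

log = [
    {"week": 1, "type": "chat",      "detail": "hello!"},
    {"week": 1, "type": "teleport",  "detail": "Help Island"},
    {"week": 1, "type": "animation", "detail": "dance"},
    {"week": 2, "type": "chat",      "detail": "anyone here?"},
]

def weekly_counts(entries, week):
    """Count event types for one week of a participant's log."""
    return Counter(e["type"] for e in entries if e["week"] == week)

print(weekly_counts(log, 1))
# Counter({'chat': 1, 'teleport': 1, 'animation': 1})
```

Aggregates like these could then be joined with the weekly questionnaire responses to relate in-world behavior to offline attitudes.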

Haptic Communication in Social Interaction

Use of Haptic Devices

We are exploring the use of networked digital touch in collaborative virtual environments. Specifically, we are examining how often people touch one another using standard 6DOF and force-feedback devices, how haptic patterns of interactants correlate with other behaviors, attitudes, and personality attributes, and the effect that virtual person-to-person touch has on copresence, trust, and relationship formation. [publications]


Real Interacting with Virtual

The construct of presence has often been used as a metric to evaluate the utility of a virtual environment. While there is no consensus on an exact definition, the general notion concerns the degree to which the user actually feels present in the virtual environment (as opposed to present in the physical world). Related concepts are social presence, the degree to which people feel connected to other people in the virtual world, and self-presence, the degree to which people believe their own avatar is actually them. Despite broad research on the topic of presence, reliable measures are still lacking, and there is much debate about how to quantify the construct. Our research in these areas focuses on developing behavioral measures of these constructs (as opposed to self-report measures), and on determining the relationship between how real a virtual world or avatar looks and/or behaves and the subjective experience of presence. Overall, findings indicate that behavioral measures are more reliable than self-report ones, and that increasing realism can sometimes be counterproductive and result in less subjective presence. [publications]

Homuncular Flexibility

Homuncular Flexibility Diagram

In this line of studies, we are examining a concept first developed by Jaron Lanier called "homuncular flexibility": learning to remap physical degrees of freedom onto digital representations in interactive tasks. The crucial question we are addressing with homuncular flexibility is: can people learn to remap degrees of freedom that are not essential to a task in order to control novel digital actions which are relevant to the task? For example, if the task were using a hand to paint a wall, could a person learn to use one physical hand to control multiple virtual hands, splitting degrees of freedom from the arm to control the XYZ positions of five hands at once? In collaboration with Jaron Lanier and Stanford's LIFE lab, we are examining the conditions in which people learn to remap degrees of freedom onto digital space, including the amount of time it takes to form a mental representation of the remapping, the ability to dual-task both hands at the same time, and the nature of learning tasks that facilitate using remapped information. Currently, we are examining a question raised long ago by the myth of Icarus: can humans fly? With Robin Rosenberg, we are examining the psychological consequences of flying. [publications]
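The one-hand-to-five-hands example above can be sketched as a simple remapping function. This is a minimal illustration under invented assumptions (a spare wrist-roll degree of freedom repurposed to control hand spacing), not the lab's actual implementation.

```python
# Hypothetical degree-of-freedom remapping in the spirit of
# "homuncular flexibility": one tracked physical hand drives five
# virtual hands. The wrist-roll -> spacing mapping is invented.

def remap_hand(pos, wrist_roll, n_hands=5):
    """pos: (x, y, z) of the tracked physical hand, in metres.
    wrist_roll: a rotational degree of freedom not needed for the
    painting task, repurposed to control horizontal hand spacing.
    Returns n_hands (x, y, z) positions for the virtual hands."""
    x, y, z = pos
    spacing = 0.1 + 0.2 * abs(wrist_roll)  # metres between hands
    half = (n_hands - 1) / 2
    return [(x + (i - half) * spacing, y, z) for i in range(n_hands)]

hands = remap_hand((0.0, 1.2, -0.5), wrist_roll=0.0)
print(hands)  # five hands centred on the physical hand, 0.1 m apart
```

The empirical question is then whether, with practice, users can form a stable mental model of mappings like this one and exploit the extra effectors fluently.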

Diversity Simulation

Hotel Mirror Screenshot

Using immersive virtual reality, it is possible for someone to literally experience the world as another person. In other words, someone can ride along as a passenger in another person's perspective within an immersive simulation designed to demonstrate what it is like to walk a mile in another's shoes. We are using these simulations to explore relationships of gender, status, and race, and testing the "extended contact" hypothesis, namely that wearing the face of another in a simulation designed to highlight diversity issues can increase awareness. (see video) [publications]