Virtual Human Interaction Lab – Stanford University

{ Projects }

Empathy at Scale

This project seeks to design, test, and distribute virtual reality interventions that teach empathy. Virtual reality simulations allow learners to experience the life of someone else by “walking a mile” in another person’s shoes. Through the capabilities of the technology, learners can see their appearance and behaviors reflected in a virtual mirror as someone who is different from them, and perceptually experience a scenario from the perspective of any party in a social interaction. Previous studies, including our own work using virtual reality to teach empathy toward people with disabilities, people with a different skin color, and people from different age groups, have demonstrated varying effectiveness of virtual reality in teaching empathy, but those studies suffered from three shortcomings:

  1. Small and homogeneous samples, typically upper-class college students near the age of twenty, limiting researchers’ abilities to draw conclusions across different cultures and communities;
  2. A lack of longitudinal design: most studies do not follow subjects over time, so they cannot determine the lasting effects of treatment; and
  3. A limited range of empathy scenarios, so they cannot isolate the barriers that may preclude motivation to empathize.
This project will collect data from a large, demographically diverse sample of approximately 1,000 participants to test a wide range of empathy scenarios varying in domain (e.g., prejudice, bullying, classroom learning) and in motivational factors that encourage empathy (e.g., immersiveness of the simulation, emotional valence of the treatment, strength of group affiliation). If successful, this project will also examine the effects of multiple virtual reality treatment sessions over six months.

Our academic partner in this work is Dr. Jamil Zaki, founding director of Stanford’s Social Neuroscience Laboratory. This project is funded by the Robert Wood Johnson Foundation. We are also partnering with Sesame Workshop and others to extend our testing and to distribute our empathy simulations.

PBS NewsHour features our empathy research.

Sustainable Behaviors

The lab leverages virtual reality technology to understand and influence environmental behaviors and attitudes. Immersive virtual environment technology places individuals in personalized, vivid environmental scenarios that would otherwise be impossible or dangerous to experience in the physical world. For example, users can embody virtual coral to experience the impact of ocean acidification, or consume virtual coal to understand the amount of energy used during a shower. Our research suggests that immersive virtual reality may be more effective at producing long-term pro-environmental outcomes than less immersive and less interactive media. Ongoing projects involve collaborations with the Woods Institute for the Environment at Stanford University to develop unique educational curricula.

This video from SF Gate focuses on a recent research project that allows a participant to embody a piece of coral and learn about ocean acidification. To read the full article, which touches on other environmental projects at VHIL, see here.

Immersion and Presence

As virtual reality technology moves from laboratories to living rooms, the question “How immersive is enough?” has become uniquely important. For governments and corporations that seek to build systems, it is critical to know exactly how immersive those systems need to be. Inspired by an exhaustive meta-analysis of the qualities that make up an ideal virtual experience, the Immersion at Scale project seeks to determine the degree of immersion required for an ideal virtual experience through the use of mobile virtual reality systems.

This project takes advantage of the flexibility of mobile virtual reality “suitcase” systems to recruit a large national sample. The question of immersion will be explored through three lenses: spatial, social, and learning, each of which places participants in vastly different (and physiologically arousing) virtual environments. Hardware manipulations of interest include field of view (FOV), image persistence, update rate, latency, and tracking level. Data will be collected through optical and magnetic tracking of head and body movement, and physiological sensors will record heart rate and skin conductance.

This video from Smart Planet explores some of the aspects of the hardware and software used at VHIL to create an immersive virtual experience. You can view the original Smart Planet post here.

Learning in Immersive VR

A virtual classroom gives researchers the freedom to conduct experiments with complete control over the actions and appearance of virtual teachers, classmates, and surroundings. In collaboration with researchers from the Graduate School of Education, we are investigating the interacting effects of class subject, learning environment, and classroom makeup on participants’ interest and learning in a virtual class. The virtual world also lets us precisely monitor participants’ behavior in the classroom and look for correlations between these behaviors and learning outcomes.

Prior work in the lab has shown that believing one has had a social interaction in a virtual environment can increase arousal, focus attention, and improve learning. Further, experiments from the non-virtual world have been replicated and expanded in virtual reality, including studying stereotype threat imposed by avatars and examining gestures used in solving math problems.

This video, courtesy of Stanford University, gives a synopsis of how body movements in virtual reality can facilitate learning. To view the video on Stanford’s YouTube channel, see here.

Homuncular Flexibility

In this line of studies, we are examining a concept first developed by Jaron Lanier called "homuncular flexibility": the ability to learn to control novel avatars in interactive tasks. For example, in order to reach farther in virtual space, can users learn to control avatar bodies with extra limbs? How do the way these avatar bodies look and the way they are controlled affect task success, liking, and the sense of presence? In collaboration with Jaron Lanier, we are examining the conditions under which people learn to remap degrees of freedom onto digital space, including the amount of time it takes to form a mental representation of the remapping, the ability to add limbs, and the nature of the learning tasks that facilitate using remapped information. Previous work with Robin Rosenberg examined the psychological consequences of flying. We are currently examining the effects of controlling multiple limbs, and asking whether users can control multiple bodies at once (one-to-many).

This video gives an account of the Virtual Superhero study, which examined the psychological consequences of flying. For the full Stanford News article, see here.


Interpersonal Synchrony

In this line of studies, we are examining the importance of nonverbal communication in interpersonal interaction, focusing on interpersonal synchrony in particular. To examine these interactions in a naturalistic environment (without using trackers), we are using multiple inexpensive video game sensors, such as the Microsoft Kinect, to track the movements of participants engaged in two-person tasks. Recent publications, featured in Stanford News, include work on teaching and learning and on collaborative creative ideation.
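As a rough illustration of what "measuring interpersonal synchrony" can mean in practice (this is a minimal sketch, not the lab's actual analysis pipeline), one common approach is to correlate two movement time series, such as per-frame motion signals derived from Kinect skeleton tracking, across a range of time lags. The signals and the 30-frame lag window below are hypothetical.

```python
# Sketch: lagged-correlation measure of movement synchrony between two people.
# Hypothetical inputs: two 1-D motion signals (e.g., per-frame head speed).
import numpy as np

def max_lagged_correlation(a, b, max_lag=30):
    """Return the largest Pearson correlation between signals a and b over
    integer lags in [-max_lag, max_lag], plus the lag achieving it.
    A negative best lag means b trails a (b echoes a's movement later)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    best_r, best_lag = -1.0, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:len(a) + lag], b[-lag:]
        if len(x) < 2:
            continue
        r = np.corrcoef(x, y)[0, 1]  # Pearson correlation at this alignment
        if r > best_r:
            best_r, best_lag = r, lag
    return best_r, best_lag

# Synthetic example: the second signal repeats the first, 5 frames later.
rng = np.random.default_rng(0)
t = np.arange(300)
leader = np.sin(t / 10.0) + 0.1 * rng.standard_normal(300)
follower = np.roll(leader, 5)  # follower lags leader by 5 frames

r, lag = max_lagged_correlation(leader, follower, max_lag=30)
# r is highest when the 5-frame lag is compensated for (lag == -5).
```

The peak correlation serves as a simple synchrony score, and the lag at which it occurs hints at who is leading the interaction; real analyses would also compare against shuffled baselines to rule out chance alignment.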

This video, courtesy of MediaX at Stanford, explores the lab’s work in tying body language to creativity and learning. To read the full Stanford News article, see here.

Projects Archive »