Virtual Human Interaction Lab – Stanford University

Projects

Empathy at Scale

The following project seeks to design, test, and distribute virtual reality interventions that teach empathy. Virtual reality simulations allow learners to experience the life of someone else by “walking a mile” in that person’s shoes. Through the capabilities of the technology, learners can see their appearance and behaviors reflected in a virtual mirror as someone different from themselves, and can perceptually experience a scenario from the perspective of any party in a social interaction. Previous studies, including our own work using virtual reality to teach empathy toward people with disabilities, people with a different skin color, and people from different age groups, have demonstrated varying effectiveness of virtual reality in teaching empathy, but those studies suffered from three shortcomings:

  1. Small and homogeneous samples, typically upper-class college students around the age of twenty, limiting researchers’ ability to draw conclusions across different cultures and communities;
  2. A lack of longitudinal designs, meaning most studies do not follow subjects over time and so cannot determine the lasting effects of treatment; and
  3. A limited range of empathy scenarios, so they cannot isolate the barriers that may preclude the motivation to empathize.

This project will collect data from a large, demographically diverse sample of approximately 1,000 participants to test a wide range of empathy scenarios varying in domain (e.g., prejudice, bullying, classroom learning) and in the motivational factors that encourage empathy (e.g., immersiveness of the simulation, emotional valence of the treatment, strength of group affiliation). If successful, this project will also examine the effects of multiple virtual reality treatment sessions over six months. A sketch of how such a crossed design might be assigned to participants appears below.
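
The crossing of scenario domains with motivational factors described above can be illustrated with a short condition-assignment sketch. The factor levels and the balanced-block helper below are hypothetical placeholders, not the study’s actual design.

```python
import itertools
import random

# Hypothetical factor levels; the study's actual levels may differ.
DOMAINS = ["prejudice", "bullying", "classroom learning"]
IMMERSIVENESS = ["low", "high"]        # e.g., desktop vs. head-mounted display
VALENCE = ["negative", "positive"]     # emotional tone of the treatment
AFFILIATION = ["weak", "strong"]       # strength of group affiliation

# Every combination of domain and motivational factors (24 conditions).
CONDITIONS = list(itertools.product(DOMAINS, IMMERSIVENESS, VALENCE, AFFILIATION))

def assign_conditions(n_participants, seed=42):
    """Assign participants to conditions in shuffled, balanced blocks."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        block = CONDITIONS[:]
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

# Roughly 1,000 participants spread evenly across the conditions.
print(assign_conditions(1000)[:3])
```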

Our academic partner in this work is Dr. Jamil Zaki, founding director of Stanford’s Social Neuroscience Laboratory. This project is funded by the Robert Wood Johnson Foundation. We are also partnering with Sesame Workshop and others to extend our testing and distribute our empathy simulations.

PBS NewsHour features our empathy research.

Sustainable Behaviors

Extreme weather events are now dramatizing the effect humans are having on the planet. Yet we still face great challenges in staving off irrevocable climate change. It isn’t simply about convincing skeptical politicians; it’s about getting the public to visualize how their behaviors (like driving a gas-guzzling car or living in an energy-inefficient home) contribute to a problem that may only manifest itself fully in future decades. Our previous research has shown that virtual reality is uniquely effective at changing conservation behavior, as evidenced in studies about reducing paper use and about hot water conservation.

We are currently pursuing two projects that will utilize the affordances of virtual reality to teach people about the effects of climate change in marine environments:

- Ocean Acidification: Very few people have firsthand experience diving among reefs teeming with coral and fish life, so most of us have no exposure to the animals that will eventually disappear if our behavior doesn’t change. And even those who do can’t see the degradation in real time. Most people have either never heard of ocean acidification (the process by which the ocean becomes more acidic as it soaks up the carbon dioxide we release into the atmosphere) or wrongly assume it is another term for acid rain. In our experiments, learners experience multiple phases of the process that results in ocean acidification. In the first phase, which traces ocean acidification to the burning of fossil fuels, learners follow CO2 molecules as they are released into the atmosphere and absorbed by the surface water of the ocean. Subsequent phases involve the learner embodying or interacting with different ocean species and witnessing the changes in those species’ ecosystems as a result of increased CO2. These simulations will be guided by our marine science collaborators, Kristy Kroeker and Fio Micheli, and will be formatted to accommodate lessons for various age groups. Our collaborator Roy Pea will guide the learning science portion of the design, testing, and outreach for the simulations. We will use mobile VR equipment to collect data from a large and demographically diverse sample outside of the laboratory context. This project is sponsored by the Gordon and Betty Moore Foundation and builds on the lab’s previous research on ocean acidification. The video below from SF Gate focuses on the project, which allowed a participant to embody a piece of coral and learn about ocean acidification. To read the full article, which touches on other environmental projects at VHIL, see here.

- Fish Avatars: This project will transfer movement data from electronically tagged fish in the kelp forests of Monterey Bay into virtual reality, where humans can enter the underwater realm to observe virtual versions of live fish. Our collaborators from the Goldbogen lab will be constructing and implementing the underwater tracking sensors, and we will be building the systems to display fish avatars (a minimal sketch of this hand-off appears below). We are studying, from a psychological standpoint, how the experience of seeing “real” fish avatars in virtual reality differs from seeing recorded or simulated agents of those fish. Our previous research has demonstrated that avatars facilitate more physiological arousal and learning than agents. The end goal of this project is to let anyone on the planet “adopt a fish” and observe it through a head-mounted display in VR. This project is sponsored by the Stanford Woods Institute for the Environment.
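
As a minimal sketch of the hand-off from tag data to fish avatars, assuming the underwater sensors report sparse timestamped 3D positions, the snippet below resamples a track to a display-friendly rate. The 60 Hz rate and the linear interpolation are illustrative choices, not the actual pipeline.

```python
import numpy as np

def resample_track(timestamps, positions, rate_hz=60.0):
    """Turn sparse tag detections (t, x, y, z) into a smooth trajectory
    sampled at an assumed VR display rate, using linear interpolation."""
    timestamps = np.asarray(timestamps, dtype=float)
    positions = np.asarray(positions, dtype=float)      # shape (n, 3)
    t_out = np.arange(timestamps[0], timestamps[-1], 1.0 / rate_hz)
    resampled = np.column_stack(
        [np.interp(t_out, timestamps, positions[:, k]) for k in range(3)]
    )
    return t_out, resampled    # one position per rendered frame for the avatar

# Illustrative example: three detections of one tagged fish, in meters.
t, track = resample_track([0.0, 2.5, 5.0],
                          [[0, 0, -8], [3, 1, -9], [5, 0, -10]])
```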

SF Gate covers environmental projects at VHIL. Full article here.

Immersion and Presence

As virtual reality technology moves from laboratories to living rooms, the question “How immersive is enough?” has become uniquely important. For governments and corporations that seek to build systems, it is critical to know exactly how immersive those systems need to be. Inspired by an exhaustive meta-analysis of the qualities that make up an ideal virtual experience, the Immersion at Scale project explores the degree of immersion required for such an experience through the use of mobile virtual reality systems.

This project will utilize the flexibility of mobile virtual reality “suitcase” systems to recruit a large national sample. The question of immersion will be explored through three lenses: spatial, social, and learning, each of which places participants in vastly different (and physiologically arousing) virtual environments. Hardware manipulations of interest include field of view (FOV), image persistence, update rate, latency, and tracking level. Data will be collected through optical and magnetic tracking of head and body movement, and physiological sensors will be used to record heart rate and skin conductance; a sketch of how these conditions and measurements might be represented follows.
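
To make the manipulations and measurements concrete, here is a hedged sketch of how one hardware condition and one logged sample might be represented. The field names and example values are placeholders, not the project’s actual configuration or sensor interface.

```python
from dataclasses import dataclass

@dataclass
class HardwareCondition:
    """One cell of the immersion manipulation (illustrative values only)."""
    field_of_view_deg: float     # e.g., narrow vs. wide field of view
    image_persistence: str       # "low" or "full"
    update_rate_hz: int          # e.g., 30 vs. 90 Hz
    added_latency_ms: float      # extra motion-to-photon delay
    tracking: str                # "orientation-only" vs. "position-and-orientation"

@dataclass
class Sample:
    """One time-stamped measurement combining tracking and physiology."""
    t: float                     # seconds since trial start
    head_position: tuple         # (x, y, z) from optical/magnetic tracking
    head_rotation: tuple         # (yaw, pitch, roll)
    heart_rate_bpm: float
    skin_conductance_us: float   # microsiemens

# Hypothetical low-immersion cell of the design.
low_immersion = HardwareCondition(40.0, "full", 30, 50.0, "orientation-only")
```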

This video from Smart Planet explores some of the hardware and software used at VHIL to create an immersive virtual experience. You can view the original Smart Planet post here.

Learning in Immersive VR

A virtual classroom gives researchers the freedom to conduct experiments with complete control over the actions and appearance of virtual teachers, classmates, and surroundings. In collaboration with researchers from the Graduate School of Education, we are investigating how class subject, learning environment, and classroom makeup interact to shape participants’ interest and learning in a virtual class. Through the virtual world, we are also able to precisely monitor participants’ behavior in the classroom and look for correlations between these behaviors and learning outcomes.

Prior work in the lab has shown that believing one has had a social interaction in a virtual environment can increase arousal, focus attention, and improve learning. Further, experiments from the non-virtual world have been replicated and expanded in virtual reality, including studies of stereotype threat imposed by avatars and of gestures used in solving math problems.

This video, courtesy of Stanford University, gives a synopsis of how body movements in virtual reality can facilitate learning. To view the video on Stanford’s YouTube channel, see here.

Homuncular Flexibility

In this line of studies, we are examining a concept first developed by Jaron Lanier called “homuncular flexibility”: the ability to learn to control novel avatars in interactive tasks. For example, in order to reach further in virtual space, can users learn to control avatar bodies with extra limbs? How do the way these avatar bodies look and the way they are controlled affect task success, liking, and the sense of presence? In collaboration with Jaron Lanier, we are examining the conditions under which people learn to remap degrees of freedom onto digital space, including the amount of time it takes to form a mental representation of the remapping, the ability to add limbs, and the nature of the learning tasks that facilitate using remapped information (a toy example of such a remapping is sketched below). Previous work with Robin Rosenberg examined the psychological consequences of flying. We are currently examining the effects of controlling multiple limbs, and asking whether users can control multiple bodies at once (one-to-many).
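
As a toy illustration of remapping degrees of freedom onto an extra limb, the sketch below drives a hypothetical third arm from wrist rotations that the two ordinary arms leave unused. The source joints and gains are arbitrary stand-ins for whatever mapping a given study trains.

```python
import numpy as np

# Illustrative remapping: the avatar's third arm has no tracked counterpart,
# so its tip position is synthesized from wrist rotations the tracker already
# measures but the two normal arms do not fully use.
GAIN = np.array([0.4, 0.4, 0.2])   # meters of arm-tip travel per radian (arbitrary)

def third_arm_tip(left_wrist_rot, right_wrist_rot, chest_pos):
    """Map spare wrist-rotation degrees of freedom onto a third limb.

    left_wrist_rot, right_wrist_rot: (roll, pitch, yaw) in radians.
    chest_pos: (x, y, z) anchor point of the extra limb on the avatar.
    """
    spare_dof = np.asarray(left_wrist_rot) - np.asarray(right_wrist_rot)
    return np.asarray(chest_pos) + GAIN * spare_dof

# Example frame: both wrists slightly rotated, limb anchored at the chest.
tip = third_arm_tip((0.1, 0.0, 0.3), (-0.2, 0.1, 0.0), (0.0, 1.4, 0.2))
```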

This video gives an account of the Virtual Superhero study, which examined the psychological consequences of flying. For the full Stanford News article, see here.


Nonverbal Synchrony

In this line of studies, we are examining the importance of nonverbal communication in interpersonal interaction, focusing on interpersonal synchrony in particular. In order to examine these interactions in a naturalistic environment (without attaching trackers to participants), we use multiple inexpensive video game sensors, such as the Microsoft Kinect, to track the movements of participants engaged in two-person tasks. Recent publications, featured in Stanford News, include studies of teaching and learning and of collaborative creative ideation.
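
One common way to quantify interpersonal synchrony from this kind of tracking data is the lagged cross-correlation of the two participants’ movement. The sketch below assumes each Kinect stream has already been reduced to a per-frame movement-speed series; that preprocessing step, and the specific lag window, are assumptions for illustration rather than the lab’s published method.

```python
import numpy as np

def synchrony(speed_a, speed_b, max_lag=30):
    """Peak normalized cross-correlation between two movement-speed series
    (assumed: one speed value per frame, per participant), searched over
    +/- max_lag frames."""
    a = (np.asarray(speed_a) - np.mean(speed_a)) / np.std(speed_a)
    b = (np.asarray(speed_b) - np.mean(speed_b)) / np.std(speed_b)
    n = min(len(a), len(b))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:n], b[:n - lag]
        else:
            x, y = a[:n + lag], b[-lag:n]
        best = max(best, float(np.mean(x * y)))
    return best   # values near 1.0 indicate strongly synchronized movement

# Example: two noisy copies of the same motion are highly synchronized.
t = np.linspace(0, 10, 300)
base = np.abs(np.sin(t))
print(synchrony(base + 0.05 * np.random.randn(300), base))
```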

This video, courtesy of MediaX at Stanford, explores the lab’s work in tying body language to creativity and learning. To read the full Stanford News article, see here.

Projects Archive »