Virtual Human Interaction Lab – Stanford University

{ Projects }

Immersion at Scale

As virtual reality technology moves from laboratories to living rooms, the question "How immersive is enough?" has become uniquely important. For governments and corporations that seek to build such systems, it is critical to know exactly how immersive they need to be. Inspired by an exhaustive meta-analysis of the qualities that make up an ideal virtual experience, the Immersion at Scale project explores the degree of immersion required for such an experience through the use of mobile virtual reality systems.

This project takes advantage of the flexibility of mobile virtual reality "suitcase" systems to recruit a large national sample. The question of immersion will be explored through three lenses: spatial, social, and learning, each of which places participants in vastly different (and physiologically arousing) virtual environments. Hardware manipulations of interest include field of view (FOV), image persistence, update rate, latency, and tracking level. Data will be collected through optical and magnetic tracking of head and body movement, and physiological sensors will record heart rate and skin conductance.

This video from Smart Planet explores some of the aspects of the hardware and software used at VHIL to create an immersive virtual experience. You can view the original Smart Planet post here.

Sustainable Behaviors

The lab leverages virtual reality technology to understand and influence environmental behaviors and attitudes. Immersive virtual environment technology places individuals in personalized, vivid environmental scenarios that would otherwise be impossible or dangerous in the physical world. For example, users embody virtual coral to experience the impact of ocean acidification, or consume virtual coal to understand the amount of energy used during a shower. Our research suggests that immersive virtual reality may be more effective at producing long-term pro-environmental outcomes than less immersive and interactive media. Ongoing projects involve collaborations with the Woods Institute for the Environment at Stanford University to develop unique educational curricula.

This video from SF Gate focuses on a recent research project that allows a participant to embody a piece of coral and learn about ocean acidification. To read the full article, which touches on other environmental projects at VHIL, see here.

Learning in Immersive VR

A virtual classroom gives researchers the freedom to conduct experiments with complete control over the actions and appearance of virtual teachers, classmates, and surroundings. In collaboration with researchers from the Graduate School of Education, we are investigating how class subject, learning environment, and classroom makeup interact to affect participants' interest and learning in a virtual class. Through the virtual world, we are also able to precisely monitor participants' behavior in the classroom and look for correlations between these behaviors and learning outcomes.

Prior work in the lab has shown that believing one has had a social interaction in a virtual environment can increase arousal, focus attention, and improve learning. Further, experiments from the non-virtual world have been replicated and expanded in virtual reality, including studying stereotype threat imposed by avatars and examining gestures used in solving math problems.

This video, courtesy of Stanford University, gives a synopsis of how body movements in virtual reality can facilitate learning. To view the video on Stanford’s Youtube channel, see here.

Homuncular Flexibility

In this line of studies, we are examining a concept first developed by Jaron Lanier called "homuncular flexibility": the ability to learn to control novel avatars in interactive tasks. For example, in order to reach further in virtual space, can users learn to control avatar bodies with extra limbs? How do the way these avatar bodies look and the way they are controlled affect task success, liking, and the sense of presence? In collaboration with Jaron Lanier, we are examining the conditions under which people learn to remap degrees of freedom onto digital space, including the amount of time it takes to form a mental representation of the remapping, the ability to add limbs, and the nature of the learning tasks that facilitate using remapped information. Previous work with Robin Rosenberg examined the psychological consequences of flying. We are currently examining the effects of controlling multiple limbs and asking whether users can control multiple bodies at once (one-to-many).
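As a purely illustrative sketch (the function names and blend weights below are invented for this example, not the lab's actual implementation), remapping degrees of freedom can be thought of as a linear map from tracked physical inputs to avatar outputs, such as driving a third virtual arm from a blend of the two physical arms:

```python
def remap_limbs(tracked, remap_matrix):
    """Map tracked physical degrees of freedom (DOFs) onto avatar DOFs
    via a linear remapping. Each row of remap_matrix gives the weights
    that one avatar DOF places on every tracked DOF."""
    return [sum(w * x for w, x in zip(row, tracked)) for row in remap_matrix]


# Hypothetical example: two tracked DOFs (left/right arm elevation)
# drive three avatar limbs. Limbs 0 and 1 mirror the physical arms;
# limb 2, the extra limb, moves with the average of both arms.
REMAP = [
    [1.0, 0.0],
    [0.0, 1.0],
    [0.5, 0.5],
]
```

Under this kind of scheme, the experimental questions above become questions about the remapping itself: how long it takes users to internalize a given matrix, and which matrices remain learnable at all.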

This video gives an account of the Virtual Superhero study, which examined the psychological consequences of flying. For the full Stanford News Article, see here.


Nonverbal Communication

In this line of studies, we are examining the importance of nonverbal communication in interpersonal interaction, focusing on interpersonal synchrony in particular. To examine these interactions in a naturalistic environment (without body-worn trackers), we use multiple inexpensive video game sensors, such as the Microsoft Kinect, to track the movements of participants engaged in two-person tasks. Recent publications, featured in Stanford News, include studies of teaching and learning and of collaborative creative ideation.

This video, courtesy of MediaX at Stanford, explores the lab’s work in tying body language to creativity and learning. To read the full Stanford News article, see here.

Projects Archive »