Mutual gaze and task performance in shared virtual environments
Non-verbal behaviour, particularly gaze direction, plays a crucial role in regulating conversations and conveying critical social information. In the current set of studies, we represented interactants in a shared immersive virtual environment. Interactants sat in physically remote rooms, entered a common virtual room and played games of 20 questions. The interactants were represented by one of three types of avatars: (1) human forms with head movements rendered in real time; (2) human forms without head movements rendered; or (3) human voice only (i.e., a conference call). The data demonstrated that, compared to the other two conditions, interactants in the rendered head movement condition reported higher levels of co-presence, liked each other more, looked at each other's heads more, and spoke for a lower percentage of time during the game. We discuss implications for the design of shared virtual environments, the study of non-verbal behaviour and the goal of facilitating efficient task performance.