Publications

CCP: Configurable Crowd Profiles

Diversity among agents' behaviors, and heterogeneity in virtual crowds in general, is an important aspect of crowd simulation, as it is crucial to the perceived realism and plausibility of the resulting simulations. Heterogeneous crowds are the pillar of numerous real-life scenarios, such as museum exhibitions, which require variety in agent behaviors, from basic collision avoidance to more complex interactions both among agents and with environmental features. Most existing systems optimize for specific behaviors such as goal seeking, and neglect other behaviors and how these interact to form diverse agent profiles. In this paper, we present an RL-based framework for learning multiple agent behaviors concurrently. We optimize the agent policy by varying the importance of the selected behaviors (goal seeking, collision avoidance, interaction with the environment, and grouping) during training; essentially, we have a reward function that changes dynamically during training. The importance of each separate sub-behavior is added as input to the policy, resulting in a single model capable of capturing, as well as enabling dynamic run-time manipulation of, agent profiles, thus allowing configurable profiles. Through a series of experiments, we verify that our system provides users with the ability to design virtual scenes, control and mix agent behaviors to create personality profiles, and assign different profiles to groups of agents.

Moreover, we demonstrate that, interestingly, the proposed model generalizes to situations not seen in the training data, such as (a) crowds with higher density, (b) behavior weights outside the training intervals, and (c) scenes with more intricate environment layouts. Code, data, and trained policies for this paper are available at https://github.com/veupnea/CCP.
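
The core mechanism described above can be illustrated with a short sketch (our own illustration, not the released implementation; all names and the [0, 1] weight range are assumptions): the reward is a weighted sum of sub-behavior rewards, and the same weight vector is appended to the policy's observation so a single model can serve any profile.

```python
import numpy as np

# Illustrative sketch of a profile-weighted reward; names and ranges
# are assumptions, not taken from the released CCP code.
def combined_reward(r_goal, r_collision, r_interaction, r_group, w):
    """w: 4-vector of behavior weights defining the agent's profile."""
    return float(np.dot(w, [r_goal, r_collision, r_interaction, r_group]))

def make_observation(agent_state, w):
    # Feeding the weights to the policy is what makes the profile
    # configurable at run time without retraining.
    return np.concatenate([agent_state, w])

# During training, a fresh profile is sampled (e.g., each episode) so the
# policy experiences the whole space of behavior mixes:
w = np.random.uniform(0.0, 1.0, size=4)
obs = make_observation(np.zeros(10), w)
```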

Andreas Panayiotou, Theodoros Kyriakou, Marilena Lemonari, Yiorgos Chrysanthou, and Panayiotis Charalambous

The One-Man-Crowd: Single User Generation of Crowd Motions Using Virtual Reality

Tairan Yin (Inria, Univ Rennes, CNRS, IRISA, France); Ludovic Hoyet (Inria, Univ Rennes, CNRS, IRISA, France); Marc Christie (Inria, Univ Rennes, CNRS, IRISA, France); Marie-Paule Cani (Ecole Polytechnique, CNRS (LIX), IP Paris, France); Julien Pettré (Inria, Univ Rennes, CNRS, IRISA, France)

IEEE Transactions on Visualization and Computer Graphics, 28(5), pp. 2245-2255

Ubiq: A System to Build Flexible Social Virtual Reality Experiences

While they have long been a subject of academic study, social virtual reality (SVR) systems are now attracting increasingly large audiences on consumer virtual reality platforms. The design space of SVR systems is very large, and relatively little is known about how these systems should be constructed in order to be usable and efficient. In this paper we present Ubiq, a toolkit that facilitates the construction of SVR systems, and we argue for its design strategy and scope. Ubiq is built on the Unity platform and provides the core functionality of many SVR systems, such as connection management, voice, and avatars, while its design remains easy to extend. We demonstrate examples built on Ubiq and show how it has been successfully used in classroom teaching. Ubiq is open source (Apache License) and thus enables several use cases that commercial systems cannot.

Sebastian Friston, Ben Congdon, David Swapp, Lisa Izzouzi, Klara Brandstätter, Daniel Archer, Otto Olkkonen, Felix J. Thiel, Anthony Steed

The 27th ACM Symposium on Virtual Reality Software and Technology (VRST 2021)

Virtual Dance Museum: the Case of Greek/Cypriot Folk Dancing

Andreas Aristidou, Nefeli Andreou, Loukas Charalambous, Anastasios Yiannakidis, Yiorgos Chrysanthou

EUROGRAPHICS Workshop on Graphics and Cultural Heritage (EG GCH), Virtual Museums track, 4-6 November 2021, Bournemouth University, United Kingdom. https://diglib.eg.org/handle/10.2312/2633101

Effectiveness of Social Virtual Reality

A lot of work in social virtual reality, including our own group's, has focused on the effectiveness of specific social behaviours such as eye-gaze, turn-taking, gestures and other verbal and non-verbal cues. We have built upon these to look at emergent phenomena such as co-presence, leadership and trust. These studies give us good information about the usability issues of specific social VR systems, but they don't tell us much about the requirements for such systems going forward. In this short paper we discuss how we are broadening the scope of our work on social systems: moving out of the laboratory to more ecologically valid situations, and studying groups using social VR for longer periods of time.

Lisa Izzouzi, Anthony Steed

2021 ACM CHI Virtual Conference on Human Factors in Computing Systems (CHI 2021)

Spatio-temporal priors in 3D human motion

When we practice a movement, the human brain creates a motor memory of it. These memories are formed and stored in the brain as representations that allow us to perform familiar tasks faster than new movements. From a developmental robotics and embodied artificial agent perspective, it could also be beneficial to exploit the concept of these motor representations, in the form of spatio-temporal motion priors, for complex, full-body motion synthesis. Encoding such priors in neural networks as inductive biases captures the essential spatio-temporal structure of human motion. In our current work, we examine and compare recent approaches for capturing spatial and temporal dependencies in the machine learning algorithms used to model human motion.
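
As a toy illustration of such an inductive bias (our own sketch, unrelated to any specific model compared in the work; all layer sizes are assumptions), a motion model can be factored into a spatial mixing step across the skeleton and a temporal convolution along the frame axis:

```python
import torch
import torch.nn as nn

class SpatioTemporalBlock(nn.Module):
    """Toy block: spatial prior via joint mixing, temporal prior via a
    local convolution over frames. All sizes are illustrative."""
    def __init__(self, n_joints=24, channels=3, hidden=64, kernel=5):
        super().__init__()
        self.spatial = nn.Linear(n_joints * channels, hidden)
        self.temporal = nn.Conv1d(hidden, hidden, kernel, padding=kernel // 2)

    def forward(self, x):
        # x: (batch, frames, joints * channels)
        h = torch.relu(self.spatial(x))   # mix information across the skeleton
        h = h.transpose(1, 2)             # (batch, hidden, frames)
        h = torch.relu(self.temporal(h))  # mix information across nearby frames
        return h.transpose(1, 2)

motion = torch.randn(8, 120, 24 * 3)      # 8 clips, 120 frames, 24 joints (xyz)
features = SpatioTemporalBlock()(motion)  # (8, 120, 64)
```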

Anna Deichler, Kiran Chhatre, Jonas Beskow, Christopher Peters

ICDL StEPP Workshop, 22 August 2021

A Survey on Reinforcement Learning Methods in Character Animation

Reinforcement Learning is an area of Machine Learning focused on how agents can be trained to make sequential decisions and achieve a particular goal within an arbitrary environment. While learning, they repeatedly take actions based on their observation of the environment and receive appropriate rewards which define the objective. This experience is then used to progressively improve the policy controlling the agent's behavior, typically represented by a neural network. This trained module can then be reused for similar problems, which makes this approach promising for the animation of autonomous, yet reactive, characters in simulators, video games or virtual reality environments. This paper surveys modern Deep Reinforcement Learning (DRL) methods and discusses their possible applications in Character Animation, from skeletal control of a single, physically-based character to navigation controllers for individual agents and virtual crowds. It also describes the practical side of training DRL systems, comparing the different frameworks available to build such agents.
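
The observe-act-reward-update loop described above can be made concrete with a toy tabular Q-learning example (illustrative only; the survey itself covers deep, neural-network-based variants of this loop):

```python
import random

def step(state, action):
    """Toy environment: walk left/right on a line; state 5 is the goal."""
    next_state = max(0, min(5, state + (1 if action == 1 else -1)))
    return next_state, (1.0 if next_state == 5 else 0.0), next_state == 5

q = [[0.0, 0.0] for _ in range(6)]  # value estimates: 6 states x 2 actions
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        a = random.randrange(2) if random.random() < eps else int(q[s][1] > q[s][0])
        s2, r, done = step(s, a)
        # Q-learning update: move the estimate toward reward + discounted future value.
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2
```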

Ariel Kwiatkowski, Eduardo Alvarado, Vicky Kalogeiton, Karen Liu, Julien Pettré, Michiel van de Panne, Marie-Paule Cani

Computer Graphics Forum, Eurographics STAR

EMOCA: Emotion Driven Monocular Face Capture and Animation

Radek Danecek, Michael J. Black, Timo Bolkart

Conference on Computer Vision and Pattern Recognition (CVPR)

ICON: Implicit Clothed humans Obtained from Normals

Current methods for learning realistic and animatable 3D clothed avatars need either posed 3D scans or 2D images with carefully controlled user poses. In contrast, our goal is to learn the avatar from only 2D images of people in unconstrained poses. Given a set of images, our method estimates a detailed 3D surface from each image and then combines these into an animatable avatar. Implicit functions are well suited to the first task, as they can capture details like hair or clothes. Current methods, however, are not robust to varied human poses and often produce 3D surfaces with broken or disembodied limbs, missing details, or non-human shapes. The problem is that these methods use global feature encoders that are sensitive to global pose. To address this, we propose ICON ("Implicit Clothed humans Obtained from Normals"), which uses local features, instead. ICON has two main modules, both of which exploit the SMPL(-X) body model. First, ICON infers detailed clothed-human normals (front/back) conditioned on the SMPL(-X) normals. Second, a visibility-aware implicit surface regressor produces an iso-surface of a human occupancy field. Importantly, at inference time, a feedback loop alternates between refining the SMPL(-X) mesh using the inferred clothed normals and then refining the normals. Given multiple reconstructed frames of a subject in varied poses, we use SCANimate to produce an animatable avatar from them. Evaluation on the AGORA and CAPE datasets shows that ICON outperforms the state of the art in reconstruction, even with heavily limited training data. Additionally, it is much more robust to out-of-distribution samples, e.g., in-the-wild poses/images and out-of-frame cropping. ICON takes a step towards robust 3D clothed human reconstruction from in-the-wild images. This enables creating avatars directly from video with personalized and natural pose-dependent cloth deformation.
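
The inference-time feedback loop described in the abstract can be sketched as follows (a hypothetical outline with dummy stand-in functions that only keep data flowing; this is not the released ICON API):

```python
import numpy as np

def render_normals(body):                        # stand-in: body-only normal maps
    return 0.9 * body

def predict_cloth_normals(image, body_normals):  # stand-in for the normal network
    return 0.5 * (image + body_normals)

def refine_body(body, cloth_normals):            # stand-in for SMPL(-X) refinement
    return 0.5 * (body + cloth_normals)

def feedback_loop(image, body, n_iters=3):
    """Alternate between re-predicting clothed normals conditioned on the
    current body fit and refining the body fit against those normals."""
    for _ in range(n_iters):
        cloth_normals = predict_cloth_normals(image, render_normals(body))
        body = refine_body(body, cloth_normals)
    return body, cloth_normals

image, body = np.random.rand(8, 8), np.random.rand(8, 8)
body, normals = feedback_loop(image, body)
```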

Yuliang Xiu, Jinlong Yang, Dimitrios Tzionas, Michael J. Black

Conference on Computer Vision and Pattern Recognition (CVPR)

Real-Time Locomotion on Soft Grounds With Dynamic Footprints

When we move on snow, sand, or mud, the ground deforms under our feet, immediately affecting our gait. We propose a physically based model for computing such interactions in real time, from only the kinematic motion of a virtual character. The force applied by each foot on the ground during contact is estimated from the weight of the character, its current balance, the foot speed at the time of contact, and the nature of the ground. We rely on a standard stress-strain relationship to compute the dynamic deformation of the soil under this force, where the amount of compression and lateral displacement of material are, respectively, parameterized by the soil’s Young modulus and Poisson ratio. The resulting footprint is efficiently applied to the terrain through procedural deformations of refined terrain patches, while the addition of a simple controller on top of a kinematic character enables capturing the effect of ground deformation on the character’s gait. As our results show, the resulting footprints greatly improve visual realism, while ground compression results in consistent changes in the character’s motion. Readily applicable to any locomotion gait and soft soil material, our real-time model is ideal for enhancing the visual realism of outdoor scenes in video games and virtual reality applications.
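
The stress-strain reasoning can be illustrated with a back-of-the-envelope sketch (our own simplification; the paper's exact formulation and constants may differ):

```python
def footprint_deformation(force_n, foot_area_m2, layer_depth_m,
                          young_modulus_pa, poisson_ratio):
    """Linear-elastic estimate of how far a foot sinks into soft ground."""
    stress = force_n / foot_area_m2        # pressure under the foot (Pa)
    strain = stress / young_modulus_pa     # Hooke's law
    compression = strain * layer_depth_m   # vertical sinking (m)
    # Poisson's ratio couples vertical compression to lateral displacement
    # of material bulging around the footprint.
    lateral = poisson_ratio * compression
    return compression, lateral

# ~800 N (an 80 kg character) on a 0.02 m^2 foot over 0.1 m of soft snow:
depth, bulge = footprint_deformation(800.0, 0.02, 0.1, 2e5, 0.3)
print(f"compression: {depth * 100:.1f} cm, lateral: {bulge * 100:.1f} cm")
```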

Eduardo Alvarado, Chloé Paliard, Damien Rohmer, Marie-Paule Cani

Frontiers in Virtual Reality

Soft Walks: Real-Time, Two-Ways Interaction between a Character and Loose Grounds

When walking on loose terrain, possibly covered with vegetation, the ground and grass should deform, but the character's gait should also change accordingly. We propose a method for modeling such two-way interactions in real time.

Chloé Paliard, Eduardo Alvarado, Damien Rohmer, Marie-Paule Cani

Eurographics 2021