Publications

The One-Man-Crowd: Single User Generation of Crowd Motions Using Virtual Reality

Tairan Yin/Inria, Univ Rennes, CNRS, IRISA, France; Ludovic Hoyet/Inria, Univ Rennes, CNRS, IRISA, France; Marc Christie/Inria, Univ Rennes, CNRS, IRISA, France; Marie-Paule Cani/Ecole Polytechnique, CNRS (LIX), IP Paris, France; Julien Pettré/Inria, Univ Rennes, CNRS, IRISA, France

IEEE Transactions on Visualization and Computer Graphics, 28(5), 2245-2255


Ubiq: A System to Build Flexible Social Virtual Reality Experiences

Sebastian Friston, Ben Congdon, David Swapp, Lisa Izzouzi, Klara Brandstätter, Daniel Archer, Otto Olkkonen, Felix J. Thiel, Anthony Steed

The 27th ACM Symposium on Virtual Reality Software and Technology/VRST 2021


Virtual Dance Museum: the Case of Greek/Cypriot Folk Dancing

Andreas Aristidou, Nefeli Andreou, Loukas Charalambous, Anastasios Yiannakidis, Yiorgos Chrysanthou

EUROGRAPHICS Workshop on Graphics and Cultural Heritage / EG GCH / 4-6 November 2021 / Bournemouth University, United Kingdom / https://diglib.eg.org/handle/10.2312/2633101 / Virtual Museums


Effectiveness of Social Virtual Reality

Lisa Izzouzi, Anthony Steed

2021 ACM CHI Virtual Conference on Human Factors in Computing Systems / CHI 2021


Spatio-temporal priors in 3D human motion

When we practice a movement, the human brain creates a motor memory of it. These memories are formed and stored in the brain as representations that allow us to perform familiar tasks faster than new movements. From a developmental robotics and embodied artificial agent perspective, it could also be beneficial to exploit the concept of these motor representations in the form of spatio-temporal motion priors for complex, full-body motion synthesis. Encoding such priors in neural networks as inductive biases captures the essential spatio-temporal aspects of human motion. In our current work, we examine and compare recent approaches for capturing spatial and temporal dependencies with machine learning algorithms used to model human motion.
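
As a rough illustration of the two kinds of dependencies discussed above, the sketch below (not taken from the paper; the skeleton dimensions, layer sizes, and shapes are assumptions) contrasts a recurrent layer that models how poses evolve over time with a per-frame convolution over the joint axis that models spatial structure.

```python
# Minimal, illustrative sketch (not from the paper): two common ways of
# encoding spatio-temporal priors over a motion sequence in PyTorch.
# Shapes and layer sizes are assumptions chosen only for demonstration.
import torch
import torch.nn as nn

batch, frames, joints, channels = 8, 120, 24, 3            # hypothetical skeleton
motion = torch.randn(batch, frames, joints * channels)      # flattened poses per frame

# Temporal prior: a recurrent layer summarizes how poses evolve over time.
temporal = nn.GRU(input_size=joints * channels, hidden_size=128, batch_first=True)
temporal_features, _ = temporal(motion)                      # (batch, frames, 128)

# Spatial prior: a 1D convolution over the joint axis mixes information
# between neighbouring joints within each frame.
per_frame = motion.view(batch * frames, channels, joints)    # (N, channels, joints)
spatial = nn.Conv1d(in_channels=channels, out_channels=16, kernel_size=3, padding=1)
spatial_features = spatial(per_frame)                        # (N, 16, joints)

print(temporal_features.shape, spatial_features.shape)
```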

Anna Deichler, Kiran Chhatre, Jonas Beskow, Christopher Peters

ICDL StEPP Workshop, 22 August 2021


A Survey on Reinforcement Learning Methods in Character Animation

Reinforcement Learning is an area of Machine Learning focused on how agents can be trained to make sequential decisions, and achieve a particular goal within an arbitrary environment. While learning, they repeatedly take actions based on their observation of the environment, and receive appropriate rewards which define the objective. This experience is then used to progressively improve the policy controlling the agent's behavior, typically represented by a neural network. This trained module can then be reused for similar problems, which makes this approach promising for the animation of autonomous, yet reactive characters in simulators, video games or virtual reality environments. This paper surveys the modern Deep Reinforcement Learning methods and discusses their possible applications in Character Animation, from skeletal control of a single, physically-based character to navigation controllers for individual agents and virtual crowds. It also describes the practical side of training DRL systems, comparing the different frameworks available to build such agents.
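
As a rough illustration of the loop described in the abstract (observe, act, receive a reward, improve a neural-network policy), here is a minimal REINFORCE-style sketch; the toy environment, network, and hyperparameters are illustrative assumptions and are not taken from the survey.

```python
# Minimal REINFORCE-style sketch of the agent-environment loop: the policy
# network acts, collects rewards, and is improved by gradient ascent on the
# return-weighted log-probabilities of its actions.
import torch
import torch.nn as nn

class ToyEnv:
    """Hypothetical 1D task: start at 0, get rewarded for reaching +5 within 20 steps."""
    def reset(self):
        self.pos, self.t = 0, 0
        return torch.tensor([float(self.pos)])
    def step(self, action):              # action 0 -> move left, 1 -> move right
        self.pos += 1 if action == 1 else -1
        self.t += 1
        done = self.pos >= 5 or self.t >= 20
        reward = 1.0 if self.pos >= 5 else 0.0
        return torch.tensor([float(self.pos)]), reward, done

policy = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
env = ToyEnv()

for episode in range(200):
    obs, done = env.reset(), False
    log_probs, rewards = [], []
    while not done:                      # agent-environment interaction loop
        dist = torch.distributions.Categorical(logits=policy(obs))
        action = dist.sample()
        obs, reward, done = env.step(action.item())
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
    # Policy-gradient update: increase the probability of actions in
    # proportion to the total reward collected after them (no discounting here).
    returns = torch.tensor([sum(rewards[i:]) for i in range(len(rewards))])
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```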

Ariel Kwiatkowski, Eduardo Alvarado, Vicky Kalogeiton, Karen Liu, Julien Pettré, Michiel van de Panne, Marie-Paule Cani

Computer Graphics Forum, Eurographics STAR


EMOCA: Emotion Driven Monocular Face Capture and Animation

Radek Danecek, Michael J. Black, Timo Bolkart

Conference on Computer Vision and Pattern Recognition (CVPR)


ICON: Implicit Clothed humans Obtained from Normals

Yuliang Xiu, Jinlong Yang, Dimitrios Tzionas, Michael J. Black

Conference on Computer Vision and Pattern Recognition (CVPR)


Real-Time Locomotion on Soft Grounds With Dynamic Footprints

When we move on snow, sand, or mud, the ground deforms under our feet, immediately affecting our gait. We propose a physically based model for computing such interactions in real time, from only the kinematic motion of a virtual character. The force applied by each foot on the ground during contact is estimated from the weight of the character, its current balance, the foot speed at the time of contact, and the nature of the ground. We rely on a standard stress-strain relationship to compute the dynamic deformation of the soil under this force, where the amount of compression and lateral displacement of material are, respectively, parameterized by the soil’s Young modulus and Poisson ratio. The resulting footprint is efficiently applied to the terrain through procedural deformations of refined terrain patches, while the addition of a simple controller on top of a kinematic character enables capturing the effect of ground deformation on the character’s gait. As our results show, the resulting footprints greatly improve visual realism, while ground compression results in consistent changes in the character’s motion. Readily applicable to any locomotion gait and soft soil material, our real-time model is ideal for enhancing the visual realism of outdoor scenes in video games and virtual reality applications.
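
To make the described pipeline concrete, here is a back-of-the-envelope sketch of how a contact force could be turned into a footprint depth with a linear stress-strain law; all values and the simplified formulas are illustrative assumptions, not the paper's exact model.

```python
# Illustrative estimate of a footprint from a single foot contact:
# contact force from weight plus foot deceleration, then a linear
# stress-strain law parameterized by Young's modulus and Poisson's ratio.
# All numbers below are hypothetical.
mass = 70.0                 # character mass (kg)
g = 9.81                    # gravity (m/s^2)
foot_speed = 0.8            # downward foot speed at contact (m/s)
contact_time = 0.1          # assumed duration of the impact (s)
foot_area = 0.02            # contact area of the foot (m^2)

# Contact force: weight plus an impulsive term from decelerating the foot.
force = mass * g + mass * foot_speed / contact_time          # N

# Soil parameters for a soft ground (hypothetical values).
young_modulus = 1.0e6       # Pa, controls how much the soil compresses
poisson_ratio = 0.3         # controls lateral displacement of material
soil_layer = 0.10           # deformable layer thickness (m)

stress = force / foot_area                                    # Pa
strain = stress / young_modulus                               # linear stress-strain law
footprint_depth = strain * soil_layer                         # vertical compression (m)
lateral_bulge = poisson_ratio * footprint_depth               # material pushed sideways (m)

print(f"depth ~ {footprint_depth*100:.1f} cm, lateral bulge ~ {lateral_bulge*100:.1f} cm")
```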

Eduardo Alvarado, Chloé Paliard, Damien Rohmer, Marie-Paule Cani

Frontiers in Virtual Reality


Soft Walks: Real-Time, Two-Ways Interaction between a Character and Loose Grounds

When walking on loose terrains, possibly covered with vegetation, the ground and grass should deform, but the character's gait should also change accordingly. We propose a method for modeling such two-way interactions in real time.

Chloé Paliard, Eduardo Alvarado, Damien Rohmer, Marie-Paule Cani

Eurographics 2021