Virtual characters for realistic scenarios

We are very excited to announce that our article dedicated to the CLIPE project has just been published in the Winter issue of EU Research. The article presents an overview of the project, its objectives and main completed activities, along with mini-interviews with two ESRs, Nefeli Andreou and Rafael Blanco, who talked about their projects on Motion Capture Data and Crowd Simulation, and their plans for the future.

Chrysanthou, Y., Pelechano, N., Andreou, N., Blanco, R.

Ubiq-exp: A toolkit to build and run remote and distributed mixed reality experiments

Developing mixed-reality (MR) experiments is a challenge as there is a wide variety of functionality to support. This challenge is exacerbated if the MR experiment is multi-user or if the experiment needs to be run out of the lab. We present Ubiq-Exp - a set of tools that provide a variety of functionality to facilitate distributed and remote MR experiments. We motivate our design and tools from recent practice in the field and a desire to build experiments that are easier to reproduce. Key features are the ability to support supervised and unsupervised experiments, and a variety of tools for the experimenter to facilitate operation and documentation of the experimental sessions. We illustrate the potential of the tools through three small-scale pilot experiments. Our tools and pilot experiments are released under a permissive open-source license to enable developers to appropriate and develop them further for their own needs.

For the full article:

Anthony Steed et al.

Integrating Rocketbox Avatars with the Ubiq Social VR platform

Having truly ethical, unbiased technology requires that the people developing and using this technology have an equal opportunity to participate in its creation. In this sense, open-access tools are a way to share best practices and enhance collaboration. In this paper, we present the integration of the Microsoft Rocketbox avatar library into the Unity networking library Ubiq, and show how they may contribute to research in the field of populated virtual environments.

For the full article:

Lisa Izzouzi & Anthony Steed

A new framework for the evaluation of locomotive motion datasets through motion matching techniques

Analyzing motion data is a critical step when building meaningful locomotive motion datasets. This can be done by labeling motion capture data and inspecting it, through a planned motion capture session, or by carefully selecting locomotion clips from a public dataset. These analyses, however, have no clear definition of coverage, making it harder to diagnose when something goes wrong, such as a virtual character not being able to perform an action or not moving at a given speed. This issue is compounded by the large amount of information present in motion capture data, which poses a challenge when trying to interpret it. This work provides a visualization and an optimization method to streamline the process of crafting locomotive motion datasets. It provides a more grounded approach to locomotive motion analysis by calculating different quality metrics: coverage in terms of both linear and angular speeds, frame use frequency in each animation clip, deviation from the planned path, number of transitions, number of used vs. unused animations, and transition cost. By using these metrics as a means of comparison between motion datasets, our approach provides a less subjective alternative to the modification and analysis of motion datasets, while improving interpretability.
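The speed-coverage idea from the abstract can be illustrated with a minimal sketch (our illustration, not the paper's code): bin per-frame linear and angular root speeds into a 2D histogram, so that empty bins expose combinations of speeds the dataset cannot reproduce. The function name and bin layout here are assumptions for the example.

```python
import numpy as np

def speed_coverage(root_positions, root_yaws, fps, lin_bins, ang_bins):
    """Histogram of (linear speed, angular speed) pairs over all frames.

    root_positions: (N, 2) ground-plane root trajectory in meters
    root_yaws: (N,) facing angle in radians
    Empty bins reveal speed combinations the dataset cannot produce.
    """
    dt = 1.0 / fps
    # per-frame linear speed from consecutive root positions
    lin = np.linalg.norm(np.diff(root_positions, axis=0), axis=1) / dt
    # wrap yaw differences into (-pi, pi] before differentiating
    dyaw = (np.diff(root_yaws) + np.pi) % (2 * np.pi) - np.pi
    ang = np.abs(dyaw) / dt
    hist, _, _ = np.histogram2d(lin, ang, bins=[lin_bins, ang_bins])
    return hist

# A straight-line walk at 1.5 m/s sampled at 30 fps: every frame lands
# in the (1-2 m/s, 0-1 rad/s) bin, all other bins stay empty.
pos = np.stack([np.linspace(0.0, 1.5, 31), np.zeros(31)], axis=1)
yaw = np.zeros(31)
cov = speed_coverage(pos, yaw, 30, lin_bins=[0, 1, 2, 3], ang_bins=[0, 1, 2])
```

A real pipeline would aggregate such histograms over every clip in the dataset and compare them against the speed ranges the application requires.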

For the full article:

Vicenzo Abichequer Sangalli, Ludovic Hoyet, Marc Christie, Julien Pettré

Generating Upper-Body Motion for Real-Time Characters Making their Way through Dynamic Environments

Real-time character animation in dynamic environments requires the generation of plausible upper-body movements regardless of the nature of the environment, including non-rigid obstacles such as vegetation. We propose a flexible model for upper-body interactions, based on the anticipation of the character’s surroundings, and on antagonistic controllers to adapt the amount of muscular stiffness and response time to better deal with obstacles. Our solution relies on a hybrid method for character animation that couples a keyframe sequence with kinematic constraints and lightweight physics. The dynamic response of the character’s upper limbs leverages antagonistic controllers, allowing us to tune tension/relaxation in the upper body without diverging from the reference keyframe motion. A new sight model, controlled by procedural rules, enables high-level authoring of the way the character generates interactions by adapting its stiffness and reaction time. As results show, our real-time method offers precise and explicit control over the character’s behavior and style, while seamlessly adapting to new situations. Our model is therefore well suited for gaming applications.
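The antagonistic-controller idea can be sketched in a few lines (our illustration of the general principle, not the paper's implementation): two opposing spring-like gains act on a joint, so co-activation sets stiffness and response time while the gain ratio sets the rest pose. All names and constants below are assumptions for the example.

```python
def antagonistic_torque(theta, theta_dot, k_flex, k_ext,
                        theta_flex=-1.0, theta_ext=1.0, damping=0.1):
    """Torque on a 1-DoF joint from an antagonistic gain pair.

    Each "muscle" pulls toward its own limit angle. The sum
    k_flex + k_ext sets the joint stiffness (how fast it reacts);
    the ratio of the gains shifts the equilibrium angle:
        theta_eq = (k_flex*theta_flex + k_ext*theta_ext) / (k_flex + k_ext)
    """
    torque = k_flex * (theta_flex - theta) + k_ext * (theta_ext - theta)
    return torque - damping * theta_dot  # light damping for stability

# Equal gains: equilibrium at theta = 0, so no torque at rest.
t0 = antagonistic_torque(0.0, 0.0, k_flex=2.0, k_ext=2.0)
# Displaced joint: restoring torque proportional to total stiffness.
t1 = antagonistic_torque(0.5, 0.0, k_flex=2.0, k_ext=2.0)
```

Raising both gains together makes the limb track the keyframe pose more rigidly; lowering them lets obstacles such as vegetation push the limb aside, which matches the tension/relaxation tuning described above.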

Eduardo Alvarado, Damien Rohmer, Marie-Paule Cani

Pose Representations for Deep Skeletal Animation

Data-driven skeletal animation relies on the existence of a suitable learning scheme, which can capture the rich context of motion. However, commonly used motion representations often fail to accurately encode the full articulation of motion, or present artifacts. In this work, we address the fundamental problem of finding a robust pose representation for motion, suitable for deep skeletal animation, one that can better constrain poses and faithfully capture nuances correlated with skeletal characteristics. Our representation is based on dual quaternions, mathematical abstractions with well-defined operations that simultaneously encode rotational and positional information, enabling a rich encoding centered around the root. We demonstrate that our representation overcomes common motion artifacts, and assess its performance compared to other popular representations. We conduct an ablation study to evaluate the impact of various losses that can be incorporated during learning. Leveraging the fact that our representation implicitly encodes skeletal motion attributes, we train a network on a dataset comprising skeletons with different proportions, without the need to first retarget them to a universal skeleton, which causes subtle motion elements to be missed. Qualitative results demonstrate the usefulness of the parameterization in skeleton-specific synthesis.
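The core construction behind such a representation, building a dual quaternion from a rotation and a translation, can be shown in a short sketch (a standard textbook formula, not the paper's code; function names are ours):

```python
import numpy as np

def quat_mul(q1, q2):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def to_dual_quat(rotation_q, translation):
    """Rigid transform -> dual quaternion (q_r, q_d).

    q_r is the unit rotation quaternion; the dual part is
    q_d = 0.5 * t * q_r, with t the translation embedded as the
    pure quaternion (0, x, y, z). One 8-number object thus encodes
    rotation and position jointly, which is what makes the
    representation attractive for learning.
    """
    t = np.array([0.0, *translation])
    q_d = 0.5 * quat_mul(t, rotation_q)
    return rotation_q, q_d

# Identity rotation translated by (1, 2, 3):
q_r, q_d = to_dual_quat(np.array([1.0, 0.0, 0.0, 0.0]), [1.0, 2.0, 3.0])
```

In a learning setting, each joint's world transform relative to the root would be converted this way, giving the network a pose vector in which rotation and position are coupled rather than predicted separately.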

N. Andreou, A. Aristidou and Y. Chrysanthou

CCP: Configurable Crowd Profiles

Diversity among agents’ behaviors, and heterogeneity in virtual crowds in general, is an important aspect of crowd simulation, as it is crucial to the perceived realism and plausibility of the resulting simulations. Heterogeneous crowds are the pillar of numerous real-life scenarios, such as museum exhibitions, which require variety in agent behaviors, from basic collision avoidance to more complex interactions both among agents and with environmental features. Most existing systems optimize for specific behaviors such as goal seeking, and neglect other behaviors and how these interact to form diverse agent profiles. In this paper, we present an RL-based framework for learning multiple agent behaviors concurrently. We optimize the agent policy by varying the importance of the selected behaviors (goal seeking, collision avoidance, interaction with the environment, and grouping) while training; essentially, we have a reward function that changes dynamically during training. The importance of each separate sub-behavior is added as input to the policy, resulting in a single model capable of capturing, as well as enabling dynamic run-time manipulation of, agent profiles; thus allowing configurable profiles. Through a series of experiments, we verify that our system gives users the ability to design virtual scenes, control and mix agent behaviors to create personality profiles, and assign different profiles to groups of agents.

Moreover, we demonstrate that, interestingly, the proposed model generalizes to situations not seen in the training data, such as (a) crowds with higher density, (b) behavior weights outside the training intervals, and (c) scenes with more intricate environment layouts. Code, data and trained policies for this paper are at
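The dynamically weighted reward described above can be sketched as follows (our simplified illustration, not the CCP codebase; the sub-reward terms and state fields are assumptions for the example):

```python
import numpy as np

def sub_rewards(agent_state):
    # One scalar per sub-behavior, in a fixed order:
    # goal seeking, collision avoidance, interaction, grouping.
    return np.array([
        agent_state["goal_progress"],       # reward for approaching goal
        -agent_state["collision_penalty"],  # penalty for near-collisions
        agent_state["interaction_bonus"],   # reward for using scene objects
        agent_state["group_cohesion"],      # reward for staying with group
    ])

def profile_reward(agent_state, weights):
    # The weight vector is varied during training and also fed to the
    # policy as an observation, so a single trained model can be steered
    # to any profile at run time just by changing the weights.
    return float(np.dot(weights, sub_rewards(agent_state)))

state = {"goal_progress": 1.0, "collision_penalty": 0.2,
         "interaction_bonus": 0.0, "group_cohesion": 0.5}
w = np.array([0.7, 1.0, 0.0, 0.3])  # e.g. a goal-driven, cautious profile
r = profile_reward(state, w)
```

Because the weights are part of the policy input rather than baked into the loss, switching an agent from, say, a "curious visitor" to a "goal-driven commuter" profile at run time requires no retraining.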

Andreas Panayiotou, Theodoros Kyriakou, Marilena Lemonari, Yiorgos Chrysanthou, and Panayiotis Charalambous

The One-Man-Crowd: Single User Generation of Crowd Motions Using Virtual Reality

Tairan Yin (Inria, Univ Rennes, CNRS, IRISA, France), Ludovic Hoyet (Inria, Univ Rennes, CNRS, IRISA, France), Marc Christie (Inria, Univ Rennes, CNRS, IRISA, France), Marie-Paule Cani (Ecole Polytechnique, CNRS (LIX), IP Paris, France), Julien Pettré (Inria, Univ Rennes, CNRS, IRISA, France)

Ubiq: A System to Build Flexible Social Virtual Reality Experiences

While they have long been a subject of academic study, social virtual reality (SVR) systems are now attracting increasingly large audiences on current consumer virtual reality systems. The design space of SVR systems is very large, and relatively little is known about how these systems should be constructed in order to be usable and efficient. In this paper we present Ubiq, a toolkit that focuses on facilitating the construction of SVR systems. We argue for the design strategy of Ubiq and its scope. Ubiq is built on the Unity platform. It provides core functionality of many SVR systems such as connection management, voice, avatars, etc. However, its design remains easy to extend. We demonstrate examples built on Ubiq and how it has been successfully used in classroom teaching. Ubiq is open source (Apache License) and thus enables several use cases that commercial systems cannot.

Sebastian Friston, Ben Congdon, David Swapp, Lisa Izzouzi, Klara Brandstätter, Daniel Archer, Otto Olkkonen, Felix J. Thiel, Anthony Steed

Virtual Dance Museum: the Case of Greek/Cypriot Folk Dancing

Andreas Aristidou, Nefeli Andreou, Loukas Charalambous, Anastasios Yiannakidis, Yiorgos Chrysanthou


A lot of work in social virtual reality, including our own group’s, has focused on the effectiveness of specific social behaviours such as eye-gaze, turn-taking, gestures and other verbal and non-verbal cues. We have built upon these to look at emergent phenomena such as co-presence, leadership and trust. These give us good information about the usability issues of specific social VR systems, but they do not tell us much about the requirements for such systems going forward. In this short paper we discuss how we are broadening the scope of our work on social systems, moving out of the laboratory to more ecologically valid situations and studying groups using social VR for longer periods of time.

Lisa Izzouzi, Anthony Steed

Spatio-temporal priors in 3D human motion

When we practice a movement, the human brain creates a motor memory of it. These memories are formed and stored in the brain as representations which allow us to perform familiar tasks faster than new movements. From a developmental robotics and embodied artificial agent perspective, it could also be beneficial to exploit the concept of these motor representations in the form of spatio-temporal motion priors for complex, full-body motion synthesis. Encoding such priors in neural networks as inductive biases captures the essential spatio-temporal nature of human motion. In our current work, we examine and compare recent approaches for capturing spatial and temporal dependencies with machine learning algorithms that are used to model human motion.

Anna Deichler, Kiran Chhatre, Jonas Beskow, Christopher Peters
