Interactive control of multi-agent motion in virtual environments
Abstract
With the increased use of crowd simulation in animation, specifying crowd
motion can be very time consuming and demands a great deal of user input. To
alleviate this cost, we wish to allow a user to interactively manipulate the many
degrees of freedom in a crowd, whilst accounting for the limitations of the
low-dimensional signals produced by standard input devices. In this thesis we
present two approaches for achieving this: 1) combining shape deformation methods
with a multitouch input device, allowing a user to control the motion of the crowd
in dynamic environments, and 2) applying a data-driven approach that learns the
mapping between a crowd’s motion and the corresponding user input, enabling
intuitive control of a crowd.
In our first approach, we represent the crowd as a deformable mesh that a user
manipulates with a multitouch device. The user controls the shape and motion of
the crowd by altering the mesh, and the mesh in turn deforms according to the
environment. We handle congestion and perturbations by having agents dynamically
reassign their goals in the formation using a mass transport solver. Our method
allows control of a crowd in a single pass, improving on the time taken by
previous multi-stage approaches. We validate our method with a user study,
comparing our control algorithm against a common mouse-based controller. We also
develop a simplified version of motion data patches to model character-environment
interactions, which are largely ignored in previous crowd research, and design an
environment-aware cost metric for the mass transport solver that considers how
these interactions affect a character’s ability to track the user’s commands.
Experimental results show that our system can produce realistic crowd scenes from
minimal, high-level input signals.
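To make the goal-reassignment step concrete, the following is a minimal sketch
rather than the thesis implementation: it assumes agents and formation slots are
2-D points, substitutes SciPy’s Hungarian solver (linear_sum_assignment) for the
mass transport solver, and illustrates the environment-aware cost with a
hypothetical per-slot congestion penalty.

    # Sketch only: a discrete assignment stands in for the mass transport
    # solver, and "congestion" is a hypothetical per-slot penalty.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def reassign_goals(agent_pos, slot_pos, congestion):
        """Assign each agent a formation slot, minimising total cost.

        agent_pos  : (n, 2) agent positions
        slot_pos   : (n, 2) formation slot positions
        congestion : (n,)   hypothetical penalty for entering each slot
        """
        # Travel cost: pairwise Euclidean distance from agents to slots.
        travel = np.linalg.norm(agent_pos[:, None, :] - slot_pos[None, :, :],
                                axis=-1)
        # Environment-aware cost: distance plus the per-slot penalty.
        cost = travel + congestion[None, :]
        rows, cols = linear_sum_assignment(cost)
        return cols  # cols[i] is the slot assigned to agent i

In a single-pass pipeline, such a solve would be re-run whenever congestion or a
perturbation invalidates the current agent-to-slot mapping, which is what lets
the formation recover without further user input.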
In our second approach, we observe that crowd simulation control algorithms
inherently impose restrictions on how user input affects the motion of the crowd.
To bypass this, we investigate a data-driven approach that creates a direct
mapping between low-dimensional user input and the resulting high-dimensional
crowd motion. Results show that the crowd motion can be inferred directly from
variations in a user’s input signals, giving the user greater freedom to define
the animation.
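As an illustration of what such a mapping could look like, here is a minimal
sketch under strong assumptions not taken from the thesis: the user’s input is
summarised as a small feature vector (for example, stroke direction and speed),
the crowd’s motion is flattened into one long vector of per-agent displacements,
and a linear least-squares map stands in for whatever model is actually learned.

    # Sketch only: a linear map fit by least squares, not the thesis model.
    import numpy as np

    def fit_mapping(inputs, motions):
        """Fit W so that motions is approximated by inputs @ W.

        inputs  : (m, d) low-dimensional user-input features
        motions : (m, k) matching flattened crowd-motion vectors, k >> d
        """
        W, *_ = np.linalg.lstsq(inputs, motions, rcond=None)
        return W

    def infer_motion(W, user_input):
        # Expand a new low-dimensional input into high-dimensional motion.
        return user_input @ W

    # Hypothetical shapes: 4 gesture features mapped to 100 agents over
    # 3 frames in 2 axes (600 values). Random data, for illustration only.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 4))
    Y = rng.standard_normal((200, 600))
    motion = infer_motion(fit_mapping(X, Y), X[0])

Under this framing, variations in the user’s input become directions in the
low-dimensional input space that the learned map expands into coordinated
per-agent motion.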