Abstract
Steering is a challenging task required of nearly all agents in virtual worlds. With a large and growing number of steering approaches, it is becoming increasingly important to ask a fundamental question: how can we objectively compare steering algorithms? To our knowledge, there is no standard way of evaluating or comparing the quality of steering solutions. This paper presents SteerBench: a benchmark framework for objectively evaluating steering behaviors for virtual agents. We propose a diverse set of test cases, evaluation metrics, and a scoring method that can be used to compare different steering algorithms. Our framework can be easily customized to evaluate specific behaviors and new test cases. We demonstrate our benchmark process on two example steering algorithms, showing the insight gained from our metrics. We hope that this framework can grow into a standard for steering evaluation.
| Original language | English (US) |
|---|---|
| Pages (from-to) | 533-548 |
| Number of pages | 16 |
| Journal | Computer Animation and Virtual Worlds |
| Volume | 20 |
| Issue number | 5-6 |
| DOIs | |
| State | Published - Sep 2009 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Software
- Computer Graphics and Computer-Aided Design
Keywords
- Benchmarking
- Crowd behaviors
- Pedestrians
- Steering