SteerBench: A benchmark suite for evaluating steering behaviors

Shawn Singh, Mubbasir Kapadia, Petros Faloutsos, Glenn Reinman

Research output: Contribution to journal › Article › peer-review


Abstract

Steering is a challenging task, required by nearly all agents in virtual worlds. There is a large and growing number of approaches for steering, and it is becoming increasingly important to ask a fundamental question: how can we objectively compare steering algorithms? To our knowledge, there is no standard way of evaluating or comparing the quality of steering solutions. This paper presents SteerBench: a benchmark framework for objectively evaluating steering behaviors for virtual agents. We propose a diverse set of test cases, metrics of evaluation, and a scoring method that can be used to compare different steering algorithms. Our framework can be easily customized by a user to evaluate specific behaviors and new test cases. We demonstrate our benchmark process on two example steering algorithms, showing the insight gained from our metrics. We hope that this framework can grow into a standard for steering evaluation.
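
The abstract describes collecting per-test-case metrics and combining them into a score for comparing steering algorithms. As an illustration only, and not the paper's actual formulation, the sketch below shows one way such a benchmark score could be assembled: the metric names (num_collisions, time_taken, effort), the weights, and the test-case names are all hypothetical placeholders, with user-adjustable weights standing in for the customizability the authors mention.

```python
# Illustrative sketch only: hypothetical metrics and weights,
# not the scoring method defined in the SteerBench paper.
from dataclasses import dataclass


@dataclass
class TestCaseResult:
    """Metrics recorded for one steering test case (hypothetical fields)."""
    name: str
    num_collisions: int   # collisions with other agents or obstacles
    time_taken: float     # seconds until all agents reach their goals
    effort: float         # accumulated acceleration/energy proxy


def benchmark_score(results, w_collisions=50.0, w_time=1.0, w_effort=0.1):
    """Combine per-test-case metrics into one weighted score (lower is better).

    The weights are user-adjustable placeholders, echoing the framework's
    stated customizability; the defaults here are arbitrary.
    """
    total = 0.0
    for r in results:
        total += (w_collisions * r.num_collisions
                  + w_time * r.time_taken
                  + w_effort * r.effort)
    return total / len(results)


if __name__ == "__main__":
    # Compare two hypothetical steering algorithms on the same test cases.
    algo_a = [
        TestCaseResult("oncoming-agents", num_collisions=0, time_taken=12.4, effort=310.0),
        TestCaseResult("doorway-crossing", num_collisions=2, time_taken=18.9, effort=455.0),
    ]
    algo_b = [
        TestCaseResult("oncoming-agents", num_collisions=1, time_taken=10.1, effort=290.0),
        TestCaseResult("doorway-crossing", num_collisions=0, time_taken=21.3, effort=500.0),
    ]
    print("algorithm A:", benchmark_score(algo_a))
    print("algorithm B:", benchmark_score(algo_b))
```

A single weighted sum is only one possible aggregation; per-metric breakdowns, as the paper emphasizes, are what give insight into where an algorithm fails.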

Original language: English (US)
Pages (from-to): 533-548
Number of pages: 16
Journal: Computer Animation and Virtual Worlds
Volume: 20
Issue number: 5-6
DOIs
State: Published - Sep 2009
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design

Keywords

  • Benchmarking
  • Crowd behaviors
  • Pedestrians
  • Steering
