Abstract
We present a framework for recognizing isolated and continuous American Sign Language (ASL) sentences from three-dimensional data. The data are obtained using physics-based three-dimensional tracking methods and then presented as input to Hidden Markov Models (HMMs) for recognition. To improve recognition performance, we model context-dependent HMMs and present a novel method of coupling three-dimensional computer vision methods and HMMs by temporally segmenting the data stream with vision methods. We then use the geometric properties of the segments to constrain the HMM framework for recognition. We show in experiments with a 53-sign vocabulary that three-dimensional features outperform two-dimensional features in recognition performance. Furthermore, we demonstrate that context-dependent modeling and the coupling of vision methods and HMMs improve the accuracy of continuous ASL recognition.
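For readers unfamiliar with HMM-based sign recognition, the sketch below illustrates the general idea of whole-sign classification over three-dimensional feature sequences: one Gaussian HMM is trained per sign, and an unseen sequence is assigned to the sign whose model yields the highest log-likelihood. This is a minimal illustration, not the system described in the abstract; it omits context-dependent modeling and the vision-based temporal segmentation, and it uses the hmmlearn library with synthetic trajectories in place of tracked 3D hand data. All sign names and feature dimensions here are hypothetical.

```python
# Minimal sketch: one Gaussian HMM per sign, classification by max log-likelihood.
# Illustrative only -- the paper's system additionally uses context-dependent HMMs
# and vision-based temporal segmentation, neither of which is modeled here.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def synthetic_sign(mean, n_frames=40, dim=3):
    """Stand-in for a 3D hand trajectory (x, y, z per frame)."""
    return mean + 0.1 * rng.standard_normal((n_frames, dim))

# Two hypothetical signs with different spatial characteristics.
train = {
    "SIGN_A": [synthetic_sign(np.array([0.0, 0.0, 0.0])) for _ in range(5)],
    "SIGN_B": [synthetic_sign(np.array([1.0, 0.5, -0.5])) for _ in range(5)],
}

# Train one HMM per sign on its concatenated training sequences.
models = {}
for sign, seqs in train.items():
    X = np.concatenate(seqs)                 # shape: (total_frames, 3)
    lengths = [len(s) for s in seqs]         # per-sequence frame counts
    m = GaussianHMM(n_components=4, covariance_type="diag",
                    n_iter=50, random_state=0)
    m.fit(X, lengths)
    models[sign] = m

# Classify an unseen sequence by the model with the highest log-likelihood.
test = synthetic_sign(np.array([1.0, 0.5, -0.5]))
scores = {sign: m.score(test) for sign, m in models.items()}
print(max(scores, key=scores.get))           # expected: SIGN_B
```

In the continuous-recognition setting summarized in the abstract, per-sign models of this kind would be chained over whole sentences, with the geometric properties of vision-derived temporal segments used to constrain the recognition network.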
Original language | English (US) |
---|---|
Pages | 363-369 |
Number of pages | 7 |
State | Published - 1998 |
Externally published | Yes |
Event | Proceedings of the 1998 IEEE 6th International Conference on Computer Vision - Bombay, India. Duration: Jan 4 1998 → Jan 7 1998 |
Other
Other | Proceedings of the 1998 IEEE 6th International Conference on Computer Vision |
---|---|
City | Bombay, India |
Period | 1/4/98 → 1/7/98 |
All Science Journal Classification (ASJC) codes
- Software
- Computer Vision and Pattern Recognition