Project Details
Description
The sensory modalities of sight, sound, and touch are employed in combination to explore a new dimension in human-machine communication. These expanded interface capabilities build on technologies established in related work, and are implemented at multiple user stations in a networked Distributed System for Collaborative Information Processing and Learning (DISCIPLE). The system provides object-oriented groupware running over Internet protocols as well as over an Asynchronous Transfer Mode (ATM) intracampus network. Intelligent agents at each user location fuse the multimodal inputs and decide what actions to take in the collaborative environment. Established technologies for region-of-interest sensing, image understanding, face recognition, and visual gesture recognition are applied for interaction in the sight domain. Automatic speech and speaker recognition, speech synthesis, and distant-talking autodirective microphone arrays support the sound dimension. Gesture detection, position sensing, force-feedback gloves, and multitasking tactile software are employed for touch communication. The DISCIPLE environment of multiple collaborating users provides a comprehensive test bed for quantifying human-interface designs and for measuring the synergies gained from combining simultaneous multimodal communication. Results of this research will significantly broaden the utility of distributed, networked computing for human users.
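The abstract does not say how the agents combine modalities; the minimal sketch below illustrates one plausible reading of the fusion step, in which each modality's recognizer votes for a candidate action with a confidence score and the agent picks the best-supported action. All names and scores here are hypothetical, not part of DISCIPLE.

```python
from dataclasses import dataclass

# Hypothetical event type: one recognizer output per modality.
@dataclass
class ModalityEvent:
    modality: str      # "sight", "sound", or "touch"
    label: str         # candidate action, e.g. "select" or "scroll"
    confidence: float  # recognizer confidence in [0, 1]

def fuse(events):
    """Late fusion: sum confidences per candidate action label
    and return the best-supported action (None if no input)."""
    scores = {}
    for e in events:
        scores[e.label] = scores.get(e.label, 0.0) + e.confidence
    if not scores:
        return None
    return max(scores, key=scores.get)

# Example: a pointing gesture and a spoken command agree on "select",
# outweighing a weaker tactile "scroll" input.
events = [
    ModalityEvent("sight", "select", 0.7),   # visual gesture recognizer
    ModalityEvent("sound", "select", 0.8),   # speech recognizer
    ModalityEvent("touch", "scroll", 0.4),   # tactile input
]
print(fuse(events))  # -> "select"
```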
Status: Finished
Effective start/end date: 3/1/97 → 8/31/00
Funding
- National Science Foundation: $778,440.00