This multidisciplinary project undertakes a program of research in natural language generation (NLG), the subfield of artificial intelligence that aims to construct intuitive, accessible utterances to communicate the data, knowledge and reasoning of computational systems. NLG capabilities have an important role in facilitating new, more natural interaction with computers, both in current applications such as mobile information access and in emerging ones such as personal assistants and human-robot interaction. NLG systems remain inflexible and difficult to build, however. This research aims to address this problem by developing techniques to train NLG systems to match human language use. The project is a close collaboration that links psychological experiments, designed to uncover the strategies human speakers use, to computational experiments, which apply these strategies in NLG systems using machine learning. The theoretical framework at the center of this project is Bayesian cognitive modeling, a probabilistic approach that explains human information processing in terms of decision making under uncertainty. Applied to language use, Bayesian cognitive modeling involves estimating the communicative goals speakers adopt, the knowledge and meanings available to speakers, and the choices speakers make to express needed information in suitable linguistic terms. Such knowledge and strategies can then be used to drive NLG systems. The specific research of the project investigates three key domains for applying NLG to construct messages describing real-world situations: making lexical choices, constructing complex linguistic structures compositionally, and fulfilling multiple overlapping communicative goals.
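To make the modeling idea concrete, the reasoning sketched above can be illustrated with a minimal toy example in the style of rational-speaker (Bayesian pragmatics) models: a speaker chooses among candidate utterances by simulating how a literal listener would update beliefs about the world state. The lexicon, states, and utterance names below are illustrative assumptions, not part of the project's actual models.

```python
# Toy Bayesian speaker sketch (illustrative assumptions only):
# which utterances are literally true of which world states.
LEXICON = {
    "glasses": {"face_with_glasses", "face_with_glasses_and_hat"},
    "hat": {"face_with_hat", "face_with_glasses_and_hat"},
}
STATES = ["face_with_glasses", "face_with_hat", "face_with_glasses_and_hat"]

def literal_listener(utterance):
    """Uniform posterior over the states consistent with the utterance."""
    consistent = [s for s in STATES if s in LEXICON[utterance]]
    return {s: 1.0 / len(consistent) for s in consistent}

def speaker(state):
    """Distribution over utterances, proportional to how strongly each
    one leads the literal listener to the intended state."""
    scores = {u: literal_listener(u).get(state, 0.0) for u in LEXICON}
    total = sum(scores.values())
    return {u: p / total for u, p in scores.items()} if total else scores
```

Here a speaker referring to a face with only glasses says "glasses" with certainty, because "hat" would mislead the listener; for a face with both attributes the two utterances are equally informative, so the model splits its choice between them. Fitting richer versions of such models to corpus data is the kind of computational experiment the project describes.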
The project explores each domain through interrelated activities carried out by an interdisciplinary team of computer scientists and psychologists: to formalize speaker choices using a range of Bayesian cognitive models; to fit the models to visually-grounded language corpora using machine learning; to evaluate the empirical scope of goal-directed reasoning by comparing the learned models both to attested human choices and to baseline learned models; and to assess how well the models match human comprehension of linguistic meaning. The intellectual merits of the project lie in bridging the gap between traditional goal-directed rational models of human behavior and state-of-the-art computational methods that instantiate templates or reproduce likely patterns. In addition to the societal impacts of the technology, the broader impacts of the project include the construction of data resources, models and modeling tools that will be distributed to facilitate further research, and contributions to ongoing initiatives for education in cognitive science at Rutgers.
Effective start/end date: 9/1/15 → 8/31/18
- National Science Foundation (NSF)