Our research on task-oriented dialogue has focused on conversational mentors. Such interactive systems attempt to aid a human user who is carrying out some complex goal-directed task. To this end, the systems draw on knowledge about the task's elements and structure, as well as knowledge about the conventions of dialogue. One common application of such mentors is help desks, which often use a form of conversational case-based reasoning (Aha et al., 2006). Another example is Allen et al.'s (2004) TRIPS system, which helped users respond to emergency calls.
Conversational mentors must address a number of issues. Behavior is goal-directed and involves communicative actions about joint activity. Participants must establish common ground about the situation, including each other's beliefs and goals. However, many beliefs and goals are not stated explicitly but instead must be inferred by the participating agents. The overall process alternates between drawing inferences and executing goal-directed activities. This suggests building on Langley's (2017) social cognition hypothesis: intelligence depends on the ability to represent and reason about the mental states of other agents. Human communication requires both understanding and altering others' beliefs and goals, which in turn requires high-level representations and mechanisms to reason over them.
We have developed an architecture for conversational mentors that integrates generic knowledge about dialogues with specific knowledge about a domain to both understand utterances and respond to them. Domain knowledge includes conceptual rules that describe situational patterns and skills that associate conditions and effects with subskills or actions. Both concepts and skills are organized into hierarchies, with complex structures defined in terms of simpler ones. Dialogue expertise includes knowledge about primitive speech acts and relations among them (e.g., questions are followed by answers). The architecture operates in cycles, during which it observes new speech acts (including its own), uses inference to update beliefs and goals in working memory, and executes skills to produce utterances based on this memory state.
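To make the cycle concrete, the sketch below models one pass of observe, infer, and execute in Python. The class and function names (`Mentor`, `SpeechAct`, `question_obliges_answer`) are hypothetical illustrations, not the architecture's actual interfaces; the inference rules stand in for dialogue conventions such as "questions are followed by answers."

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpeechAct:
    speaker: str
    act: str        # primitive speech-act type, e.g. "question" or "answer"
    content: str

class Mentor:
    """Minimal sketch of the observe-infer-execute cycle (assumed structure)."""

    def __init__(self, infer_rules, skills):
        self.beliefs = set()            # working memory: observed acts and facts
        self.goals = set()              # communicative goals
        self.infer_rules = infer_rules  # belief -> goal, encoding dialogue conventions
        self.skills = skills            # goal type -> utterance-producing skill

    def cycle(self, observed):
        # 1. Observe new speech acts, including the system's own.
        for act in observed:
            self.beliefs.add(act)
        # 2. Infer: update goals from beliefs via dialogue knowledge
        #    (e.g., a question creates the goal of answering it).
        for belief in list(self.beliefs):
            for rule in self.infer_rules:
                goal = rule(belief)
                if goal is not None:
                    self.goals.add(goal)
        # 3. Execute: apply skills to active goals, producing utterances.
        produced = []
        for goal in list(self.goals):
            skill = self.skills.get(goal[0])
            if skill is not None:
                utterance = skill(goal)
                produced.append(utterance)
                self.goals.discard(goal)
                self.beliefs.add(utterance)  # the system observes its own acts
        return produced

# Hypothetical dialogue convention: a question obliges an answer.
def question_obliges_answer(belief):
    if isinstance(belief, SpeechAct) and belief.act == "question":
        return ("answer", belief.content)
    return None

skills = {"answer": lambda goal: SpeechAct(
    "system", "answer", f"Instruction regarding: {goal[1]}")}
mentor = Mentor([question_obliges_answer], skills)
replies = mentor.cycle([SpeechAct("medic", "question", "severe leg bleeding")])
```

In this sketch, a question from the medic enters working memory, the inference rule posts a goal to answer it, and the matching skill emits an answering speech act, which the mentor then records as one of its own observed acts on the next cycle.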
Our main testbed has involved scenarios in which a human medic helps injured teammates with system assistance. The medic has limited training but can provide situational information and affect the environment. The system, in contrast, has medical expertise but cannot sense or alter the environment directly; it can only offer instructions. The medic and system collaborate to achieve their shared goal of helping the injured person. The system uses a Web interface similar to a messaging application, which lets us address high-level aspects of dialogue without tackling other important challenges, such as speech recognition.
This work was funded by the Office of Naval Research through Grants N00014-09-1-1029 and N00014-10-1-0487. Alfredo Gabaldon, Ben Meadows, and Richard Heald contributed substantially to the effort. A gift from Ericsson provided additional support for research done jointly with Ted Selker.
© 1997 Institute for the Study of Learning and Expertise. All rights reserved.