This is the first of two videos for Prototype 1. The guide's task is to provide navigation instructions along predetermined routes, as well as personalised information about specific locations in the castle of Monemvasia.
When the system loads, the user is given a choice between three information scenarios: Architecture, History and Biographical. The user can also customise the appearance of the agent and other system settings, although this is not shown in the video. During a presentation the agent can use information from the Face-Detection module and react if, for example, the user is standing too far from the camera. Finally, notice the use of an FSM (Finite State Machine) in the construction of the dialogues. With a proper authoring tool such dialogues are very easy to build and can cover a wide range of dialogue phenomena.
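The FSM idea above can be sketched in a few lines. This is a minimal illustration, not the prototype's actual implementation: the state names, events, and utterances are invented for the example.

```python
# Minimal FSM sketch for agent dialogues: states are dialogue steps,
# transitions are keyed by user input or sensor events.
# All state/event names here are illustrative, not from the prototype.

class DialogueFSM:
    def __init__(self, initial):
        self.state = initial
        self.transitions = {}  # (state, event) -> (next_state, utterance)

    def add(self, state, event, next_state, utterance):
        self.transitions[(state, event)] = (next_state, utterance)

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            return None  # unhandled event: stay in the current state
        self.state, utterance = self.transitions[key]
        return utterance

# A toy scenario-selection dialogue, loosely following the video:
fsm = DialogueFSM("start")
fsm.add("start", "greet", "choose",
        "Welcome! Architecture, History or Biographical?")
fsm.add("choose", "architecture", "present",
        "Let us begin with the castle walls...")
fsm.add("present", "user_too_far", "present",
        "Please step a little closer to the camera.")

print(fsm.fire("greet"))         # prints the scenario prompt
print(fsm.fire("architecture"))  # moves into the presentation state
print(fsm.fire("user_too_far"))  # Face-Detection event handled in-state
```

An authoring tool would essentially generate such (state, event, next state, utterance) tables from a graphical dialogue graph, which is why these dialogues are quick to construct.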
Another idea I experimented with for a while was using emotional responses to guide how a presentation about a location evolves. For example, if the user is bored by the provided information, the agent can either offer alternative information or speed up the pace. However, it is impossible to implement such an approach in existing script-based systems. A knowledge base is probably needed to dynamically create the contents of each presentation, but how the agent augments the story with non-verbal behaviour is an open question.
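The pacing idea could look roughly like the sketch below. Everything here is hypothetical: the boredom score (say, estimated from a face-analysis module) and the thresholds are illustrative assumptions, not part of the prototype.

```python
# Illustrative sketch of emotion-driven pacing: pick the agent's next
# move from a boredom estimate in [0, 1]. Thresholds and action names
# are hypothetical, not taken from the actual system.

def next_action(boredom, alternatives_left):
    """Return the agent's next presentation move."""
    if boredom < 0.4:
        return "continue"            # user engaged: keep the current pace
    if alternatives_left > 0:
        return "offer_alternative"   # bored, but other content exists
    return "speed_up"                # nothing else to offer: shorten it

print(next_action(0.2, 3))  # continue
print(next_action(0.7, 1))  # offer_alternative
print(next_action(0.9, 0))  # speed_up
```

A script-based system cannot do this because the presentation order is fixed at authoring time; a knowledge base would let the agent reselect content at each step instead.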