This is a demo of the original version of Prototype 4. The system was initially built to compare a natural method of communicating with the guide against a menu of predefined phrases. It also uses speech recognition (not shown in this demo) with dynamic grammars. However, because evaluating the system took more than two hours per participant, we decided to drop it and replace it with a simpler system (shown in Screenshot 1).
This demo doesn’t use AIML; instead, it accesses the VPF service (http://vpf.cise.ufl.edu/VirtualPeopleFactory/) through an API provided by its creator, Brent Rossen. VPF takes an approach similar to AIML, but the patterns don’t have to be said verbatim to match. This means that if the system has a trigger (i.e., a question) that is even remotely similar to what you said, it will match it and return its associated speech (i.e., answer). This is one step closer to language understanding without any linguistic processing, but of course it has its limitations. I will discuss these limitations in another post.
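To give a feel for approximate trigger matching, here is a minimal sketch in Python. VPF's actual matching algorithm and data are not described in this post, so everything below (the `TRIGGERS` table, the similarity measure, the threshold) is an illustrative assumption, not VPF's implementation:

```python
from difflib import SequenceMatcher

# Hypothetical trigger -> speech table; a real guide would have many
# more entries pulled from the VPF service.
TRIGGERS = {
    "where is the exit": "The exit is down the hall to your left.",
    "what is your name": "I am the virtual guide.",
    "how do i get to the lobby": "Take the stairs and turn right.",
}

def best_answer(user_input, threshold=0.6):
    """Return the speech whose trigger is most similar to the input,
    or None if nothing clears the similarity threshold."""
    best_score, best_speech = 0.0, None
    for trigger, speech in TRIGGERS.items():
        # Simple character-level similarity; VPF presumably uses
        # something more sophisticated.
        score = SequenceMatcher(None, user_input.lower(), trigger).ratio()
        if score > best_score:
            best_score, best_speech = score, speech
    return best_speech if best_score >= threshold else None

print(best_answer("where's the exit?"))
```

A paraphrase like "where's the exit?" still scores high against the "where is the exit" trigger, so its answer is returned, whereas an off-topic input falls below the threshold and yields no match.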
Screenshot 1: The simpler system that replaced Prototype 4.