Tuesday, August 17, 2010

Cognitive Walkthrough - ICT Virtual Human Toolkit

As part of the MGUIDE project, I had to complete a cognitive walkthrough of the ICT Virtual Human Toolkit. This toolkit is a collection of state-of-the-art technologies including speech recognition, automatic gesture generation, text-to-speech synthesis, 3D interfaces, and dialogue model creation, to name but a few. Current users of the toolkit include the CSI/UCB Vision Group at UC Berkeley, the Component Analysis Lab at Carnegie Mellon University, the Affective Computing Research group at the MIT Media Lab, and Microsoft Research.

[Image: An ensemble of some of the characters created with the toolkit. Source: University of Southern California Institute for Creative Technologies]

The main idea behind the evaluation was to provide usability insights on what is perhaps the most advanced platform for multimodal application creation available today. The process was completed successfully with two experts, and it revealed a number of insights that were documented carefully. These insights will feed into the design of the Talos Toolkit: among the MGUIDE deliverables was an authoring toolkit to aid the rapid prototyping of multimodal applications with virtual humans. Talos currently exists only as an architecture (see here), but the walkthrough of the ICT toolkit provided valuable insights that should guide its actual design. The MGUIDE project itself has now been completed, with the development of Talos set among the future goals of the project.

I applied the cognitive walkthrough exactly as I would apply it in any other project. I performed a task analysis first (i.e., I established the tasks I wanted to perform with the toolkit and broke them down into individual actions) and then asked the following questions at each step (a small sketch of how such a session could be recorded follows the list):

1) Will the user realistically be trying to do this action?

2) Is the control for the action visible?

3) Is there a strong link between the control and the action?

4) Is feedback appropriate?   
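
To make the bookkeeping of such a session concrete, here is a minimal sketch in Python of how one could record tasks, action steps, and the answers to the four questions. All names and the example task are hypothetical; this is not part of MGUIDE, Talos, or the ICT toolkit.

```python
# Hypothetical walkthrough bookkeeping: each task is broken into action
# steps, and each of the four questions is answered (with a note) per step.
from dataclasses import dataclass, field
from typing import List, Tuple

QUESTIONS = [
    "Will the user realistically be trying to do this action?",
    "Is the control for the action visible?",
    "Is there a strong link between the control and the action?",
    "Is feedback appropriate?",
]

@dataclass
class ActionStep:
    description: str
    # One (yes/no, note) pair per question, in QUESTIONS order.
    answers: List[Tuple[bool, str]] = field(default_factory=list)

    def failures(self) -> List[str]:
        """Return the questions this step failed, i.e. candidate usability problems."""
        return [q for q, (ok, _) in zip(QUESTIONS, self.answers) if not ok]

@dataclass
class Task:
    name: str
    steps: List[ActionStep]

# An invented example task, as it might look after a session.
task = Task(
    name="Create a new dialogue model",
    steps=[
        ActionStep("Open the dialogue editor",
                   answers=[(True, ""), (True, ""), (True, ""), (True, "")]),
        ActionStep("Add an utterance node",
                   answers=[(True, ""),
                            (False, "Control hidden in a context menu"),
                            (True, ""),
                            (False, "No confirmation that the node was added")]),
    ],
)

for step in task.steps:
    for problem in step.failures():
        print(f"[{task.name}] '{step.description}': {problem}")
```

Running the sketch prints the failed questions per action step, which is essentially the raw list of candidate usability problems a walkthrough produces for the evaluators to document.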
