There are a number of quantitative techniques that can be used in user research on avatar-based interfaces. Apart from the “usual” techniques for gathering subjective impressions (questionnaires, tests, etc.) and performance data, I also considered a more objective technique based on emotion recognition. In particular, I thought of evaluating the accessibility of the content presented by my systems through emotion expression recognition. The main hypothesis is that the perceived accessibility of the systems' content is reflected in the user's emotional expressions.
If you think about it for a while, the human face is the strongest indicator of our cognitive state and hence of how we perceive a stimulus (information content, an image, etc.). Emotion measures (both quantitative and qualitative) can provide data that augment any traditional technique for accessibility evaluation (e.g., questionnaires, retention tests, etc.). For example, with careful logging you can see which part of your content is more confusing, which part requires the users to think more intensively, and so on (see the sketch after the list below). In addition to the qualitative data, the numeric intensities can be used for some very interesting statistical comparisons. Manual coding of the video streams is no longer necessary, as there are a number of tools that allow automated analysis of facial expressions. To my knowledge, the following tools are currently fully functional:
2) SHORE
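To give an idea of the kind of logging analysis I have in mind, here is a minimal sketch. It assumes (purely for illustration, this is not the output format of any particular tool) that the expression-analysis software exports a CSV with one row per video frame containing a timestamp and a 0–1 intensity per expression, and that the content of the avatar presentation has been split into hypothetical segments:

```python
# A minimal sketch: average a logged expression intensity per content segment.
# The CSV layout, column names, and segment boundaries are assumptions made
# for illustration, not the format of any specific expression-analysis tool.
import csv
from collections import defaultdict

# Hypothetical segments of the avatar presentation: (start_s, end_s, label).
SEGMENTS = [
    (0.0, 30.0, "introduction"),
    (30.0, 90.0, "navigation instructions"),
    (90.0, 150.0, "exhibit description"),
]

def segment_of(t):
    """Return the label of the content segment a timestamp falls into."""
    for start, end, label in SEGMENTS:
        if start <= t < end:
            return label
    return None

def mean_intensity_per_segment(csv_path, emotion="confusion"):
    """Average the logged intensity of one expression within each segment."""
    sums, counts = defaultdict(float), defaultdict(int)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            label = segment_of(float(row["timestamp"]))
            if label is not None:
                sums[label] += float(row[emotion])
                counts[label] += 1
    return {label: sums[label] / counts[label] for label in counts}

if __name__ == "__main__":
    # e.g. {'introduction': 0.12, 'navigation instructions': 0.41, ...}
    print(mean_intensity_per_segment("expression_log.csv"))
```

The per-segment averages are exactly the numeric intensities mentioned above: they point to the confusing parts of the content and can feed directly into statistical comparisons across participants or system versions.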
The idea is fully developed, and I am planning to release the paper very soon. Finally, if we combine this technique with eye tracking, we can reveal even more user insights about avatar-based interfaces. We could try, for instance, to identify what aspect of the interface causes the user to have a particular facial expression (positive or negative). For example, one of the participants in my experiments mentioned that she couldn't pay attention to the information provided by the system because she was looking at the guide's hair waving. To such a stimulus humans usually react with a calm expression. This comment is just an indication of the user insights that can be revealed if these techniques are successfully combined.
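As a rough sketch of how the two streams could be combined, the snippet below looks up which area of interest (AOI) the eye tracker reported whenever an expression intensity crosses a threshold. Both input formats, the threshold, and the example values are assumptions made for illustration, not the output of any particular tracker or expression tool:

```python
# A hedged sketch of combining expression data with eye-tracking data:
# for each above-threshold expression sample, find the AOI the user was
# fixating at that moment. Input formats here are illustrative assumptions.
import bisect

def aoi_at(fixations, t):
    """fixations: sorted list of (timestamp, aoi_label).
    Return the AOI of the most recent fixation at or before time t."""
    times = [ts for ts, _ in fixations]
    i = bisect.bisect_right(times, t) - 1
    return fixations[i][1] if i >= 0 else None

def expression_triggers(expressions, fixations, threshold=0.6):
    """expressions: list of (timestamp, label, intensity).
    Return (timestamp, expression, aoi) for every above-threshold sample,
    i.e. what the user was looking at when the expression occurred."""
    return [
        (t, label, aoi_at(fixations, t))
        for t, label, intensity in expressions
        if intensity >= threshold
    ]

if __name__ == "__main__":
    fixations = [(0.0, "avatar face"), (2.5, "avatar hair"), (6.0, "text panel")]
    expressions = [(1.0, "negative", 0.2), (3.1, "calm", 0.8), (6.4, "negative", 0.7)]
    # -> [(3.1, 'calm', 'avatar hair'), (6.4, 'negative', 'text panel')]
    print(expression_triggers(expressions, fixations))
```

In the hair-waving example above, such a join would pair the participant's calm expression with fixations on the guide's hair rather than on the information being presented, which is exactly the kind of insight the combined technique is meant to surface.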