Monday, July 26, 2010

MGUIDE Development Process

I thought it would be a good idea to explain the methodology followed in developing the MGUIDE prototypes. Because the stakeholders involved were focused mainly on the research outcomes, the development methodology itself was of little concern to them. Creating interpersonal simulations like those found in real life is a process mostly compatible with the Scrum development methodology (shown below). I am planning to write a paper on the topic, so I will not say much in this post.

[Figure: the Scrum process. Source: Wikipedia]

User requirements can be gathered in a variety of ways. I followed a combined literature/user-evaluation approach. One of my earliest prototypes was developed using guidelines found in the literature. The prototype was then evaluated with actual users, and a new set of requirements was derived from the results. These requirements are what Scrum refers to as the "product backlog". In each sprint (in my case usually 1-3 months), a subset of the requirements was implemented and tested, and then replaced by a new subset.

Simulating interpersonal scenarios gives you the freedom to augment the product backlog with new requirements quite easily. Using research methods such as direct observation and note-taking, you can record the interactions found in the scenarios you want to simulate. My scenario involved a guide agent, so I attended a number of tours where I made several interesting observations. Most of these findings were implemented in the MGUIDE prototypes, but others still remain in the product backlog. These requirements, together with the work done in MGUIDE, are enough to inform artificial-intelligence models of behaviour for creating completely automated systems.
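To make the backlog-and-sprint cycle concrete, here is a minimal Python sketch of how such a backlog might be represented. This is purely illustrative: the Requirement fields, the example items, and the sprint-selection rule are my assumptions for the sketch, not the actual MGUIDE artefacts.

from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One product-backlog item, e.g. a behaviour observed on a real tour."""
    description: str
    source: str           # where it came from: "literature", "user evaluation", "tour observation"
    priority: int         # lower number = more important
    implemented: bool = False

@dataclass
class ProductBacklog:
    items: list[Requirement] = field(default_factory=list)

    def add(self, req: Requirement) -> None:
        """New observations can be appended to the backlog at any time."""
        self.items.append(req)

    def next_sprint(self, capacity: int) -> list[Requirement]:
        """Pick the highest-priority open items for the next sprint."""
        open_items = [r for r in self.items if not r.implemented]
        open_items.sort(key=lambda r: r.priority)
        return open_items[:capacity]

# Hypothetical usage: seed the backlog from the two sources described above.
backlog = ProductBacklog()
backlog.add(Requirement("Agent points at exhibits while speaking", "tour observation", 1))
backlog.add(Requirement("Agent keeps eye contact during pauses", "literature", 2))

for req in backlog.next_sprint(capacity=1):
    print("This sprint:", req.description)

In this sketch, items that do not fit into the current sprint simply stay open, which mirrors how some of my tour observations remain in the backlog today.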

This iterative process was repeated prior to the actual user-research stage, where the full set-up of the MGUIDE evaluation was tested. A small group of people tried to find bugs in the software, problems with the data-gathering tools, and other issues. Problems were normally corrected on the spot and the process was repeated. Once I had ensured that all my instruments were free of problems, the official evaluation stage of the prototypes began.

Closing this post, I must highlight the need for future research on gathering data about the different situations in which interpersonal scenarios occur. In reality, different situations produce different reactions in people, and this should be researched further. Only through detailed empirical experimentation can we ensure that future avatar-based systems deliver superior user experiences.

 
