Monday, August 30, 2010

iPad – Multi-touch Web (Part B)


 Research

There is already a usability study of the iPad from the Nielsen Norman Group here; a summary of the study can be found here. As Nielsen admits, the study is preliminary, but the resulting usability insights serve as a good foundation for the design of the myriad applications that will follow the device's release on the global market.

The study is very generic – it tested several applications and web sites running on the iPad. Since every digital project requires a testing context tailored to its own range of parameters, more focused studies are necessary. Again, existing research methods and techniques must be adapted to take the multi-touch style of interaction into consideration.

Take as an example m-commerce, which according to many is the next “big” thing in the mobile world. In my opinion, multi-touch web pages, if done correctly, have great potential to make our transactions even easier than on our desktop computers. How would you design an m-commerce web site to achieve such a goal? The guidelines discussed in Part A of this post are a good place to start when constructing some initial prototypes. However, as these guidelines are far from established best practice, a few days of iterative participatory design workshops would be necessary to agree on a final prototype.

Gathering user insights after the web site is released currently seems challenging. The usability studies I have read so far use lab-based testing with one-to-one sessions, and testing with real users under lab conditions is always expensive. Cheaper techniques, like remote usability testing, currently seem very hard to implement. And do existing tools for split or multivariate testing (e.g., Google Website Optimizer) even work on a multi-touch interface? These tools are optimized for a mouse-based environment, and I am not convinced they can be used effectively on multi-touch. For instance, does Google Website Optimizer register multi-touch gestures as well as clicks when measuring the success of a web site? (A sketch of the instrumentation issue follows.) Nevertheless, it will be very interesting to see how these research techniques are adapted to serve the new environment in the years to come.
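To make the instrumentation worry concrete, here is a minimal sketch (with a hypothetical log format, not how Google Website Optimizer actually works) of what a touch-aware split test would need to record: taps counted alongside clicks before computing each variant's goal rate. A mouse-era tool that counted only the "click" rows would systematically under-report success on a multi-touch device.

```python
# A minimal sketch of touch-aware split-test analysis.
# The event log format is hypothetical.
from collections import Counter

event_log = [
    # (design variant shown, interaction recorded on the goal element)
    ("A", "click"), ("A", "touchend"), ("A", "swipe"),
    ("B", "touchend"), ("B", "touchend"), ("B", "click"),
]

GOAL_EVENTS = {"click", "touchend"}  # count taps as conversions, not just clicks

goals = Counter(v for v, e in event_log if e in GOAL_EVENTS)
totals = Counter(v for v, _ in event_log)
for variant in sorted(totals):
    print(f"Variant {variant}: {goals[variant]}/{totals[variant]} goal interactions")

# Counting only "click" rows here would credit each variant with a single
# conversion and hide variant B's advantage among touch users.
```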


Sunday, August 29, 2010

iPad – Multi-touch Web (Part A)

Image via Wikipedia: iPad with dock and wireless keyboard.

I finally had the chance to test the new iPad. I spent some time trying to figure out whether it is really worth spending 400 pounds on this device. Here are my findings:

1) The device is excellent for gaming. It is perhaps one of the best gaming devices I have ever used. The integrated gyroscope means that there are no annoying arrow keys to use while playing games (all you have to do is turn the device).

2) For e-reading, although the screen is very clear, the absence of an integrated back stand (like that on Samsung's UMPC devices) makes the device very hard to hold for long periods.

3) Of all the applications I tested, I found only one of particular interest: an application that shows you the star constellations based on your geographical position.

4) The device has no Flash support, which means that a large part of WWW content is out of reach. Advocates of the device say that as the web progressively moves to the HTML5 standard, this will soon stop being an issue. Advocates of Flash say that Flash cannot die, as it is an integral part of the web. Only time will tell who is right and who is wrong. For now, all I can say is that if I buy the iPad, my favourite episodes of “Eureka” on Hulu are out of reach for good.

5) The device is multi-touch, which means it is very hard to use with mouse-oriented web pages. As the majority of mobile platforms are now moving towards multi-touch, what does this mean for designers, information architects, researchers and other stakeholders? Below I attempt to present some of the possible implications.

Design

  • Size of Finger

Web pages on touch-sensitive devices are not navigated with a mouse; they are controlled with human fingers, which are much fatter than a typical mouse pointer. No matter what Apple says about an “ultimate browsing experience” on the iPad, clicking on small text links with your finger is painful and sometimes practically impossible. As touch-sensitive devices become more popular, this could mean the end of traditional text links and their replacement by big touchable buttons.

  • Secondary Functions

The “fat finger” problem discussed above, together with the limited screen real estate, also means that we cannot cram thousands of features (or ads) into a tight frame as we would on a desktop web page. The design of web pages should focus on the essential elements and avoid wasting user attention on processing secondary functions.

  • Hover effects and Right Clicks

Without a mouse-based interface, you can't use any mouse-over effects. Elements we are so used to interacting with on mouse-driven interfaces, like menus that pop up when you hover your mouse over a link, or right clicks, do not exist. Apple has a number of replacements in place, like holding your finger down on a link to get a pop-up menu, but these only make clicking itself more complex.

  • Horizontal vs. Vertical Styling

Due to the ability to easily switch between vertical and horizontal orientations, web sites will have to automatically adapt their styling to render properly in both. Seamless presentation in both landscape and portrait mode is one of the most fundamental guidelines when it comes to designing for the iPad/iPhone.

  • 3D Objects

Apple is trying to push designs that imitate tangible things – real-world interfaces that are easy to understand and familiar in their use. If you create a magazine application, make it look like a real magazine; if you make a word processor, make it look like a typewriter. Could the iPad be a significant milestone towards a more three-dimensional WWW? Some 3D web applications, like Second Life, could certainly benefit from the mouse-less interface, as touching and tilting make it much easier to interact with 3D worlds than mousing and keyboarding. In mainstream websites, 3D elements (e.g., material surfaces and SFX) will probably be used widely as an “invitation to touch”, but never as a basic metaphor.

Information Architecture

Multi-touch presents a unique set of challenges for information architects. The limited screen size and the size of the human fingertip mean that tasks must be completed in a minimal number of actions (it is tiresome to swipe and touch too often), pushing the IA towards dead-simple architectures. Information aggregation will play a very important role in creating architectures that minimize input and maximize output.

Under the above rule, several more “human-like” modalities of communication, such as speech recognition, text-to-speech processing, emotion recognition and natural language processing, are likely to find their place in the multi-touch web. Multi-touch seems to me the perfect vehicle for moving away from the dominance of GUI interfaces and towards a more natural way of interacting with computers.

Continues in the next post


Tuesday, August 17, 2010

Cognitive Walkthrough - ICT Virtual Human Toolkit

As part of the MGUIDE project, I had to complete a cognitive walkthrough of the ICT Virtual Human Toolkit. The toolkit is a collection of state-of-the-art technologies including speech recognition, automatic gesture generation, text-to-speech synthesis, 3D interfaces and dialogue model creation, to name but a few. Current users of the toolkit include the CSI/UCB Vision Group at UC Berkeley, the Component Analysis Lab at Carnegie Mellon University, the Affective Computing Research group at the MIT Media Lab, and Microsoft Research.

Image: an ensemble of some of the characters created with the toolkit. Source: University of Southern California Institute for Creative Technologies.

The main idea behind the evaluation was to provide usability insights on what is perhaps the most advanced platform for multimodal application creation available today. The process was completed successfully with two experts and revealed a number of insights that were carefully documented. These insights will feed into the design of the Talos toolkit: among the MGUIDE deliverables was an authoring toolkit to aid the rapid prototyping of multimodal applications with virtual humans. Talos is currently just an architecture (see here), but the walkthrough of the ICT toolkit provided some valuable insights that should guide its actual design. The MGUIDE project itself has now been completed, with the development of Talos set among its future goals.

I applied the cognitive walkthrough exactly as I would apply it in any other project. I performed a task analysis first (i.e., I established the tasks I wanted to perform with the toolkit and broke them down into actions) and then asked the following questions at each step (a sketch of how I record the answers follows the list):

1) Will the customer realistically be trying to do this action?

2) Is the control for the action visible?

3) Is there a strong link between the control and the action?

4) Is feedback appropriate?   
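For the record, here is a minimal sketch of how I structure the walkthrough data so that "no" answers surface as findings. The Task and Action classes are my own illustrative scaffolding, not part of the ICT toolkit.

```python
# A minimal sketch of a cognitive-walkthrough record; all names illustrative.
from dataclasses import dataclass, field

QUESTIONS = (
    "Will the customer realistically be trying to do this action?",
    "Is the control for the action visible?",
    "Is there a strong link between the control and the action?",
    "Is feedback appropriate?",
)

@dataclass
class Action:
    description: str
    answers: dict = field(default_factory=dict)  # question -> True/False
    notes: list = field(default_factory=list)

@dataclass
class Task:
    name: str
    actions: list

def failed_steps(task):
    """Yield (action, question) pairs an evaluator answered 'no' to."""
    for action in task.actions:
        for question, ok in action.answers.items():
            if not ok:
                yield action.description, question

# A hypothetical task from a walkthrough session:
task = Task("Create a new dialogue model", [
    Action("Open the dialogue editor",
           answers={QUESTIONS[1]: False},
           notes=["Menu entry buried two levels deep."]),
])
for step, question in failed_steps(task):
    print(f"Problem at '{step}': {question}")
```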

Monday, August 9, 2010

Ultimate – A prototype search engine

Lately I have been experimenting again with Axure, on a prototype search engine called Ultimate. The engine is based on actual user requirements collected through a focus group study. I decided to prototype Ultimate in order to perfect my skills in Axure; the tool enabled me to construct a high-fidelity, fully functional prototype within a few hours. Some of the features of the engine are:

  • It relies on full natural language processing to understand the user’s input.
  • The search algorithm is based on a complex network of software agents – automated software robots programmed to complete tasks (e.g., monitoring the prices of 200 airlines, or fetching ratings from tripadvisor.co.uk) – to deliver accurate, user-tailored results (a toy sketch of this idea follows the list).
  • Some of the engine's functionalities are discussed in the user journey shown below. The full set of functionalities is well documented, but for obvious reasons I cannot discuss them in this post.
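Since I cannot show the real design, here is a toy sketch of the software-agent idea only: each agent polls one source on its own schedule and feeds a shared pool of results that an aggregator would rank for the user. Every class, name and number is hypothetical and unrelated to Ultimate's confidential internals.

```python
# A toy sketch of a price-monitoring agent network; everything is hypothetical.
import threading
import time

class PriceAgent(threading.Thread):
    """Polls one (pretend) airline fare source at a fixed interval."""

    def __init__(self, airline, fetch, results, interval=60.0):
        super().__init__(daemon=True)
        self.airline = airline
        self.fetch = fetch        # callable returning the current fare
        self.results = results    # shared, append-only result list
        self.interval = interval

    def run(self):
        while True:
            self.results.append((self.airline, self.fetch()))
            time.sleep(self.interval)

# One agent per airline, all feeding results that an aggregator would rank
# and tailor to the user's natural-language query.
results = []
agents = [PriceAgent(name, fetch=lambda: 99.0, results=results, interval=5.0)
          for name in ("AirOne", "AirTwo")]
for agent in agents:
    agent.start()
time.sleep(6)  # let each agent report at least once
print(results)
```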

Images: Ultimate screenshots and user journey.

Research:

Run a usability test of the above design against a more “conventional” search engine (e.g., Skyscanner), and I am certain the results would show the clear superiority of Ultimate. Of course, a careful usability study is still needed to compare the designs properly, but I am confident that Ultimate would come out ahead on all usability metrics.


Friday, August 6, 2010

Bespoke Research Solutions

The research methodologies discussed in previous posts provide an excellent basis to start from, but the web is changing rapidly. Rich Internet Applications (Flash, Silverlight, Ajax, etc.) and new interaction methods (e.g., multi-touch, gesture recognition) are already here and will soon flood the web. Can we apply what we know about user research in these new environments? Consider as examples a corporate web site, a mobile artificially intelligent assistant like Siri, and a desktop RIA. Conducting a usability study in the first scenario is perhaps straightforward. But which techniques and measures are the most relevant to the second and third? What can we apply to ensure that we can indeed gather rich user insights? What are the most relevant tools to deploy? Unfortunately I don't have the answers, as I have never attempted such a study before. I can only imagine that existing techniques would have to be tailored to each product's unique and complex range of variables. In that sense, the ability to adapt one's research is far more important than domain knowledge itself.

Present:

Images: the Halcrow corporate home page, the Siri iPhone app, and a Microsoft RIA (sources: Halcrow, a blog, Microsoft).

Future (Aurora?):

I like thinking about the future, a lot! Aurora, from Mozilla Labs, is a project that aims to redesign the way we browse the web. Currently it is merely a concept, but it gives a pretty good idea of what the future may look like. There is an excellent critique of the Aurora project here. If the future is anything like Aurora (i.e., the user interface is reinvented), what does this mean for user research? We will need to fundamentally rethink the way we conduct user research. New techniques will have to be invented and existing ones revisited. Innovation and creativity will distinguish the companies that survive from those that go out of business.

Aurora (Part 1) from Adaptive Path on Vimeo.

Thursday, August 5, 2010

User Research Deliverables

Different institutions require user research deliverables to be formatted differently. In MGUIDE I had to provide both industry and academic formats, within the allocated budget and timeframe:

Industry: Usability reports with actionable recommendations were delivered to each of the companies involved in the MGUIDE project. Each company wanted user insights on the particular piece of technology it contributed to the project. For example, the following recommendation was of particular interest to the text-to-speech company:

Actionable Recommendations:

  • Provide a visible and easy-to-use method for users to decrease/increase the rate of the text-to-speech output while the application speaks.

- Users will likely find the output more natural and easier to understand. (A minimal sketch of such a control follows.)
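For illustration only, here is a minimal sketch of such a rate control, using the open-source pyttsx3 library as a stand-in for the project's proprietary TTS engine; in the real application the function would be bound to a visible slider.

```python
# A minimal sketch of a user-adjustable speech-rate control (pyttsx3 as a
# stand-in engine; the real project used a proprietary TTS component).
import pyttsx3

engine = pyttsx3.init()

def set_speech_rate(words_per_minute):
    """Called from a visible slider so users can adjust the rate mid-session."""
    engine.setProperty("rate", words_per_minute)

set_speech_rate(150)  # slower than the default, easier to follow
engine.say("The tour continues at the next exhibit.")
engine.runAndWait()
```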

Note: I cannot release any detailed research findings, as they are the property of the institutes that supported the project.

Personas: Personas are a technique used to summarize user-research findings: made-up people who represent major segments of a product's target audience. There is an excellent explanation of personas here. Personas are easy to construct and a great way to distil research findings into a simple, accessible form.

Image: example persona. Source: WebCredible.

Academia: In academia, statistical significance is of major importance. I presented deliverables in a similar format to those above, but accompanied by statistics reported in the proper format (e.g., F(1, 14) = 7.956; p < 0.05). Statistics appear to be of little interest to industry, though.
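For anyone curious where such numbers come from, here is a sketch of a one-way ANOVA on made-up task-completion times; two groups of eight participants give exactly the F(1, 14) degrees of freedom quoted above.

```python
# A sketch of producing an "F(1, 14) = ...; p < 0.05"-style result with SciPy.
# The task-completion times (seconds) are made up for illustration.
from scipy import stats

variant_a = [34.1, 29.8, 41.2, 36.5, 31.0, 38.4, 33.7, 30.9]
variant_b = [44.0, 47.3, 39.9, 50.1, 42.8, 46.5, 41.2, 48.6]

f_stat, p_value = stats.f_oneway(variant_a, variant_b)
df_between = 1                                   # two groups - 1
df_within = len(variant_a) + len(variant_b) - 2  # 14
print(f"F({df_between}, {df_within}) = {f_stat:.3f}; p = {p_value:.4f}")
```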

Transferable Research Skills (Part B)

Remote Usability Testing:

Remote testing is about conducting usability testing without having participants come into the lab. Although there are several tools and web services on the market, I prefer to work with Userfeel because of its low cost and its massive network of testers from all over the globe.

A/B and Multivariate Testing:

A/B and multivariate testing is about testing different versions of the same design to see which performs best. I use this technique in all of my usability tests, either by differentiating my designs across one variable (A/B testing) or across several (multivariate testing).
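As a sketch of the significance check I run afterwards (with made-up conversion counts), a chi-square test tells me whether the difference between two variants is larger than chance:

```python
# A minimal sketch of testing an A/B difference for significance.
from scipy.stats import chi2_contingency

#           converted, did not convert
observed = [[120, 880],   # variant A (1000 visitors)
            [158, 842]]   # variant B (1000 visitors)

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference between variants is unlikely to be chance.")
```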

Co-Discovery Learning:

My approach to co-discovery learning is as follows: I usually ask two or more users to perform a task together while I observe them. I encourage them to converse and interact with each other to create a “team spirit”. In some cases I also allow note-taking (e.g., when the content is technical or complex). The technique can yield some really powerful results, as it is more natural for users to verbalise their thoughts during the test.

Participatory Design:


Participatory design is about involving users in the design and decision-making process through an iterative cycle of design and evaluation. I usually run a short participatory design session prior to all of my usability evaluations. In these sessions, the usability issues of a prototype system are identified and the changes needed to address them are made. The refined system is then used in the actual usability evaluation.


A4.0 Inspection Methods

Cognitive Walkthrough:


The cognitive walkthrough is a “quick and dirty” usability inspection method requiring a number of expert evaluators. A list of tasks, and of the actions needed to complete them, is created; the evaluators then step through each task, action by action, noting down problems and difficulties as they go. I can use cognitive walkthroughs on a wide range of digital interfaces, from web sites to complex authoring toolkits.

Heuristic Evaluation:


Heuristic evaluation is about judging the compliance of an interface against a number of recognized usability principles (i.e., the heuristics). I used this method extensively in the evaluation of e-learning prototypes during my teaching at Middlesex University.


A5.0 Advanced Usability Techniques (in training)

Eye Tracking:


Eye tracking is a technique that pinpoints where users look on a system, and for how long. I am currently talking with Middlesex University about training in the use of eye tracking as a usability testing technique. We plan to conduct a series of eye-tracking sessions in Middlesex's state-of-the-art usability labs, using the MGUIDE prototypes.

Emotion Recognition & Eye Tracking:

This is a technique I developed during the MGUIDE project; I discuss it in detail here. It was developed with avatar-based interfaces/presentation systems in mind, but it is universal in nature. It is based on the hypothesis that the perceived accessibility of a system's content is evident in the user's emotional expressions. The combined emotion recognition and eye tracking technique will be validated in a lab-based study to be performed at Middlesex University.
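As a sketch of the core idea, under my own assumptions about the data (both instruments emit timestamped samples; the field names and the threshold are illustrative): pair each gaze fixation with the nearest-in-time emotion sample, and flag the interface regions fixated during strong negative emotion.

```python
# A minimal sketch of combining eye-tracking fixations with emotion samples.
# All data, field names and thresholds are illustrative.
from bisect import bisect_left

fixations = [(1.2, "menu"), (3.5, "tts_controls"), (6.8, "avatar")]  # (t, region)
emotions = [(1.0, 0.1), (3.4, 0.8), (6.5, 0.2)]  # (t, frustration 0..1)
times = [t for t, _ in emotions]

def frustration_at(t):
    """Return the frustration sample nearest in time to t."""
    i = bisect_left(times, t)
    if i == 0:
        return emotions[0][1]
    if i == len(times):
        return emotions[-1][1]
    nearer_right = (times[i] - t) < (t - times[i - 1])
    return emotions[i][1] if nearer_right else emotions[i - 1][1]

THRESHOLD = 0.6
problem_regions = {region for t, region in fixations
                   if frustration_at(t) >= THRESHOLD}
print("Regions to review:", problem_regions)  # {'tts_controls'}
```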


A6.0 Audits

Accessibility Audit:

In an accessibility audit, an expert checks the compliance of a web site against established guidelines and metrics; the W3C WAI guidelines are the most widely used. My approach to accessibility evaluation is framework-based (see here), but a) I haven't applied my framework with disabled users, and b) the W3C WAI guidelines are very well established. Although I have a good knowledge of the W3C WAI guidelines, I have never performed an accessibility audit before.
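If I were to run one, I would start with simple automated pre-checks before the expert review. Here is a sketch of one such check, flagging images without alt text (WCAG 1.1.1), using only the Python standard library; it supplements an expert audit rather than replacing it.

```python
# A minimal sketch of an automated accessibility pre-check: find <img>
# elements with missing or empty alt text (WCAG 1.1.1).
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "<no src>"))

checker = AltTextChecker()
checker.feed('<img src="logo.png"><img src="chart.png" alt="Sales chart">')
print("Images missing alt text:", checker.missing)  # ['logo.png']
```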

 


Wednesday, August 4, 2010

Transferable Research Skills (Part A)

MGUIDE is my most up-to-date research work, and I am very proud of what I have accomplished. However, I have become eager to outgrow the domain and transfer my research skills to the digital media world. I am interested in any form of digital interactive application (websites, social networks, interactive TV, search engines, games, etc.). I am highly experienced in the following user research techniques:

A: User Research

A1.0 Quantitative Research

Surveys/Questionnaires (Online and Offline):

Post-test and pre-test questionnaires provide real insights into users' needs, wants and thoughts. I use robust statistics (e.g., Cronbach's alpha) to ensure that the questionnaires I create are both reliable and valid. I can apply these skills in any domain with minimal adaptation time.
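As a sketch of the reliability check (made-up Likert data, a standard formulation of Cronbach's alpha over a respondents-by-items matrix):

```python
# A minimal sketch of a questionnaire reliability check: Cronbach's alpha.
import numpy as np

def cronbach_alpha(scores):
    """scores: one row per respondent, one column per questionnaire item."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Five respondents answering a four-item Likert scale (made-up data).
data = [[4, 5, 4, 4],
        [3, 3, 2, 3],
        [5, 5, 5, 4],
        [2, 2, 3, 2],
        [4, 4, 4, 5]]
print(f"Cronbach's alpha = {cronbach_alpha(data):.2f}")  # > ~0.7 is acceptable
```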

Performance Measures:

Performance measures, such as time to complete a task, the number of errors made, or scores in retention tests, provide a strong indication of how easily people can achieve tasks with a system. If these data are correlated with other objective or subjective measures, they can provide deeper user insights than surveys/questionnaires alone.
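A sketch of such a correlation, pairing made-up task times with satisfaction ratings from the same participants:

```python
# A minimal sketch of correlating a performance measure with a subjective one.
from scipy.stats import pearsonr

task_times = [62, 45, 80, 38, 55, 71, 49, 66]   # seconds, per participant
satisfaction = [3, 4, 2, 5, 4, 2, 5, 3]         # 1-5 rating, same order

r, p_value = pearsonr(task_times, satisfaction)
print(f"r = {r:.2f}, p = {p_value:.4f}")
# A strong negative r would suggest that slower completion erodes satisfaction.
```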

Log File Analysis:

A log is a file that lists actions that have occurred. Both quantitative and qualitative data can be stored in a log file for later analysis. I use device/system logs to automatically collect data such as time to complete a task, items selected on the interface, keys pressed, and so on.
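A sketch of a typical pass over such a log, computing time-on-task from timestamped start/end events (the log format is hypothetical):

```python
# A minimal sketch of log-file analysis: time per task from start/end events.
from datetime import datetime

log_lines = [
    "2010-08-02 10:15:02 task_start find_exhibit",
    "2010-08-02 10:16:47 task_end find_exhibit",
    "2010-08-02 10:17:10 task_start adjust_tts_rate",
    "2010-08-02 10:17:31 task_end adjust_tts_rate",
]

starts = {}
for line in log_lines:
    date, clock, event, task = line.split()
    ts = datetime.fromisoformat(f"{date} {clock}")
    if event == "task_start":
        starts[task] = ts
    elif event == "task_end":
        seconds = (ts - starts.pop(task)).total_seconds()
        print(f"{task}: {seconds:.0f} s")
```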


A2.0 Qualitative Research:

Focus Groups:

I mainly use focus groups for requirements gathering, either by introducing new ideas for discussion or by evaluating low-fidelity prototypes.

Direct Observation:

One of the most common techniques for collecting data in an ethnographic study is direct, first-hand observation of participants. I am experienced in using direct observation for note-taking in both indoor and outdoor environments, and I find gaining an understanding of users through first-hand observation of their behaviour with a digital system genuinely exciting. During my work on MGUIDE, direct observation uncovered a number of interesting user insights that were then correlated with user views collected from the questionnaires.

User Interviews & Contextual inquiry:

Other common ethnographic techniques are user interviews and contextual inquiry. I make extensive use of structured open-ended interviews, i.e., interviews where all interviewees are asked the same open-ended questions, in both field and lab conditions. I like this particular style because it is faster, and its results can be more easily analysed and correlated with other data.

Think Aloud Protocol:

Think-aloud is a technique for gathering data during a usability testing session. It involves participants thinking aloud as they perform a set of specified tasks. I have used think-aloud very successfully in navigation tasks, where participants had to verbalise their answers to navigation problems presented by two interactive systems.


A3.0 Quantitative & Qualitative Research

Usability and Accessibility testing:


Lab-based and field-based testing are the most effective ways of revealing usability and accessibility issues. I am experienced in conducting and managing both lab and field testing, and I use scenario-based quantitative and qualitative methods in my research.

Continues in the next post

Monday, August 2, 2010

Universality of Research Methods & Techniques

I thought that the universality of research methods was a fundamental fact of modern science. Isn't it obvious that having successfully applied quantitative/qualitative research in one domain means your skills can be applied to any other domain with minimal adaptation time? Is there a real difference between applying qualitative research to a complex avatar system like MGUIDE and to an e-commerce web site? For example, if you apply a technique like unstructured interviews, wouldn't you follow the same principles to design the interviews in both domains?

Or take more complex techniques like eye tracking and emotion recognition: aren't these domain-independent? Consider, for instance, my combined emotion recognition and eye tracking technique for accessibility research, described in the previous post. The technique was developed with avatar-based interfaces/presentation systems in mind; adapting it to a different domain is a matter of defining the aspects of the interface you wish to research. The quantitative data you collect are the same (emotion intensities, etc.); the qualitative data will of course differ, because the interfaces are different. In general, once you establish the objectives and goals of the research, deciding which techniques to use (and modifying them if necessary to suit your needs) is easy, and the process is domain-independent.


Eye tracking used in completely different contexts: a) a 3D avatar-based world and b) a web page.

I am not sure why some people insist otherwise and focus so much on the subject matter. I agree that having expertise in a certain area means you can produce results fairly quickly; however, domain knowledge is easily learnt. Is domain expertise the most important quality a user researcher should have? Or should they instead have a solid set of research skills to start from, plus the willingness to learn more about established techniques and to explore new ones?