tag:blogger.com,1999:blog-16766101300595752982024-03-05T16:46:34.684-08:00Giannis DoumanisUser Experience Researcher
London | 07941942145giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.comBlogger51125tag:blogger.com,1999:blog-1676610130059575298.post-35680172395669573892010-08-30T12:44:00.001-07:002010-08-30T15:34:57.644-07:00iPAD – Multitouch Web (Part B)<div style="margin: 1em; width: 310px; display: block; float: right" class="zemanta-img"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://www.ipadnewstracker.com/wp-content/uploads/2010/04/ebay_ipad_app.jpg" width="260" height="307"> <font size="3">Source: <a href="http://www.ipadnewstracker.com/2010/04/ebay-for-ipad-transforms-online-shopping-experience/">Some blog</a></font></div> <p align="justify"><font size="3"> <strong>Research</strong></font></p> <p align="justify"><font size="3">There is already a usability study of the iPad from the Nielsen Norman Group </font><a href="http://www.nngroup.com/reports/mobile/ipad/"><font size="3">here</font></a><font size="3">. A summary of the study can be found <a href="http://www.useit.com/alertbox/ipad.html">here</a>. As Nielsen admits, the study is <strong>preliminary</strong>, but the resulting usability insights serve as a good foundation for the design of the myriad applications that will follow the release of the device on the global market. </font></p> <p align="justify"><font size="3">The study is very generic – it tested several applications and web sites running on the iPad. As every digital project requires a unique testing context that takes into account its own range of parameters, more<strong> focused studies</strong> are necessary. Again, existing research methods and techniques must be<strong> tailored accordingly</strong> to take into consideration the multi-touch style of interaction. 
</font></p> <p align="justify"><font size="3">Take, as an example, m-commerce, which many consider the next “big” thing in the mobile world. In my opinion, multi-touch web pages, done correctly, have the potential to make our transactions easier than on our desktop computers. How would you design an m-commerce web site to achieve such a goal? </font><font size="3">The guidelines discussed in Part A of this post are a good place to start when constructing some initial prototypes. However, as these guidelines are far from best practice, an iterative cycle of <strong>participatory design workshops </strong>over a few days is necessary in order to agree on a final prototype. </font></p> <p align="justify"><font size="3">Gathering user insights after the web site is released currently seems <em>challenging</em>. </font><font size="3">The usability studies I’ve read so far use lab-based testing with one-to-one sessions. However, testing with real users under lab conditions is always expensive. Cheaper techniques, like <strong>remote usability</strong> testing, currently seem very hard to implement. And do existing tools for split or multivariate testing (e.g., Google Website Optimizer) work on a multi-touch interface? These tools are optimized for a mouse-based environment, and I am not convinced that they can be used effectively on multi-touch. For instance, does Google Website Optimizer register multi-touch gestures, as well as clicks, when it comes to measuring the success of a web site? 
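To make the gap concrete, here is a rough sketch of how a page could record touch gestures itself, alongside ordinary clicks. This is purely my own illustration, not anything Google Website Optimizer actually does, and the logEvent call is a hypothetical stand-in for whatever the analytics tool records:

```javascript
// Sketch: classify a single-finger gesture from its start and end points,
// so it can be logged next to ordinary clicks. Thresholds are arbitrary.
function classifyGesture(dx, dy, threshold) {
  threshold = threshold || 10; // pixels of movement that still count as a tap
  if (Math.abs(dx) < threshold && Math.abs(dy) < threshold) return "tap";
  if (Math.abs(dx) >= Math.abs(dy)) return dx > 0 ? "swipe-right" : "swipe-left";
  return dy > 0 ? "swipe-down" : "swipe-up";
}

// Browser-only wiring (skipped when run outside a page):
if (typeof document !== "undefined") {
  var start = null;
  document.addEventListener("touchstart", function (e) {
    start = { x: e.touches[0].clientX, y: e.touches[0].clientY };
  });
  document.addEventListener("touchend", function (e) {
    if (!start) return;
    var t = e.changedTouches[0];
    var gesture = classifyGesture(t.clientX - start.x, t.clientY - start.y);
    // logEvent is a hypothetical reporting call, e.g.:
    // logEvent({ type: gesture, target: e.target.tagName });
    start = null;
  });
}
```

Until the mainstream tools record this kind of data, a gesture a user makes and a click a tool counts remain two different things.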
Nevertheless, it will be very interesting to see how these research techniques are adapted to serve the new environment in the years to come.</font> </p> <div style="margin-top: 10px; height: 15px" class="zemanta-pixie"><a class="zemanta-pixie-a" title="Enhanced by Zemanta" href="http://www.zemanta.com/"><img style="border-bottom-style: none; border-right-style: none; border-top-style: none; float: right; border-left-style: none" class="zemanta-pixie-img" alt="Enhanced by Zemanta" src="http://img.zemanta.com/zemified_e.png?x-id=3e694932-eed3-4cf4-bfd8-55ea825bf715"></a></div> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-6616437408381272812010-08-29T09:40:00.001-07:002010-08-30T12:48:20.970-07:00iPAD – Multitouch Web (Part A)<div style="margin: 1em; width: 310px; display: block; float: right" class="zemanta-img"><a href="http://commons.wikipedia.org/wiki/File:IPad_docked.jpg"><img style="border-bottom: medium none; border-left: medium none; display: block; border-top: medium none; border-right: medium none" alt="iPad with dock and wireless keyboard" src="http://upload.wikimedia.org/wikipedia/commons/thumb/9/92/IPad_docked.jpg/300px-IPad_docked.jpg" width="300" height="405"></a> <p style="font-size: 0.8em" class="zemanta-img-attribution">Image via <a href="http://commons.wikipedia.org/wiki/File:IPad_docked.jpg">Wikipedia</a></p></div> <p><font size="3">I finally had the chance to test the new iPad. I spent some time trying to figure out whether it is really worth spending 400 pounds on this device. Here are my findings: </font></p> <p align="justify"><font size="3">1) The device is <strong>excellent</strong> for gaming. It is perhaps one of the best gaming devices I’ve ever used. 
The integrated gyroscope means that there are no annoying arrow keys to use while playing games (all you have to do is turn the device). </font></p> <p align="justify"><font size="3">2) For e-reading, although the screen is very clear, the absence of an integrated back stand (like Samsung’s UMPC devices) makes it very hard to hold for a long time.</font></p> <p align="justify"><font size="3">3) Of all the applications I tested, I found only one of particular interest: an application that shows you the star constellations </font><font size="3">based on your geographical position. </font></p> <p align="justify">4)<font size="3"> The device has no Flash support, which means that the majority of Web content is out of reach. Advocates of the device say that as the Web progressively moves to the HTML 5 standard, this will soon stop being an issue. Advocates of Flash say that Flash cannot die, as it is an integral part of the web. Who is right and who is wrong, only time will tell. For now, all I can say is that if I buy the iPad, my favourite episodes of “Eureka” on Hulu are out of reach for good.</font></p> <p align="justify"><font size="3">5) The device is multi-touch, which means that it is very hard to operate mouse-oriented web pages. As the majority of mobile platforms are now moving towards multi-touch, what does this mean for designers, IAs, researchers and other stakeholders? Below I attempt to present some of the possible implications.</font></p> <p align="justify"><strong><font size="3">Design</font></strong></p> <ul> <li> <div align="justify"><font size="3">Size of Finger</font></div></li></ul> <p align="justify"><font size="3">Web pages on touch-sensitive devices are not navigated using a mouse. They are controlled with human fingers, many of which are much fatter than a typical mouse pointer. 
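As a rough rule of thumb, Apple’s own guidelines suggest touch targets of roughly 44×44 points. A quick audit along those lines can be sketched in a few lines of script; the 44-pixel threshold and the idea of scanning every link are my own illustration, not any official test:

```javascript
// Sketch: flag links that are smaller than a comfortable touch target.
// 44px follows Apple's commonly cited guideline; adjust to taste.
function isComfortableTarget(width, height, min) {
  min = min || 44;
  return width >= min && height >= min;
}

// Browser-only wiring (skipped when run outside a page):
if (typeof document !== "undefined") {
  var tooSmall = [];
  var links = document.getElementsByTagName("a");
  for (var i = 0; i < links.length; i++) {
    var box = links[i].getBoundingClientRect();
    if (!isComfortableTarget(box.width, box.height)) tooSmall.push(links[i]);
  }
  // tooSmall now holds every link a finger will struggle to hit
}
```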
No matter what Apple says about an<em> “ultimate browsing experience”</em> on the iPad, clicking on small text links with your finger is painful, and sometimes practically impossible. As touch-sensitive devices become more popular, this could mean the end of traditional text links and their replacement by big touchable buttons.</font> </p> <ul> <li> <div align="justify"><font size="3">Secondary Functions</font></div></li></ul> <p align="justify"><font size="3">The “fat finger” problem discussed above, and the limited screen estate, also mean that we cannot cram thousands of features (or ads) into a tight frame as we would on a desktop web page. The design of web pages should focus on the essential elements, and it should avoid wasting user attention on processing secondary functions.</font> </p> <ul> <li> <div align="justify"><font size="3">Hover Effects and Right Clicks</font></div></li></ul> <p align="justify"><font size="3">Without a mouse-based interface, you can’t use any mouse-over effects. Elements that we are so used to interacting with on mouse-driven interfaces, like menus that pop up when you hover your mouse over a link, or right clicks, do not exist. Apple has a number of replacements in place, like holding your finger down on a link to get a pop-up menu, but they only make clicking itself more complex.</font> </p> <ul> <li> <div align="justify"><font size="3">Horizontal vs. Vertical Styling</font></div></li></ul> <p align="justify"><font size="3">Due to the ability to easily switch between vertical and horizontal orientations, web sites will have to automatically adapt their styling to look right in both orientations. 
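A script-driven version of this idea can be sketched as below; CSS orientation media queries are the declarative alternative, and the class names here are my own invention:

```javascript
// Sketch: toggle a body class so stylesheets can target each orientation.
// "@media (orientation: portrait)" in CSS is the declarative equivalent.
function orientationClass(width, height) {
  return width > height ? "landscape" : "portrait";
}

// Browser-only wiring (skipped when run outside a page):
if (typeof window !== "undefined") {
  var apply = function () {
    document.body.className = orientationClass(window.innerWidth, window.innerHeight);
  };
  window.addEventListener("resize", apply);
  window.addEventListener("orientationchange", apply);
  apply();
}
```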
Seamless presentation in both landscape and portrait mode is one of the most fundamental guidelines when it comes to designing for iPad/iPhone devices.</font> </p> <ul> <li> <div align="justify"><font size="3">3D Objects</font></div></li></ul> <p align="justify"><font size="3">Apple is trying to push designs that imitate tangible things – real-world interfaces that are easy to understand and familiar in their use. If you create a magazine application, make it look like a real magazine; if you make a word processor, make it look like a typewriter. Could the iPad be a significant milestone towards a more three-dimensional WWW? Some web 3D applications, like Second Life, could certainly benefit from the mouse-less interface, as touching and tilting make it much easier to interact with 3D worlds than mousing and keyboarding. On mainstream websites, 3D elements (e.g., material surfaces and SFX) will probably be used widely as an “invitation to touch”, but never as a basic metaphor.</font> </p> <p align="justify"><strong><font size="3">Information Architecture</font></strong></p> <p align="justify"><font size="3">Multi-touch presents a unique set of challenges for information architects. The limited screen size, the size of the human fingertip, and the limited number of actions users will tolerate to complete one task (it is tiresome to swipe and touch too often) push the IA to create a dead-simple architecture with a minimal number of actions. <strong>Information aggregation</strong> will play a very important role in creating architectures <strong>that minimize input and maximize output. </strong></font></p> <p align="justify"><font size="3">Under the above rule, several more “human-like” modalities of communication, such as speech recognition, text-to-speech synthesis, emotion recognition, natural language processing, etc., are likely to find their place in the multi-touch web. 
Multi-touch seems to me the <strong>perfect </strong>vehicle in the right direction, away from the dominance of the GUI and towards a more natural way of interacting with computers. </font> </p> <p align="justify"><strong><font size="3">Continued in the next post</font></strong></p> <div style="margin-top: 10px; height: 15px" class="zemanta-pixie"><a class="zemanta-pixie-a" title="Enhanced by Zemanta" href="http://www.zemanta.com/"><img style="border-bottom-style: none; border-right-style: none; border-top-style: none; float: right; border-left-style: none" class="zemanta-pixie-img" alt="Enhanced by Zemanta" src="http://img.zemanta.com/zemified_e.png?x-id=9ca5558e-12c7-4ccf-aed8-3e21fbd1be3a"></a></div> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-72132069821640123772010-08-17T18:04:00.001-07:002010-08-26T05:29:01.131-07:00Cognitive Walkthrough - ICT Virtual Human Toolkit<p align="justify"><font size="3">As part of the MGUIDE project, I had to complete the cognitive walkthrough of</font><font size="3"> the </font><a href="http://vhtoolkit.ict.usc.edu/index.php/Main_Page"><font size="3">ICT Virtual Human Toolkit</font></a><font size="3">. This toolkit is a collection of state-of-the-art technologies, including speech recognition, automatic gesture generation, text-to-speech synthesis, 3D interfaces, and dialogue model creation, to name but a few. 
Current users of the toolkit include the CSI/UCB Vision Group at<strong> UC Berkeley</strong>, the Component Analysis Lab at <strong>Carnegie Mellon University</strong>, the Affective Computing Research group at the <strong>MIT Media Lab</strong>, and <strong>Microsoft Research.</strong></font></p> <p align="justify"><a href="http://vhtoolkit.ict.usc.edu/images/c/c4/Virtual_humans_characters.jpg"><img style="display: block; float: none; margin-left: auto; margin-right: auto" border="0" alt="File:Virtual humans characters.jpg" src="http://vhtoolkit.ict.usc.edu/images/c/c4/Virtual_humans_characters.jpg" width="521" height="343"></a></p> <p align="justify"><font size="3">An ensemble of some of the characters created with the toolkit. </font></p> <p align="justify"><font size="3">Source: <a href="http://vhtoolkit.ict.usc.edu/index.php/File:Virtual_humans_characters.jpg">University of Southern California Institute for Creative Technologies</a></font></p> <p align="justify"><font size="3">The main idea behind the evaluation was to provide usability insights on what is perhaps the most advanced platform for multimodal creation today. The process was completed successfully with two experts, and revealed a number of insights that were carefully documented. These insights will feed into the design of the Talos Toolkit – among the MGUIDE deliverables was an authoring toolkit to aid the rapid prototyping of multimodal applications with virtual humans. Talos is currently just an architecture (see <a href="http://virtual-guide-systems.blogspot.com/search/label/Information%20Architecture">here</a>), but the walkthrough of the ICT toolkit provided some valuable insights that should guide its actual design. However, the MGUIDE project has been completed, with the development of Talos set among its future goals.</font></p> <p align="justify"><font size="3">I applied the cognitive walkthrough <strong>exactly</strong> as I would apply it in any other project. 
I performed a <strong>task analysis</strong> first (i.e., I established the tasks I wanted to perform with the toolkit and broke them into actions) and then asked the following questions at each step: </font></p> <blockquote> <p align="justify"><font size="3">1) Will the customer realistically be trying to do this action?</font></p> <p align="justify"><font size="3">2) Is the control for the action visible?</font></p> <p align="justify"><font size="3">3) Is there a strong link between the control and the action?</font></p> <p align="justify"><font size="3">4) Is feedback appropriate? </font></p></blockquote> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-78259815165283084352010-08-09T14:38:00.001-07:002010-08-28T05:59:32.691-07:00Ultimate – A prototype search engine<p align="justify"><font size="3">Lately, I have been experimenting again with <strong>Axure</strong> on a prototype search engine called <strong>Ultimate</strong>. The engine is based on actual <strong>user requirements</strong> collected through a focus group study. I decided to prototype Ultimate in order to perfect my skills in Axure. The tool enabled me to construct a high-fidelity and fully functional prototype within a few hours. Some of the features of the engine are:</font></p> <ul> <li> <div align="justify"><font size="3">It relies on full <a class="zem_slink" title="Natural language processing" href="http://en.wikipedia.org/wiki/Natural_language_processing" rel="wikipedia">natural language processing</a> to understand the user’s input. 
</font></div> <li sizset="1" sizcache="279"> <div align="justify" sizset="1" sizcache="279"><font size="3" sizset="1" sizcache="279">The search algorithm is based on a complex network of <a class="zem_slink" title="Software agent" href="http://en.wikipedia.org/wiki/Software_agent" rel="wikipedia">software agents</a> – automated software robots programmed to complete tasks (e.g., monitor the prices of 200 airlines, get ratings from tripadvisor.co.uk, etc) - to deliver accurate and user-tailored results.</font></div> <li> <div align="justify"><font size="3">Some of the engine functionalities are discussed in the user journey shown below. The full functionalities are well documented, but for obvious reasons I can not discuss them in this post.</font></div></li></ul> <p align="justify"><font size="3"><strong>Screenshots:</strong></font></p> <div align="center" sizset="0" sizcache="281"> <table border="0" cellspacing="0" cellpadding="2" width="400" align="center" sizset="0" sizcache="281"> <tbody sizset="0" sizcache="281"> <tr sizset="0" sizcache="281"> <td valign="top" width="200" sizset="0" sizcache="281"> <div style="padding-bottom: 0px; margin: 0px; padding-left: 0px; padding-right: 0px; display: inline; float: none; padding-top: 0px" id="scid:8747F07C-CDE8-481f-B0DF-C6CFD074BF67:aad92881-7837-448a-83ef-cb910e2ad086" class="wlWriterEditableSmartContent" sizset="0" sizcache="281"><a href="http://lh5.ggpht.com/_ozsnJVts-XM/TGB1Q-PtR1I/AAAAAAAAAdA/ICwdm_Tvvhg/U1-8x6.jpg?imgmax=800" title="Main User Screen" rel="thumbnail"><img border="0" src="http://lh6.ggpht.com/_ozsnJVts-XM/TGB1Rdf0n6I/AAAAAAAAAdE/MF_YSu61nmk/U1%5B5%5D.png?imgmax=800" width="250" height="197" /></a></div></td> <td valign="top" width="200" sizset="1" sizcache="281"> <div style="padding-bottom: 0px; margin: 0px; padding-left: 0px; padding-right: 0px; display: inline; float: none; padding-top: 0px" id="scid:8747F07C-CDE8-481f-B0DF-C6CFD074BF67:afde637c-791f-4e8e-af65-595453c0126f" 
class="wlWriterEditableSmartContent" sizset="1" sizcache="281"><a href="http://lh4.ggpht.com/_ozsnJVts-XM/TGB1Sd2etoI/AAAAAAAAAdI/l3a7_oN41PE/U3-8x6.jpg?imgmax=800" title="What was actually understood! " rel="thumbnail"><img border="0" src="http://lh6.ggpht.com/_ozsnJVts-XM/TGB1ShUX40I/AAAAAAAAAdM/Jb-J8kcckqU/U3%5B5%5D.png?imgmax=800" width="250" height="218" /></a></div></td></tr> <tr sizset="2" sizcache="281"> <td valign="top" width="200" sizset="2" sizcache="281"> <div style="padding-bottom: 0px; margin: 0px; padding-left: 0px; padding-right: 0px; display: inline; float: none; padding-top: 0px" id="scid:8747F07C-CDE8-481f-B0DF-C6CFD074BF67:406db3c9-9c4c-4ed4-8f5d-e1f195436895" class="wlWriterEditableSmartContent" sizset="2" sizcache="281"><a href="http://lh4.ggpht.com/_ozsnJVts-XM/TGB1TXmesqI/AAAAAAAAAdQ/Ox3Lnjwt3Ro/U5-8x6.jpg?imgmax=800" title="Complete holiday package" rel="thumbnail"><img border="0" src="http://lh4.ggpht.com/_ozsnJVts-XM/TGB1Ty48LaI/AAAAAAAAAdU/LvIcYG4Dm_c/U5%5B6%5D.png?imgmax=800" width="250" height="200" /></a></div></td> <td valign="top" width="200" sizset="3" sizcache="281"> <div style="padding-bottom: 0px; margin: 0px; padding-left: 0px; padding-right: 0px; display: inline; float: none; padding-top: 0px" id="scid:8747F07C-CDE8-481f-B0DF-C6CFD074BF67:377f49bb-d6a3-4be7-90a6-8d9d03ee57dd" class="wlWriterEditableSmartContent" sizset="3" sizcache="281"><a href="http://lh3.ggpht.com/_ozsnJVts-XM/TGB1UdNtrwI/AAAAAAAAAdY/zdCplwCpbJc/U4-8x6.jpg?imgmax=800" title="Exit Screen" rel="thumbnail"><img border="0" src="http://lh5.ggpht.com/_ozsnJVts-XM/TGB1UkX-biI/AAAAAAAAAdc/ahOZ6PH-lQ0/U4%5B3%5D.png?imgmax=800" width="250" height="200" /></a></div></td></tr></tbody></table></div> <div align="center" sizset="0" sizcache="281"> </div> <div align="left" sizset="0" sizcache="281"><strong>User Journey:</strong></div> <div align="center" sizset="0" sizcache="281"> </div> <div align="center"><object style="width:420px;height:566px" ><param name="movie" 
value="http://static.issuu.com/webembed/viewers/style1/v1/IssuuViewer.swf?mode=embed&viewMode=presentation&layout=http%3A%2F%2Fskin.issuu.com%2Fv%2Flight%2Flayout.xml&showFlipBtn=true&documentId=100828125714-8b5759379a5a4e5d91870fba3933fe1e&docName=ultimate&username=giannis_&loadingInfoText=Ultimate%20-%20User%20Journey&et=1283000293171&er=46" /><param name="allowfullscreen" value="true" /><param name="menu" value="false" /><embed src="http://static.issuu.com/webembed/viewers/style1/v1/IssuuViewer.swf" type="application/x-shockwave-flash" allowfullscreen="true" menu="false" style="width:420px;height:566px" flashvars="mode=embed&viewMode=presentation&layout=http%3A%2F%2Fskin.issuu.com%2Fv%2Flight%2Flayout.xml&showFlipBtn=true&documentId=100828125714-8b5759379a5a4e5d91870fba3933fe1e&docName=ultimate&username=giannis_&loadingInfoText=Ultimate%20-%20User%20Journey&et=1283000293171&er=46" /></object></div> <div> </div> <div> <font size="3"><strong>Research:</strong></font></div> <p align="justify"><font size="3">Run a usability test of the above design against a more “conventional” search engine (e.g., <a href="http://www.skyscanner.net/"><font size="3">Skyscanner</font></a> ), and I expect the results would show a clear advantage for Ultimate. Of course, a careful usability study is needed in order to actually compare the designs, but I am confident that Ultimate would come out ahead on all usability metrics.</font></p> <p align="justify"><font size="3"></font> </p> <p align="justify"><font size="3"> </font></p> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com1tag:blogger.com,1999:blog-1676610130059575298.post-30359202623825710262010-08-06T08:54:00.001-07:002010-08-19T06:32:09.682-07:00Bespoke Research Solutions<p align="justify"><font size="3">The research methodologies discussed in previous posts provide an excellent basis to start from, but the web is changing rapidly. 
Rich Internet Applications (Flash, Silverlight, Ajax, etc.) and new interaction methods (e.g., multi-touch, gesture recognition) are already here and will flood the web. Can we apply what we know in terms of user research in these new environments? Consider as examples a corporate web site, a mobile artificial-intelligence assistant, and an RIA. <font size="3">Conducting a usability study in the first scenario is perhaps straightforward. However, what techniques and measures are the most relevant to the second and third scenarios? What can we apply in order to ensure that we can indeed gather <strong>rich</strong> user insights? What are the most relevant tools to deploy? (Siri is a mobile application, while the rest are desktop applications.) Unfortunately I don’t have the answers, as I have never attempted a similar study before. I can only imagine that existing techniques would have to be <strong>tailored</strong> to each unique and complex range of variables. Therefore, research <strong>adaptation</strong> is far more important than the domain itself. 
</font></font></p> <p align="justify"><font size="3"><strong>Present:</strong></font></p> <div align="center"> <table border="0" cellspacing="0" cellpadding="2" width="400" align="center"> <tbody> <tr> <td valign="top" width="133"><a href="http://lh6.ggpht.com/_ozsnJVts-XM/TGh0RqE3SKI/AAAAAAAAAds/tRfO_MGa2d4/s1600-h/halcrow_home_tcr%5B1%5D.jpg"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="halcrow_home_tcr" border="0" alt="halcrow_home_tcr" src="http://lh5.ggpht.com/_ozsnJVts-XM/TFwwMOD4MVI/AAAAAAAAAdw/h8oBYoh9mWw/halcrow_home_tcr_thumb.jpg?imgmax=800" width="193" height="202"></a></td> <td valign="top" width="133"><a href="http://lh5.ggpht.com/_ozsnJVts-XM/TFwy5_WRrNI/AAAAAAAAAd0/PUTLQ0NKq4w/s1600-h/siriiphoneapp%5B2%5D.jpg"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; margin-left: 0px; border-left-width: 0px; margin-right: 0px" title="siri-iphone-app" border="0" alt="siri-iphone-app" align="left" src="http://lh3.ggpht.com/_ozsnJVts-XM/TFwy6kKwaaI/AAAAAAAAAd4/XnDyA_cJ2yc/siriiphoneapp_thumb%5B1%5D.jpg?imgmax=800" width="193" height="202"></a></td> <td valign="top" width="133"><a href="http://lh3.ggpht.com/_ozsnJVts-XM/TFwwM5FnhQI/AAAAAAAAAeE/XK8Xi4PEkrA/s1600-h/1%5B2%5D.jpg"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="1" border="0" alt="1" src="http://lh5.ggpht.com/_ozsnJVts-XM/TFwwNLD8fDI/AAAAAAAAAeI/Jn54TaYxW98/1_thumb%5B1%5D.jpg?imgmax=800" width="193" height="202"></a></td></tr> <tr> <td valign="top" width="133">Source: <a href="http://www.halcrow.com/">HalCrow</a></td> <td valign="top" width="133">Source: <a href="http://www.zatznotfunny.com/2010-02/the-iphone-apps-of-the-week/">Some Blog</a></td> <td valign="top" width="133">Source: <a 
href="http://www.worldwidetelescope.org/webclient/">Microsoft</a></td></tr></tbody></table></div> <p align="justify"><font size="3"><strong>Future (Aurora) ??:</strong></font></p> <p align="justify"><font size="3">I like thinking about the future, a lot! Aurora, from Mozilla Labs, is a project that aims to redesign the way we browse the web. Currently, it is merely a concept, but it gives a pretty good idea of what the future might look like. There is an excellent critique of the Aurora project <a href="http://www.michaelcritz.com/2008/08/07/aurora-less-really-is-more/">here</a>. If the future is anything like Aurora (i.e., reinventing the user interface), what does this mean for user research? We need to <strong>fundamentally</strong> re-think the way we conduct user research. New techniques will have to be invented and existing ones revisited. <strong>Innovation</strong> and <strong>creativity</strong> will distinguish the companies that survive from the ones that go out of business.</font></p> <p align="center"><object width="400" height="225"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=1450211&server=vimeo.com&show_title=1&show_byline=1&show_portrait=1&color=&fullscreen=1&autoplay=0&loop=0" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=1450211&server=vimeo.com&show_title=1&show_byline=1&show_portrait=1&color=&fullscreen=1&autoplay=0&loop=0" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="225"></embed></object></p> <p><a href="http://vimeo.com/1450211">Aurora (Part 1)</a> from <a href="http://vimeo.com/adaptivepath">Adaptive Path</a> on <a href="http://vimeo.com">Vimeo</a>.</p> 
giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-22717980215989280952010-08-05T09:45:00.001-07:002010-08-15T17:07:01.695-07:00User Research Deliverables<p align="justify"><font size="3">Different institutes require user research deliverables to be formatted differently. In MGUIDE I had to provide both industry and academic formats, within the allocated budget and timeframes:</font></p> <p align="justify"><img style="display: block; float: none; margin-left: auto; margin-right: auto" src="http://t1.gstatic.com/images?q=tbn:ANd9GcTjwHAvQRXAFFC0NNmUjOO-yEnC1_CIauRYqorNSpuoyfWkYrI&t=1&usg=__PDi8sg63viR8ub5ZbP3428L8n0o=" width="259" height="194"></p> <p align="justify"><font size="3"><strong>Industry: </strong></font><font size="3">Usability reports with actionable recommendations were delivered to each of the companies involved in the MGUIDE project. Each company wanted user insights on the particular piece of technology it contributed to the project. 
For example, the following recommendation was of particular interest to the text-to-speech company:</font></p> <p align="justify"><strong><font size="3">Actionable Recommendations:</font></strong></p> <table border="0" cellspacing="0" cellpadding="2" width="611"> <tbody> <tr> <td valign="top" width="609"> <ul> <li><font size="3">Provide a visible and easy-to-use method for users to decrease/increase the rate of the text-to-speech output while the application speaks.</font></li></ul> <p><font size="3">– Users will likely find the output more natural and easier to understand</font></p></td></tr></tbody></table> <p align="justify"><font size="3"><strong>Note:</strong><em> I cannot release any detailed research findings as they are the property of the institutes that supported the project.</em></font></p> <p align="justify"><font size="3"><strong>Personas: </strong> </font><font size="3">Personas are a technique used to summarize user-research findings. In my understanding, personas are made-up people used to represent major segments of a product’s target audience. There is an excellent explanation of personas <a href="http://www.webcredible.co.uk/user-friendly-resources/web-usability/personas.shtml">here</a>. Personas are easy to construct, and a great way to distil research findings into a simple and accessible form.</font></p> <p align="justify"><img alt="" src="http://www.webcredible.co.uk/i/persona.gif"></p> <p align="justify"><font size="3">Source: </font><a href="http://www.webcredible.co.uk/user-friendly-resources/web-usability/personas.shtml"><font size="3">WebCredible</font></a></p> <div align="justify"><font size="3"><strong>Academia: </strong></font><font size="3">In academia, statistical significance is of major importance. I presented deliverables in a similar format to those above, but accompanied by statistics in the proper format</font><font size="3"> (e.g., F(1, 14) = 7.956;<i> p < 0.05</i>). 
Statistics appear to be of little interest to industry, though.</font></div> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-48431358240147926192010-08-05T06:48:00.001-07:002010-08-31T10:59:22.355-07:00Transferable Research Skills (Part B)<p align="justify"><font color="#0000ff" size="3"><strong>Remote Usability Testing:</strong></font></p> <table border="0" cellspacing="0" cellpadding="2" width="606"> <tbody> <tr> <td valign="top" width="80"><img src="http://www.allthingsdistributed.com/images/globe-europe.jpg" width="128" height="128"></td> <td valign="top" width="524"> <p align="justify"><font size="3">Remote testing is about conducting usability testing without having participants come into the lab. Although there are several tools and web services on the market, I prefer to work with <a href="http://www.userfeel.com/">userfeel</a> because of the low cost and their massive network of testers from all over the globe. </font></p></td></tr></tbody></table> <p align="justify"><font color="#0000ff" size="3"><strong>A/B and Multivariate Testing:</strong></font></p> <table border="0" cellspacing="0" cellpadding="2" width="606"> <tbody> <tr> <td valign="top" width="80"><img src="http://endoseo.com/Portals/0/endo-AB-test.jpg" width="128" height="113"></td> <td valign="top" width="524"> <p align="justify"><font size="3">A/B and multivariate testing is about testing different versions of the same design in order to see which performs best. 
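The core mechanic behind these tools is simply a random, sticky assignment of each visitor to a design variant. A minimal sketch (the storage key name is my own invention, not any particular tool's API):

```javascript
// Sketch: assign a visitor to one of N design variants. The assignment is
// stored so the same visitor keeps seeing the same variant across visits.
function pickVariant(variants, rand) {
  rand = rand === undefined ? Math.random() : rand; // injectable for testing
  return variants[Math.floor(rand * variants.length)];
}

// Browser-only wiring (skipped when run outside a page):
if (typeof window !== "undefined" && window.localStorage) {
  var variant = localStorage.getItem("ab-variant"); // hypothetical key name
  if (!variant) {
    variant = pickVariant(["A", "B"]);
    localStorage.setItem("ab-variant", variant);
  }
  // ...render the page for `variant`, and report conversions tagged with it
}
```

The hard part on multi-touch is not the assignment but the measurement: whatever counts as a "conversion" has to be observable from touch input too.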
I use this technique in <strong>all of my</strong> usability tests, either by differentiating my designs across one variable (i.e., A/B testing) or more (i.e., multivariate testing). </font></p></td></tr></tbody></table> <p align="justify"><font color="#0000ff" size="3"><strong>Co-Discovery Learning:</strong></font></p> <table border="0" cellspacing="0" cellpadding="2" width="606"> <tbody> <tr> <td valign="top" width="80"><img src="http://t3.gstatic.com/images?q=tbn:SaFvKkHqPbNb0M:http://www.masternewmedia.org/images/mlearning_example2o.jpg&t=1" width="128" height="96"></td> <td valign="top" width="524"> <p align="justify"><font size="3">My approach to co-discovery learning is as follows: I usually ask two or more users to perform a task together while I observe them. I encourage them to converse and interact with each other to create a “team spirit”. In some cases, I also allow note-taking (e.g., when the content is technical/complex). 
The technique can yield some really powerful results, as it is more natural for users to verbalise their thoughts during the test.</font></p></td></tr></tbody></table> <p> <p><font color="#0000ff" size="3"><strong>Participatory Design:</strong></font></p> <table border="0" cellspacing="0" cellpadding="2" width="606" sizcache="22" sizset="1"> <tbody sizcache="22" sizset="1"> <tr sizcache="22" sizset="1"> <td valign="top" width="80" sizcache="22" sizset="1"><a href="http://lh3.ggpht.com/_ozsnJVts-XM/TH1C9HwNnaI/AAAAAAAAAgg/N27rHM3vriY/s1600-h/image3.png"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="http://lh5.ggpht.com/_ozsnJVts-XM/TH1C-E9Xl7I/AAAAAAAAAgk/nco1D_swGCw/image_thumb1.png?imgmax=800" width="132" height="118"></a> </td> <td valign="top" width="524"> <p align="justify"><font size="3">Participatory design is about involving users in the design and decision-making process through an iterative cycle of design and evaluation. I usually conduct a <strong>short</strong> participatory design session prior to all of my usability evaluations. In these sessions, the usability issues of a prototype system are identified, and changes to address those issues are made.
The refined system is then used in the actual usability evaluation.</font></p></td></tr></tbody></table> <p> <hr> </p> <p></p> <p></p> <p align="justify"><font color="#000000" size="3"><strong>A4.0 Inspection Methods</strong></font></p> <p align="justify"><font color="#0000ff" size="3"><strong>Cognitive Walkthrough:</strong></font></p> <table border="0" cellspacing="0" cellpadding="2" width="606" sizcache="22" sizset="2"> <tbody sizcache="22" sizset="2"> <tr sizcache="22" sizset="2"> <td valign="top" width="80" sizcache="22" sizset="2"><img title="Cognitive Walkthrough" alt="Cognitive Walkthrough" align="left" src="http://www.ergosign.de/images/uploads/expert_review_cognitive%20walkthrough.png"></td> <td valign="top" width="524" sizcache="21" sizset="1"> <p align="justify" sizcache="21" sizset="1"><font size="3" sizcache="21" sizset="1">The <a class="zem_slink" title="Cognitive walkthrough" href="http://en.wikipedia.org/wiki/Cognitive_walkthrough" rel="wikipedia">cognitive walkthrough</a> is a method of “quick and dirty” usability testing requiring a number of <strong>expert</strong> evaluators. A list of tasks and the actions to complete them is created. The evaluators step through each task, action by action, noting down problems and difficulties as they go. I can use cognitive walkthroughs on a number of digital interfaces, ranging from web sites to complex authoring toolkits. 
</font></p></td></tr></tbody></table> <p align="justify"><font color="#0000ff" size="3"><strong>Heuristic Evaluation:</strong></font></p> <table border="0" cellspacing="0" cellpadding="2" width="606" sizcache="22" sizset="3"> <tbody sizcache="22" sizset="3"> <tr sizcache="22" sizset="3"> <td valign="top" width="80" sizcache="22" sizset="3"><img title="Heuristic Evaluation" alt="Heuristic Evaluation" align="left" src="http://www.ergosign.de/images/uploads/expert_review_heuristic_analysis.png"></td> <td valign="top" width="524" sizcache="21" sizset="2"> <p align="justify" sizcache="21" sizset="2"><font size="3" sizcache="21" sizset="2">Heuristic evaluation is about judging the compliance of an interface with a number of recognized usability principles (the heuristics). I have used this method extensively in the evaluation of <a href="http://virtual-guide-systems.blogspot.com/2010/07/project-management-e-learning-projects.html">e-learning prototypes</a> during my teaching at Middlesex University.
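To illustrate how the findings of a heuristic evaluation can be consolidated, here is a minimal Python sketch that ranks issues by mean severity across evaluators. The issues, heuristic names, and the 0–4 severity scale follow common practice; the data themselves are invented for illustration, not taken from any of my evaluations.

```python
# Consolidate heuristic-evaluation findings: each evaluator rates every
# issue on a 0-4 severity scale; issues are then ranked by mean severity.
from statistics import mean

# Hypothetical findings: issue -> (violated heuristic, per-evaluator severities)
findings = {
    "No feedback after form submit": ("Visibility of system status", [3, 4, 3]),
    "Jargon in error messages":      ("Match with the real world",   [2, 2, 3]),
    "No undo for deletions":         ("User control and freedom",    [4, 3, 4]),
}

# Sort most severe first, so the report leads with the worst problems
ranked = sorted(findings.items(), key=lambda kv: mean(kv[1][1]), reverse=True)
for issue, (heuristic, scores) in ranked:
    print(f"{mean(scores):.1f}  {issue}  [{heuristic}]")
```

Averaging across several evaluators is the point of the exercise: single-evaluator severity ratings are known to be unreliable.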
</font></p></td></tr></tbody></table> <p align="justify"> <hr> <p></p> <p align="justify"><font color="#000000" size="3"><strong>A5.0 Advanced Usability Techniques (in training)</strong></font></p> <p align="justify"><font color="#0000ff" size="3"><strong>Eye Tracking:</strong></font></p> <table border="0" cellspacing="0" cellpadding="2" width="606" sizcache="23" sizset="4"> <tbody sizcache="23" sizset="4"> <tr sizcache="23" sizset="4"> <td valign="top" width="80" sizcache="23" sizset="4"><a href="http://lh5.ggpht.com/_ozsnJVts-XM/TGsTfSU49aI/AAAAAAAAAfE/5w1EbGGd1UA/s1600-h/heatmap_lightbox%5B1%5D.jpg" sizcache="22" sizset="4"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="heatmap_lightbox" border="0" alt="heatmap_lightbox" src="http://lh5.ggpht.com/_ozsnJVts-XM/TGRLFB-kj5I/AAAAAAAAAfI/4BQoxCOq5_Y/heatmap_lightbox_thumb.jpg?imgmax=800" width="132" height="147"></a> </td> <td valign="top" width="524"> <p align="justify"><font size="3">Eye tracking is a technique that pinpoints where users look on a system and for how long. I am currently talking with Middlesex University in order to get <strong>training</strong> on using eye-tracking</font><font size="3"> as a usability testing technique. We plan to conduct a series of eye-tracking sessions in Middlesex's state-of-the-art usability labs, using the MGUIDE prototypes.
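As an illustration of what happens to raw gaze data before a heatmap is produced, here is a minimal Python sketch of dispersion-threshold (I-DT) fixation detection, a standard algorithm in eye-tracking analysis. The thresholds and gaze samples below are invented for illustration and are not tied to any particular tracker.

```python
# Dispersion-threshold (I-DT) fixation detection: a run of gaze samples is a
# fixation when it lasts at least min_dur samples and its combined
# horizontal + vertical spread stays within max_disp pixels.

def _dispersion(window):
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_disp=30, min_dur=5):
    fixations = []
    i = 0
    while i + min_dur <= len(samples):
        if _dispersion(samples[i:i + min_dur]) <= max_disp:
            j = i + min_dur
            # grow the window while the spread stays under the threshold
            while j < len(samples) and _dispersion(samples[i:j + 1]) <= max_disp:
                j += 1
            window = samples[i:j]
            cx = sum(p[0] for p in window) / len(window)
            cy = sum(p[1] for p in window) / len(window)
            fixations.append((cx, cy, len(window)))  # centroid + duration
            i = j
        else:
            i += 1
    return fixations

# Made-up gaze samples: a stable cluster around (100, 100), then a saccade away
gaze = [(100, 100), (102, 99), (101, 101), (99, 100), (100, 102),
        (300, 250), (305, 248)]
print(detect_fixations(gaze))
```

Fixation centroids and durations computed this way are exactly what heatmaps and gaze plots aggregate.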
</font></p></td></tr></tbody></table> <p><font color="#0000ff" size="3"><strong>Emotion Recognition & Eye Tracking</strong></font></p> <table border="0" cellspacing="0" cellpadding="2" width="606" sizcache="22" sizset="6"> <tbody sizcache="22" sizset="6"> <tr sizcache="22" sizset="6"> <td valign="top" width="80" sizcache="22" sizset="6"><font size="4" sizcache="22" sizset="6"><img src="http://www.science.uva.nl/research/publications/2007/ValentiVMDL2007/emotion2.bmp" width="128" height="113"></font></td> <td valign="top" width="524" sizcache="21" sizset="5"> <p align="justify" sizcache="21" sizset="5"><font size="4" sizcache="21" sizset="5"><font size="3">This is a technique I developed during the MGUIDE project. I discuss it in detail </font><a href="http://virtual-guide-systems.blogspot.com/2010/07/quantitative-user-research.html"><font size="3">here</font></a><font size="3">. It was developed with avatar-based interfaces/presentation systems in mind, but it is universal in nature. It is based on the hypothesis that the perceived accessibility of a system’s content is evident in the user's <strong>emotional expressions.</strong> The combined “Emotion Recognition and Eye-tracking” technique will be validated in a lab-based study that will be performed at Middlesex University.</font></font></p></td></tr></tbody></table> <p> <hr> <p></p> <p><font color="#000000" size="3"><strong>A6.0 Audits</strong></font></p> <p><font color="#0000ff" size="3"><strong>Accessibility Audit:</strong></font></p> <table border="0" cellspacing="0" cellpadding="2" width="606" sizcache="22" sizset="5"> <tbody sizcache="22" sizset="5"> <tr sizcache="22" sizset="5"> <td valign="top" width="80" sizcache="22" sizset="5"><img src="http://www2.warwick.ac.uk/services/skills/rssp/eportfolio/editing/content/accessibility/accessibility.gif" width="128" height="125"></td> <td valign="top" width="524" sizcache="21" sizset="4"> <p align="justify" sizcache="21" sizset="4"><font size="3" sizcache="21"
sizset="4">In an accessibility audit, an expert checks the compliance of a web site with established guidelines and metrics. The <acronym>W3C WAI</acronym> guidelines are the most widely used in accessibility audits. My approach to accessibility evaluation is framework-based (see <a href="http://virtual-guide-systems.blogspot.com/2010/07/accessibility-evaluation-methods.html">here</a>), but a) I haven’t applied my framework with disabled users and b) the W3C WAI heuristics are very well established. Although I have <strong>a good knowledge</strong> of the W3C WAI heuristics, I have never performed an accessibility audit before.</font></p></td></tr></tbody></table> <p align="justify"><font color="#000000" size="3"><strong></strong></font> </p> <p><strong><u><font size="4"></font></u></strong></p> <div style="margin-top: 10px; height: 15px" class="zemanta-pixie"><a class="zemanta-pixie-a" title="Enhanced by Zemanta" href="http://www.zemanta.com/"><img style="border-bottom-style: none; border-right-style: none; border-top-style: none; float: right; border-left-style: none" class="zemanta-pixie-img" alt="Enhanced by Zemanta" src="http://img.zemanta.com/zemified_e.png?x-id=3e694932-eed3-4cf4-bfd8-55ea825bf715"></a></div> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-72311015624319548322010-08-04T17:36:00.001-07:002010-08-31T11:44:35.543-07:00Transferable Research Skills (Part A)<p align="justify"><font size="3">MGUIDE is my most up-to-date research work, and I am very proud of what I have accomplished. However, I’ve become eager to <strong>outgrow</strong> the domain and transfer my research skills to the digital media world. I am interested in any form of digital interactive applications (websites, social networks, interactive TV, search engines, games, etc).
I am highly experienced in using the following techniques for user research:</font> </p> <p align="justify"><strong><font color="#000000" size="4"><u>A: User Research</u></font></strong></p> <p align="justify"><strong><font color="#000000" size="3">A1.0 Quantitative Research</font></strong></p> <p align="justify"><font color="#0000ff" size="3"><strong>Surveys/Questionnaires (Online and Offline):</strong></font></p> <table border="0" cellspacing="0" cellpadding="2" width="606"> <tbody> <tr> <td valign="top" width="80"><img src="http://www.hkadesigns.co.uk/websites/msc/reme/images/likert.gif" width="128" height="113"></td> <td valign="top" width="524"> <p align="justify"><font size="3">Post-test and pre-test questionnaires provide real insights into user needs, wants and thoughts. I use <strong>powerful</strong> statistics (e.g., <a href="http://en.wikipedia.org/wiki/Cronbach's_alpha">Cronbach's Alpha</a>) to ensure that the questionnaires I create are both reliable and valid. I can apply these skills to any domain with minimum adaptation time.</font></p></td></tr></tbody></table> <p align="justify"><font color="#0000ff" size="3"><strong>Performance Measures:</strong></font></p> <table border="0" cellspacing="0" cellpadding="2" width="606"> <tbody> <tr> <td valign="top" width="80"><img src="http://img.domaintools.com/blog/dt-improved-performance.jpg" width="128" height="113"></td> <td valign="top" width="524"> <p align="justify"><font size="3">Performance measures, such as the <strong>time</strong> to complete a task, the number of <strong>errors</strong> made, and <strong>scores</strong> in retention tests, provide a strong indication of how easily people can achieve tasks with a system. If these data are correlated with other objective or subjective measures, they can provide deeper user insights than surveys/questionnaires alone.
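As a concrete footnote to the Cronbach's Alpha check mentioned under Surveys/Questionnaires: the coefficient can be computed directly from raw item scores. A minimal Python sketch follows; the Likert responses are invented for illustration, not data from my studies.

```python
# Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance(totals)),
# where k is the number of items on the scale.

def cronbach_alpha(items):
    """items: one list of respondent scores per questionnaire item."""
    k = len(items)
    n = len(items[0])

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # total scale score per respondent
    totals = [sum(item[r] for item in items) for r in range(n)]
    return k / (k - 1) * (1 - sum(pvar(item) for item in items) / pvar(totals))

# Hypothetical 5-point Likert responses: 3 items rated by 4 respondents
scores = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]
print(round(cronbach_alpha(scores), 2))
```

Values above roughly 0.7 are conventionally taken as acceptable internal consistency for a questionnaire scale.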
</font></p></td></tr></tbody></table> <p align="justify"> <p align="justify"><strong><font color="#0000ff" size="3">Log File Analysis:</font></strong></p> <p></p> <table border="0" cellspacing="0" cellpadding="2" width="606"> <tbody> <tr> <td valign="top" width="80"><img src="http://www.ghacks.net/wp-content/uploads/2009/09/apache_log_analyzer-500x331.jpg" width="128" height="113"></td> <td valign="top" width="524"> <p align="justify"><font size="3">A log is a file that lists actions that have occurred. Both quantitative and qualitative data can be stored in a log file for later analysis. I use device/system logs to automatically collect data such as time to complete a task, items selected on the interface, keys pressed, etc.</font> </p></td></tr></tbody></table> <hr> <p><strong><font color="#000000" size="3">A2.0 Qualitative Research:</font></strong></p> <p align="justify"><font size="3"><strong><font color="#0000ff">Focus Groups:</font></strong></font></p> <table border="0" cellspacing="0" cellpadding="2" width="614"> <tbody> <tr> <td valign="top" width="80"><img src="http://image.shutterstock.com/display_pic_with_logo/211147/211147,1260220520,1/stock-vector-two-business-persons-speaking-to-each-other-speech-bubble-over-him-42421891.jpg" width="128" height="113"></td> <td valign="top" width="532"> <p align="justify"><font size="3">I mainly use focus groups for requirements gathering, either by introducing and discussing new ideas or by evaluating low-fidelity prototypes.
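Picking up the log-file analysis described above: time-on-task can be recovered by pairing start and end events per participant. A minimal Python sketch; the log format and event names are invented for illustration.

```python
# Recover time-on-task from a timestamped event log (made-up format:
# "HH:MM:SS participant event"), pairing each task_start with its task_end.
from datetime import datetime

log = """\
10:00:05 p01 task_start
10:00:41 p01 click nav_menu
10:01:17 p01 task_end
10:02:02 p02 task_start
10:03:30 p02 task_end
"""

starts, durations = {}, {}
for line in log.splitlines():
    ts, participant, event = line.split(maxsplit=2)
    t = datetime.strptime(ts, "%H:%M:%S")
    if event == "task_start":
        starts[participant] = t
    elif event == "task_end":
        durations[participant] = (t - starts.pop(participant)).total_seconds()

print(durations)  # seconds on task per participant
```

The same pass can be extended to count clicks or key presses between the two markers, giving the interface-interaction measures mentioned above from a single log.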
</font></p></td></tr></tbody></table> <p align="justify"><font color="#0000ff" size="3"><strong>Direct Observation:</strong></font></p> <table border="0" cellspacing="0" cellpadding="2" width="612"> <tbody> <tr> <td valign="top" width="80"><img src="http://www.experientia.com/blog/uploads/ethnographic_hits.jpg" width="128" height="113"></td> <td valign="top" width="530"> <p align="justify"><font size="3">One of the most common techniques for collecting data in an ethnographic study is direct, first-hand observation of participants. I am experienced in using direct observation for note taking in both indoor and outdoor environments. I find gaining an understanding of users through first-hand observation of their behaviour while they use a digital system genuinely exciting. During my work on MGUIDE, direct observation uncovered a number of interesting user insights that were then correlated with user views collected from the questionnaires. </font></p></td></tr></tbody></table> <p align="justify"><strong><font color="#0000ff" size="3"><font size="3">User Interviews & Contextual Inquiry:</font></font></strong></p> <table border="0" cellspacing="0" cellpadding="2" width="612"> <tbody> <tr> <td valign="top" width="80"><img src="http://t3.gstatic.com/images?q=tbn:g1kPfL8j1w3lCM:http://janeconstant.tripod.com/Interview.gif&t=1" width="128" height="113"></td> <td valign="top" width="530"> <p align="justify"><font size="3">Other common ethnographic techniques are user interviews and contextual inquiry. I make extensive use of open-ended interviews, i.e., interviews where all interviewees are asked the same open-ended questions, in both <strong>field</strong> and <strong>lab</strong> conditions</font><font size="3">. I like this particular style as it is faster and can be more easily analysed and correlated with other data.
</font></p></td></tr></tbody></table> <p align="justify"><strong><font color="#0000ff" size="3"><font size="3">Think Aloud Protocol:</font></font></strong></p> <table border="0" cellspacing="0" cellpadding="2" width="612"> <tbody> <tr> <td valign="top" width="80"><img src="http://www.chessmotifs.com/images/Chess-Designs/Thinking-Aloud.jpg" width="128" height="113"></td> <td valign="top" width="530"> <p align="justify"><font size="3">Think-aloud is a technique for gathering data during a usability testing session. It involves participants thinking aloud as they are performing a set of specified tasks. I have used think-aloud very successfully in <strong>navigation</strong> tasks, where participants had to verbalise their answers to navigation problems posed by two interactive systems.</font></p></td></tr></tbody></table> <hr> <p align="justify"><font color="#000000" size="3"><strong>A3.0 Quantitative & Qualitative Research</strong></font></p> <p align="justify"><font color="#0000ff" size="3"><strong>Usability and Accessibility testing:</strong></font></p> <table border="0" cellspacing="0" cellpadding="2" width="606"> <tbody> <tr> <td valign="top" width="80"><a href="http://lh4.ggpht.com/_ozsnJVts-XM/TFoHhCEROrI/AAAAAAAAAfQ/yptH6Bk1bRE/s1600-h/usabilitylabs%5B1%5D.png"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="usabilitylabs" border="0" alt="usabilitylabs" src="http://lh3.ggpht.com/_ozsnJVts-XM/TFoHhusOirI/AAAAAAAAAfU/YwiNGitPjRk/usabilitylabs_thumb.png?imgmax=800" width="132" height="117"></a> </td> <td valign="top" width="524"> <p align="justify"><font size="3">Lab-based and field-based testing are the most effective ways of revealing usability and accessibility issues. I am experienced in conducting and managing lab and field testing.
I use scenario-based quantitative and qualitative methods for my research.</font></p></td></tr></tbody></table> <p align="justify"><font color="#0000ff" size="3"><strong></strong></font></p> <p></p> <p></p> <p><font size="3"><strong>Continues in the next post</strong></font></p> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-52732008999320584762010-08-02T17:48:00.001-07:002010-08-05T09:48:21.616-07:00Universality of Research Methods & Techniques<p align="justify"><font size="3">I thought that the universality of methods for research was a fundamental fact of modern science. Isn’t it obvious that having successfully applied quantitative/qualitative research in one domain means that your skills can be applied to any other domain with minimal adaptation time? Is there a real difference between applying qualitative research in a complex avatar-system like MGUIDE and an e-commerce web site? For example, if you apply techniques like unstructured interviews, wouldn’t you follow the same principles to design the interviews in both domains? </font></p> <p align="justify"><font size="3">Or even using more complex techniques like eye tracking and emotion recognition, aren’t these domain-independent? </font><font size="3">Consider, for instance, my combined <a href="http://virtual-guide-systems.blogspot.com/2010/07/quantitative-user-research.html">emotion recognition + face detection technique for accessibility research</a>, described in the previous post. The technique was developed with avatar-based interfaces/presentation systems in mind. Adapting the technique to different domains is a matter of defining the aspects of the interface you wish to research. The quantitative data that you will collect are the same (emotion intensities, etc.); the qualitative data will of course differ because the interfaces are different.
In general, once you establish the objectives/goals of the research, </font><font size="3">deciding which techniques you will use (and modifying them if necessary to suit your needs) is easy, and the process is domain-independent.</font></p> <p align="center"><font size="3"><a href="http://www.flickr.com/photos/26898179@N03/2516939380"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="2516939380_79f2e5dcf6" border="0" alt="2516939380_79f2e5dcf6" src="http://lh4.ggpht.com/_ozsnJVts-XM/TFiWV6ZsIEI/AAAAAAAAAYs/Tzdib2gsPP0/2516939380_79f2e5dcf6%5B6%5D.jpg?imgmax=800" width="244" height="184"></a> <a href="http://www.pluggd.in/internet-ads-usability-eye-tracking-study-297/"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="eyetracking-study-heat-map" border="0" alt="eyetracking-study-heat-map" src="http://lh6.ggpht.com/_ozsnJVts-XM/TFiWwmsvKGI/AAAAAAAAAY0/k7xnNH2G0Fo/eyetracking-study-heat-map%5B3%5D.jpg?imgmax=800" width="244" height="213"></a> </font></p> <p align="center"><font size="3">Eye tracking used in completely different contexts: a) a 3D avatar-based world and b) a web page.</font></p> <p align="justify"><font size="3">I am not sure why some people insist otherwise and focus so much on the subject matter. I have to agree that having expertise in a certain area means that you can produce results fairly quickly. However, this is a process easily learnt. Is domain expertise the most important quality a user researcher should have?
Or should they perhaps have a solid research skill set to start from and the willingness <strong>to learn more about established techniques</strong> and <strong>explore new</strong> <strong>ones?</strong></font></p> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-84418123872853002422010-07-31T18:26:00.000-07:002010-08-16T15:39:35.493-07:00Video Games & Online Games<p align="justify"><font size="3">This post is an attempt to disambiguate the domain of virtual humans. Most people have never heard the term “Virtual Human” before, but they all play games (online or offline) and they all have interacted with some limited form of a VH on the web.</font></p> <p align="justify"><font size="3"><font size="3">Computer games (online and offline) are the <strong>closest</strong> thing to the domain of virtual humans. </font></font></p> <p align="justify"><font size="3"><strong><u>Online games (e-gaming)</u></strong></font></p> <p align="justify"><font size="3">You can argue that online games are much simpler than video games, but they are progressively getting much more complicated. As in video games, fully-fleshed avatars are widely used to immerse the player in the scenario. Below is an example of a poker game I found from a company called PKR. Notice the use of body language, facial expressions, etc., to create a fully realistic poker simulation.
</font></p> <div style="padding-bottom: 0px; padding-left: 0px; width: 425px; padding-right: 0px; display: block; float: none; margin-left: auto; margin-right: auto; padding-top: 0px" id="scid:5737277B-5D6D-4f48-ABFC-DD9C333F4C5D:a3a0d916-d56e-468e-bc82-f162e3f2da46" class="wlWriterEditableSmartContent"><div id="8ff34c84-a9ca-4a67-b650-2cf991fabf7a" style="margin: 0px; padding: 0px; display: inline;"><div><a href="http://www.youtube.com/watch?v=6-FOvJqU7PQ?fs=1&hl=en_GB" target="_new"><img src="http://lh5.ggpht.com/_ozsnJVts-XM/TGm9OeqyHgI/AAAAAAAAAe0/2x16oaNu_tg/videoc4ff3bd5458d%5B8%5D.jpg?imgmax=800" style="border-style: none" galleryimg="no" onload="var downlevelDiv = document.getElementById('8ff34c84-a9ca-4a67-b650-2cf991fabf7a'); downlevelDiv.innerHTML = "<div><object width=\"425\" height=\"355\"><param name=\"movie\" value=\"http://www.youtube.com/v/6-FOvJqU7PQ?fs=1&hl=en_GB&hl=en\"><\/param><embed src=\"http://www.youtube.com/v/6-FOvJqU7PQ?fs=1&hl=en_GB&hl=en\" type=\"application/x-shockwave-flash\" width=\"425\" height=\"355\"><\/embed><\/object><\/div>";" alt=""></a></div></div></div> <p align="justify"><font size="3"><strong><u>Video Games:</u></strong></font></p> <p align="justify"><font size="3">Below is a screenshot from my favourite game Mass Effect: </font></p> <p align="justify"><a href="http://lh5.ggpht.com/_ozsnJVts-XM/TFi-Vu2OVQI/AAAAAAAAAZY/6p7bTly2uKQ/s1600-h/ME2%20choices%5B2%5D.jpg"><img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; margin-left: auto; border-left-width: 0px; margin-right: auto" title="ME2%20choices" border="0" alt="ME2%20choices" src="http://lh6.ggpht.com/_ozsnJVts-XM/TFi-Vyo203I/AAAAAAAAAZc/t0Y_CnJnSxI/ME2%20choices_thumb.jpg?imgmax=800" width="244" height="184"></a><font size="3"> Source: </font><a href="http://www.jpbrown.co.uk/reviews.html"><font size="3">http://www.jpbrown.co.uk/reviews.html</font></a> </p> <p align="justify"><font size="3">Notice the use 
of dialogue wheels to simulate dialogues between the avatars. There is an excellent analysis of the particular style of conversation <a href="http://killspeak.lucasrizoli.com/tag/dialogue/">here</a>. </font></p> <p align="justify"><font size="3">However, in contrast to most current video games, virtual humans engage players in actual dialogue, using speech recognition, dialogue system technology, and emotional modelling to deepen the experience and make it more entertaining. Such technologies have only recently started to find their way into video games. Tom Clancy’s EndWar game uses speech recognition to allow users to give commands to their armies. </font></p> <p align="justify"><a href="http://lh6.ggpht.com/_ozsnJVts-XM/TFi81SREGEI/AAAAAAAAAZE/2Xlh22x5cak/s1600-h/endwar-beta-02%5B2%5D.jpg"><img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; margin-left: auto; border-left-width: 0px; margin-right: auto" title="endwar-beta-02" border="0" alt="endwar-beta-02" src="http://lh5.ggpht.com/_ozsnJVts-XM/TFi81hPy_QI/AAAAAAAAAZI/ojrIv1m_zsI/endwar-beta-02_thumb.jpg?imgmax=800" width="244" height="139"></a><font size="3"> Source: </font><a href="http://www.the-chiz.com/"><font size="3">http://www.the-chiz.com/</font></a></p> <p><font size="3">Some games go as far as using full natural language processing:</font></p> <div style="padding-bottom: 0px; padding-left: 0px; width: 425px; padding-right: 0px; display: block; float: none; margin-left: auto; margin-right: auto; padding-top: 0px" id="scid:5737277B-5D6D-4f48-ABFC-DD9C333F4C5D:173fb519-c77c-43fc-85e9-aad459603e0c" class="wlWriterEditableSmartContent"><div id="490b6288-23f6-438d-bb39-8ffdbeac61a1" style="margin: 0px; padding: 0px; display: inline;"><div><a href="http://www.youtube.com/watch?v=GmuLV9eMTkg" target="_new"><img src="http://lh4.ggpht.com/_ozsnJVts-XM/TFjDwPcXiGI/AAAAAAAAAe4/3tYESrF4GKA/video2aaac72c667e%5B8%5D.jpg?imgmax=800"
style="border-style: none" galleryimg="no" onload="var downlevelDiv = document.getElementById('490b6288-23f6-438d-bb39-8ffdbeac61a1'); downlevelDiv.innerHTML = "<div><object width=\"425\" height=\"355\"><param name=\"movie\" value=\"http://www.youtube.com/v/GmuLV9eMTkg&hl=en\"><\/param><embed src=\"http://www.youtube.com/v/GmuLV9eMTkg&hl=en\" type=\"application/x-shockwave-flash\" width=\"425\" height=\"355\"><\/embed><\/object><\/div>";" alt=""></a></div></div></div> <p align="justify"><font size="3"><strong><u>Virtual Humans on the Web:</u></strong></font></p> <p align="justify"><font size="3">There are a lot of very superficial virtual humans on the web. This is perhaps one of the main reasons that they have failed so far to become a mainstream interface. What virtual humans should be about is the whole package: emotion modelling, cognition, speech, dialogue, domain strategies and knowledge, gestures, etc. Avatars like Anna of IKEA are mere drawings with very limited dialogue abilities, and are simply there to create a more interesting FAQ (Frequently Asked Questions) section.
There is still some way to go before we see full-scale avatars on the web, but we will get there.</font></p> <p align="justify"> </p> <p align="justify"><a href="http://lh4.ggpht.com/_ozsnJVts-XM/TFi82jO1EuI/AAAAAAAAAZQ/820uRM0IHl8/s1600-h/2%5B2%5D.jpg"><img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; margin-left: auto; border-left-width: 0px; margin-right: auto" title="2" border="0" alt="2" src="http://lh5.ggpht.com/_ozsnJVts-XM/TFi826SpW9I/AAAAAAAAAZU/kdqt6W_JL_w/2_thumb.jpg?imgmax=800" width="244" height="143"></a> Source: <a href="http://www.ikeafans.com/forums/swaps-exchanges/1178-malm-bed-help.html">http://www.ikeafans.com/forums/swaps-exchanges/1178-malm-bed-help.html</a></p> <p align="justify"><font size="3"></font> </p> <p align="justify"><font size="3"><strong></strong></font></p> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-47779550140967330362010-07-30T18:28:00.000-07:002010-08-03T18:28:48.855-07:00E-Learning Prototype<p align="justify"><font size="3">Below is the prototype of an e-learning system that I was asked to do by a company. As I cannot draw, I decided to use Microsoft Word to communicate my ideas. There should be a good storyboarding tool out there that could help me to streamline the process. </font></p> <p align="justify"><font size="3">The design below is based on <strong>existing and proven</strong> technologies that can be easily integrated into existing e-learning platforms. </font><a href="http://www.codebaby.com/showcase/"><font size="3">Codebaby</font></a><font size="3">, a company in Canada, has been using avatars (such as those shown in my design) [1] in e-learning very successfully for several years.
The picture in the last screen of the design is a virtual </font><a href="http://horizonproject.wikispaces.com/Virtual+Worlds"><font size="3">classroom</font></a><font size="3"> [2] created in the popular <a href="http://secondlife.com/">Second Life</a> platform.</font></p> <div><object style="width:600px;height:425px" ><param name="movie" value="http://static.issuu.com/webembed/viewers/style1/v1/IssuuViewer.swf?mode=embed&layout=http%3A%2F%2Fskin.issuu.com%2Fv%2Flight%2Flayout.xml&showFlipBtn=true&documentId=100801014041-bd2d04a7170943c381fec4c652f0c273&docName=e-learning&username=giannis_&loadingInfoText=E-Learning%20prototype&et=1280627312562&er=98" /><param name="allowfullscreen" value="true" /><param name="menu" value="false" /><embed src="http://static.issuu.com/webembed/viewers/style1/v1/IssuuViewer.swf" type="application/x-shockwave-flash" allowfullscreen="true" menu="false" style="width:600px;height:425px" flashvars="mode=embed&layout=http%3A%2F%2Fskin.issuu.com%2Fv%2Flight%2Flayout.xml&showFlipBtn=true&documentId=100801014041-bd2d04a7170943c381fec4c652f0c273&docName=e-learning&username=giannis_&loadingInfoText=E-Learning%20prototype&et=1280627312562&er=98" /></object></div> <div align="justify"> </div> <div align="justify"><font size="3">Compare my solution with a “conventional” e-learning platform shown below. Although I do include several GUI (Graphical User Interface) elements in my work, it is obvious that : a) my interface is <strong>minimalistic</strong> with fewer elements on the screen. b) <strong>accessibility</strong> is greater, as instead of clicking on multiple links in order to accomplish tasks, you can simply “ask” the system using the most natural method you know - “natural language”. The benefits of avatar-assisted e-learning will become <strong>evident</strong> when the web progresses from its current form to Web 2.0 and ultimately to Web 3.0. 
For now, such solutions should at least be offered as an augmentation to “conventional” GUI-based interfaces. All companies want something more: something that adds easier access to module contents and the “WOW” factor to their products. They just don’t know what it is until you show it to them. </font></div> <div align="justify"><font size="3"></font> </div> <div align="justify"><a href="http://lh5.ggpht.com/_ozsnJVts-XM/TFi0dD_EBII/AAAAAAAAAY4/mDn5zQSGCAg/s1600-h/1%5B2%5D%5B3%5D.jpg"><img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; margin-left: auto; border-left-width: 0px; margin-right: auto" title="1[2]" border="0" alt="1[2]" src="http://lh3.ggpht.com/_ozsnJVts-XM/TFi0dukUKiI/AAAAAAAAAY8/bbHVP_2UJro/1%5B2%5D_thumb.jpg?imgmax=800" width="244" height="110"></a></div> <div align="justify"><font size="3">Source: <a href="http://www.accessplanit.com/accessplan_lms_screen_shots">http://www.accessplanit.com/accessplan_lms_screen_shots</a></font></div> <div align="justify"><font size="3"></font> </div> <div align="justify"><font size="3">Although the proposed design is based on <strong>mature</strong> and <strong>well-tested technologies,</strong> I can understand if someone wants a purely GUI solution. In fact, I would be more than happy to assist them. I have been working with GUI interfaces for several years, long before I developed an interest in avatar technologies. I developed my first e-learning tool back in 1998 (12 years ago). It was an educational CD-ROM about the robotic telescope platform of Bradford University.
</font></div> <div align="justify"> </div> <div align="justify"><a href="http://public.bay.livefilestore.com/y1pT4HihOmORFgF2Doy0BfECGXEF61618mP3dTZaaCuOuLHkMrZxsSejKF2MYJDdcgYZr2HrmYbzBs1ZSttzYFk-A/1.gif?psid=1"><img style="display: block; float: none; margin-left: auto; margin-right: auto" alt="" src="http://public.bay.livefilestore.com/y1pT4HihOmORFgF2Doy0BfECGSR6nu0bV4dul5sTBpXGBGs0fPL8O8NaIJ5GSbhuaOpc1NSAZrThfa2p20wmPjP4g/1.gif?psid=1" width="250" height="184"></a></div> <p><font size="3">[1] </font><a href="http://www.codebaby.com/showcase/"><font size="3">http://www.codebaby.com/showcase/</font></a></p> <p><font size="3">[2] </font><a href="http://horizonproject.wikispaces.com/Virtual+Worlds"><font size="3">http://horizonproject.wikispaces.com/Virtual+Worlds</font></a></p> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-34897847336644141792010-07-30T10:13:00.001-07:002010-08-05T06:57:32.453-07:00Heuristics vs. User Research<p align="justify"><font size="3">People keep asking me about the </font><a href="http://www.w3.org/TR/WCAG/"><font size="3">W3C</font></a><font size="3"> accessibility guidelines – a set of heuristics that should aid designers towards more accessible web sites. Of course these are not the only guidelines out there; the <a href="http://www.bbc.co.uk/guidelines/futuremedia/accessibility/">BBC</a> has its own accessibility guidelines, and there are several for web usability as well. Although I am familiar with the W3C guidelines, </font><font size="3">I didn’t use them in my MGUIDE work because I didn’t find them relevant. The reason is that the W3C guidelines are written specifically for web content and not for multimodal content. The research in the area of virtual humans provides more relevant heuristics, but there is still room for massive additions and improvements.
</font><font size="3">Instead of heuristic evaluation, I decided to build my own theoretical framework to guide my research efforts. The framework is based on the relevant literature in the area and on well-documented theories of human cognition. It provides all the necessary tools for iterative user testing and design refinement. </font></p> <p align="justify"><font size="3">There is no doubt that relying on user testing is costly and time-consuming. This becomes even more difficult when you have to deal with large groups of people, as I did in MGUIDE. However, the cost and time can be minimised with the use of proper tools. For example, the <a href="http://www.prendingerlab.net/globallab/about/">global lab project</a> has created a virtual lab (on the popular Second Life platform) in which projects are accessible to anybody, anytime, and from anywhere. New research methods like eye tracking and emotion recognition can reveal user insights with a relatively small group of people and with minimal effort. </font><font size="3">Perhaps soon, tools will include routines that calculate detailed statistics with minimal intervention. U</font><font size="3">ser testing definitely has some way to go before it becomes mainstream, but I am sure we will get there.</font></p> <p align="justify"><font size="3">Until then, inspection methods (e.g., cognitive walkthroughs, expert reviews of the design, heuristic evaluations, etc.) are used in place of user testing. In such a process, some high-level requirements are usually prototyped and then judged by an expert against established guidelines. A major problem with this approach, though, is that there are over 1,000 documented design guidelines [1]. How do you choose which one is appropriate for a specific context? It is my understanding that each institute/professional uses a set of best-practice guidelines, adapted from the relevant literature and from years of experience. 
However, even if these guidelines have worked in the past, it doesn’t mean they will work again. Technology is progressing extremely fast, and people become more knowledgeable and more accustomed to technology every single day. Therefore, even when inspection methods are used, some form of user testing is still necessary. A</font><font size="3"> focus group with a handful of users, for example, can provide enough insight to amend a design as necessary.</font></p> <p align="justify"><font size="3">[1] <a href="http://www.nngroup.com/events/tutorials/usability.html">http://www.nngroup.com/events/tutorials/usability.html</a></font> </p> <p align="justify"><font size="3"> </font></p> <p align="justify"><font size="3"></font> </p> <p align="justify"><font size="3"></font></p> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-17547309651824503722010-07-28T17:25:00.001-07:002010-08-05T06:58:11.746-07:00Emotion Recognition for Accessibility Research<p align="justify"><font size="3">There are a number of quantitative techniques that can be used in the user research of avatar-based interfaces. Apart from the “usual” techniques for gathering subjective impressions (through questionnaires, tests, etc.) and performance data, I also considered a more objective technique based on emotion recognition. In particular, I thought of evaluating the accessibility of the content presented by my systems through the use of emotion expression recognition. The main hypothesis is that the perceived accessibility of the systems' content is evident in the user's emotional expressions. </font></p> <p align="justify"><font size="3">If you think about it for a while, the human face is the strongest indicator of our cognitive state and hence of how we perceive a stimulus (information content, an image, etc.). 
Emotion measures (both quantitative and qualitative) can provide data that augment any traditional technique for accessibility evaluation (e.g., questionnaires, retention tests, etc.). For example, with careful logging you can see which part of your content is more confusing, which part requires the users to think harder, and so on. In addition to the qualitative data, the numeric intensities can be used for some very interesting statistical comparisons. </font><font size="3">Manual coding of the video streams is no longer necessary, as there are a number of tools that allow automated analysis of facial expressions. To my knowledge, the following tools are currently fully functional:</font></p> <p align="justify"><font size="3">1) <a href="http://www.visual-recognition.nl/eMotion.html">Visual Recognition</a> </font></p> <p align="justify"><a href="http://lh5.ggpht.com/_ozsnJVts-XM/TFDKb1DJKjI/AAAAAAAAAYI/NWNUkmDUaHk/s1600-h/ScreenShot12.png"><img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; margin-left: auto; border-left-width: 0px; margin-right: auto" title="ScreenShot (1)" border="0" alt="ScreenShot (1)" src="http://lh5.ggpht.com/_ozsnJVts-XM/TFDKcTM38iI/AAAAAAAAAYM/mR-AFWXLctY/ScreenShot1_thumb.png?imgmax=800" width="244" height="190"></a> </p> <p align="justify">2) <a href="http://www.iis.fraunhofer.de/EN/bf/bv/kognitiv/biom/dd.jsp">SHORE</a></p> <p align="justify"><img style="display: block; float: none; margin-left: auto; margin-right: auto" alt="Mimikanalyse" src="http://www.iis.fraunhofer.de/fhg/Images/gesichtsfeinanalyse_web_tcm278-138890.jpg"></p> <p align="justify"><font size="3">The idea is fully developed, and I am planning to release a paper on it very soon. Finally, if we combine this technique with eye-tracking we can reveal even more user insights about avatar-based interfaces. 
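</font></p> <p align="justify"><font size="3">To give a feel for the per-segment analysis I describe above, here is a minimal sketch in Python. The log format, segment names, emotion labels and intensity values are hypothetical stand-ins; real tools such as the ones listed below emit their own schemas:</font></p>

```python
"""Minimal sketch: aggregate logged emotion intensities per content
segment to see which part of the content is most confusing.
All data and labels here are illustrative, not real tool output."""

from collections import defaultdict
from statistics import mean

# (content_segment, emotion_label, intensity in 0..1) -- hypothetical log
log = [
    ("intro",   "confusion", 0.10),
    ("intro",   "happiness", 0.70),
    ("pricing", "confusion", 0.80),
    ("pricing", "confusion", 0.65),
    ("summary", "confusion", 0.20),
]

def mean_intensity(log, emotion):
    """Average intensity of one emotion for each content segment."""
    per_segment = defaultdict(list)
    for segment, label, intensity in log:
        if label == emotion:
            per_segment[segment].append(intensity)
    return {seg: mean(vals) for seg, vals in per_segment.items()}

confusion = mean_intensity(log, "confusion")
most_confusing = max(confusion, key=confusion.get)
print(most_confusing)  # pricing
```

The same per-segment means can then feed the statistical comparisons mentioned above (e.g., comparing conditions or prototypes on the same segment).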
We could try, for instance, to identify which aspects of the interface cause a particular facial expression (positive or negative). For example, one of the participants in my experiments mentioned that she couldn’t pay attention to the information provided by the system, because she was looking at the guide’s hair waving. To such a stimulus, humans usually show a calm expression. This comment is just an indication of the user insights that can be revealed if these techniques are successfully combined. </font></p> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-77637518233838996362010-07-27T15:31:00.001-07:002010-08-05T06:58:52.687-07:00Accessibility/Universal Access<p align="justify"><font size="3">I recently found a good resource [1] on accessibility from a company called </font><a href="http://www.cimex.com/digital/design-for-universal-access"><font size="3">Cimex</font></a><font size="3"> that says what most designers and UX specialists fail to see – when you design for accessibility you do not cater only for less able users. You are making sure that your content is open and accessible to a variety of people and machines, using whatever browser or method they choose. </font></p> <p align="justify"><font size="3">Now, if you cater for a variety of people of different physical, cognitive, emotional and language backgrounds, and for the methods they choose to use, you end up with Universal Access. </font></p> <p align="justify"><font size="3">With traditional interfaces it is difficult to achieve the goals of Universal Access. Virtual humans as interfaces hold a high potential for achieving the goals of UA, as the modalities (e.g., natural language processing, gestures, facial expressions and others) used in such interfaces are the ones our brains have been trained to understand over thousands of years. 
Virtual humans can speak several languages with minimal effort (see the </font><a href="http://www.charamel.com/en/showroom/livingkiosk.html"><font size="3">Charamel showroom</font></a><font size="3">). Their body and facial language can be adjusted easily to highlight the importance of a message. Sign language can be used to communicate with less able users (no other interface can currently accomplish that). Accurate simulation of interpersonal scenarios (e.g., a sales scenario) can guarantee that your message gets across as effectively as if a real person had spoken it. </font></p> <p align="justify"><font size="3">In my work I went as far as universal accessibility, comparing the effects of virtual human interfaces on the cognitive accessibility of information under simulated mobile conditions, using groups of users with different cultural and language backgrounds. In order to make the information easier to access, I used a variety of methods found in VH interfaces (e.g., natural language processing, gestures, interpersonal scenario simulation and others). By making the main functions of your system easier to access you ultimately make the interface easier to use, and hence it was natural to investigate some usability aspects as well (e.g., ease of use, effectiveness, efficiency, user satisfaction, etc.). These are all aspects of the user experience (UX), i.e., the quality of experience a person has when interacting with a product. I cannot release any more information at this stage, as the relevant publications have not yet appeared.</font></p> <p align="justify"><font size="3">In the future I believe that the existing technologies will merge into two mainstream platforms: a) robotic assistants on the one hand and b) software assistants/virtual human interfaces on the other. Accessing the services these systems will offer will be as easy as our daily interactions with other people. 
The barriers that exist today (cognitive, physical, etc.) will become a thing of the past. </font></p> <p align="justify"><font size="3"></font></p> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-63130027638109893172010-07-26T15:16:00.001-07:002010-08-05T07:03:33.684-07:00MGUIDE Development Process<p align="justify"><font size="3">I thought it would be a good idea to try to explain the methodologies followed in the development of the MGUIDE prototypes. Since the focus was mainly on the research outcomes, the development methodology followed was of little concern to the stakeholders involved. Trying to create interpersonal simulations like the ones found in real life is a process mostly compatible with a Scrum development methodology (shown below). I am planning to write a paper on the topic, and hence I will not say much in this post.</font></p> <p align="center"> <a href="http://lh5.ggpht.com/_ozsnJVts-XM/TE4JUkcw2-I/AAAAAAAAAYA/VSzBo41HqKI/s1600-h/800px-Scrum_process_svg%5B5%5D.png"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="800px-Scrum_process_svg" border="0" alt="800px-Scrum_process_svg" src="http://lh6.ggpht.com/_ozsnJVts-XM/TE4JVD4XmZI/AAAAAAAAAYE/ZNy3EX3bFBI/800px-Scrum_process_svg_thumb%5B1%5D.png?imgmax=800" width="244" height="124"></a> </p> <p align="left">Source: <a href="http://en.wikipedia.org/wiki/File:Scrum_process.svg">Wikipedia</a></p> <p align="justify"><font size="3">Gathering the requirements of the users can be done in a variety of ways. I followed a combined literature/user-evaluation approach. One of my earliest prototypes was developed using guidelines found in the literature. The prototype was then evaluated with actual users, and a set of new requirements was developed. These requirements are what Scrum refers to as the “product backlog”. 
Each sprint (in my case usually 1-3 months) a set of requirements was developed and tested, and then replaced by a new set. Simulating interpersonal scenarios lets you augment the product backlog with new requirements quite easily. Using research methods like direct observation and note taking, you can record the interactions found in the scenarios you want to simulate. My scenario was a guide agent, and hence I went on a number of guided tours, where I made some interesting observations. Most of my findings were actually implemented in the MGUIDE prototypes, but there are others that still remain in the “product backlog”. Of course, these requirements and the work done in MGUIDE are enough to inform artificial intelligence models of behaviour in order to create completely automated systems.</font></p> <p align="justify"><font size="3">This iterative process was then repeated prior to the actual user research stage, where the full set-up of the MGUIDE evaluation stage was tested. I used a small group of people who tried to find bugs in the software, problems with the data gathering tools, and so on. The problems were normally corrected on site and the process was repeated. Once I had ensured that all my instruments were free of problems, the official evaluation stage of the prototypes started.</font></p> <p align="justify"><font size="3">Closing this post, I must highlight the need for future research in gathering data about the different situations in which interpersonal scenarios occur. In reality, different situations produce different reactions in people, and this should be researched further. Only through detailed empirical experimentation can we ensure that future avatar-based systems will deliver superior user experiences. 
</font></p> <p align="justify"><font size="3"> </font></p> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-8476517529782381442010-07-23T08:40:00.001-07:002010-08-05T15:07:04.787-07:00Speech Recognition<p><font size="3">In order to successfully simulate an interpersonal scenario with a virtual human, you need speech recognition (in real life we speak to each other; we do not click buttons or type text). For this reason, I have been following closely the evolution of the speech recognition industry for some time now. </font></p> <p align="justify"><font size="3">During the MGUIDE project I successfully integrated speech recognition into one of my prototypes.</font> <font size="3">I used the Microsoft Speech Recognition Engine 6.1 (SAPI 5.1) with dictation grammars which I developed using the </font><a href="http://www.chant.net/Products/GrammarKit/Default.aspx"><font size="3">Chant GrammarKit</font></a><font size="3">, in pure XML. The grammars look like this: </font></p> <blockquote> <p><RULE name="Q1" TOPLEVEL="ACTIVE"><br><l><br><P><RULEREF NAME="want_phrases"/>to begin</P><br><P><RULEREF NAME="want_phrases"/>to start</P><br><P><RULEREF NAME="want_phrases"/>to start immediately</P><br></l><br><opt>the tour </opt><br><opt>the tour ?then</opt><br></RULE></p></blockquote> <p align="justify"><font size="3">I also voice-enabled the control of my system's interface, so if you said “Pause” the virtual guide would pause its presentation. </font><font size="3">I briefly tested both modes with one participant in the lab. In dictation mode, with just a couple of minutes of training, Microsoft’s engine performed with 100% accuracy within the constraints of the grammar. For completely unknown input, the engine performed with less than 40% accuracy. In CnC mode, the engine worked with 100% accuracy without any training. 
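</font></p> <p align="justify"><font size="3">To make the grammar-constrained idea concrete, here is a minimal sketch in Python (not the actual SAPI/Chant code) of matching an utterance against a small finite grammar shaped like the Q1 rule above. The contents of the want_phrases list are an assumption for illustration; only the verb and optional-tail alternatives come from the rule itself:</font></p>

```python
"""Toy command-and-control matcher: an utterance is accepted only if
the grammar can generate it, mirroring the Q1 rule shown above.
WANT_PHRASES is a hypothetical stand-in for the real RULEREF contents."""

from itertools import product

WANT_PHRASES = ["i want", "i would like", "i wish"]   # assumed, illustrative
VERBS = ["to begin", "to start", "to start immediately"]
OPTIONAL_TAILS = ["", "the tour", "the tour then"]    # <opt> alternatives

def expand_grammar():
    """Enumerate every utterance the Q1-style rule can generate."""
    phrases = set()
    for want, verb, tail in product(WANT_PHRASES, VERBS, OPTIONAL_TAILS):
        phrases.add(" ".join(part for part in (want, verb, tail) if part))
    return phrases

GRAMMAR = expand_grammar()

def recognise(utterance: str) -> bool:
    """CnC-style recognition: exact match within the grammar's constraints."""
    return utterance.lower().strip() in GRAMMAR

print(recognise("I want to start the tour"))       # True
print(recognise("Please show me the castle map"))  # False
```

This is why in-grammar input can reach near-perfect accuracy while out-of-grammar input fails: the matcher only ever chooses among the enumerated phrases.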
Of course, </font><font size="3">SAPI 5.4 in Windows 7 offers much better recognition rates in both dictation and CnC modes. I haven’t tried SAPI 5.4, but it is in my plans for the future. </font><font size="3">I think that true speaker-independent (i.e., without training) recognition in indoor environments is only five years away, at least for the English language. </font> <p align="justify"><font size="3">In mobile environments, <a href="http://siri.com/">Siri</a> appears to be the only solution out there that realises the idea of a virtual assistant on the go using speech recognition. Siri uses dynamic grammar recognition, similar to my approach. If you say something within the constraints of the grammar, the accuracy of recognition reaches 100%. However, as with my prototype, if you say something outside the grammar files the recognition results can be really funny.</font> <p align="justify"><font size="3"><strong>Statement to Siri:</strong> Tell my husband I’ll be late</font> <p align="justify"><font size="3"><strong>Reply:</strong> Tell my Husband Ovulate (he he he)</font> <p align="justify"><a href="http://lh3.ggpht.com/_ozsnJVts-XM/TEm31VvsR8I/AAAAAAAAAXw/upS-aivrszQ/s1600-h/ASR%5B2%5D.jpg"><img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; margin-left: auto; border-left-width: 0px; margin-right: auto" title="ASR" border="0" alt="ASR" src="http://lh3.ggpht.com/_ozsnJVts-XM/TEm31-WOnJI/AAAAAAAAAX0/-0oofV_w8Y4/ASR_thumb.jpg?imgmax=800" width="164" height="244"></a> <p align="justify"><font size="3">Source: <a href="http://siri.com/v2/assets/Web3.0Jan2010.pdf">http://siri.com/v2/assets/Web3.0Jan2010.pdf</a></font> <p align="justify"><font size="3"><strong>Terminology:</strong></font> <p align="justify"><font size="3"><strong>Dictation Speech Recognition:</strong> The type of speech recognition in which the computer tries to translate what you say into text</font> <p 
align="justify"><font size="3"><strong>Command and Control mode (CnC):</strong> This type of speech recognition is used to control applications</font> <p align="justify"><font size="3"></font> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-50046781452771504312010-07-23T03:22:00.001-07:002010-08-05T15:51:05.546-07:00MGUIDE Project Funding<p align="justify"><font size="3">As people keep asking me about the funding of the MGUIDE project, I thought to post this in an attempt to clarify the situation further. The MGUIDE was a large and very sophisticated project, and money came from a variety of sources. The project started in 2007, and until 2008 Middlesex University was the main funding body and my last official employer. The package from Middlesex University covered my project expenses for that year and required me to perform at most 15 hours of teaching per week. Two other universities and six companies also provided support in the form of know-how, and funding for tools and hardware. From 2008 until June 2010, I was able to secure funding from an angel investor and, thankfully, the continued support of the companies and universities. The idea with MGUIDE was, and still remains, to develop a commercial product out of it. However, because of the bad economic climate, my investor decided not to proceed any further. I still hope that this work will appeal to a company and that I will see MGUIDE become an application for the iPad or another tablet-based computer system.</font></p> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-49807551163782847352010-07-21T16:29:00.001-07:002010-08-04T03:24:47.619-07:00Project Management - E-Learning Projects<p align="justify"><font size="3">This post is not related to the MGUIDE project, but to the work I did at Middlesex University. 
Most of the modules I taught there were project-based. Usually I had to guide several groups of students (20-30 students) through the design and development of projects. One particular project involved the design and implementation of e-learning games for autistic children. Each of my students was given a case study describing the requirements of particular autistic children (as captured from the children's teachers – for instance, that a child needs help in understanding emotional expressions), and had to produce a game under my guidance. The game was then sent to the relevant school for full-scale evaluation. Each semester I</font><font size="3"> usually ended up marking 100-200 games, with at least 80% of them being top class. Below is an example of the projects produced under my guidance. All material is copyrighted by Middlesex University, so please ask before you copy anything: </font></p> <p align="justify"><font size="3">All multimedia elements (including designs) were produced by my students using Adobe Photoshop. 
I discussed in detail in class the tools needed for the game development, along with best-practice techniques.</font></p>Copyright by Middlesex University – Please do not copy <div style="padding-bottom: 0px; padding-left: 0px; width: 400px; padding-right: 0px; display: block; float: none; margin-left: auto; margin-right: auto; padding-top: 0px" id="scid:66721397-FF69-4ca6-AEC4-17E6B3208830:13ee249c-b746-484d-888b-35e8cd7fd255" class="wlWriterEditableSmartContent"><a style="border:0px" href="http://cid-ae43d0b7ceb7edf9.skydrive.live.com/redir.aspx?page=browse&resid=AE43D0B7CEB7EDF9!7083&type=5"><img style="border:0px" alt="View E-Learning" src="http://lh4.ggpht.com/_ozsnJVts-XM/TFk_7mqUGKI/AAAAAAAAAac/_Zih27AvETY/InlineRepresentation550ddfa1-5810-436f-83e8-c74f6e9054ba%5B3%5D.jpg?imgmax=800" /></a><div style="width:400px;text-align:right;" ><a href="http://cid-ae43d0b7ceb7edf9.skydrive.live.com/redir.aspx?page=browse&resid=AE43D0B7CEB7EDF9!7083&type=5">View Full Album</a></div></div> <p align="justify"><font size="3">Each game was evaluated in class (by me and the students). The games were then sent to the schools for formal evaluation by the children and their teachers. Below is a sample of the heuristic evaluation performed in class:</font> </p> <blockquote> <table border="1" cellspacing="0" cellpadding="0"> <tbody> <tr> <td valign="top" width="181"> <p><b>Evaluation comments</b></p></td> <td valign="top" width="174"> <p><b>Negative Comments</b></p></td> <td valign="top" width="216"> <p><b>Positive Comments</b></p></td></tr> <tr> <td valign="top" width="181"> <p><b>Background</b> <p>First scene using a suitable background and clearly stating your own title for the topic and clear instructions</p></td> <td valign="top" width="174"> </td> <td valign="top" width="216"> </td></tr> <tr> <td valign="top" width="181"> <p><b>Text</b> <p>A variety and clear use of Text (Spelling and Grammar?) 
Too small – Too large – not clear- inappropriate words used </p></td> <td valign="top" width="174"> </td> <td valign="top" width="216"> </td></tr></tbody></table></blockquote> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-85062946063020517512010-07-19T16:50:00.001-07:002010-08-12T04:47:31.863-07:00Art Assets<p align="justify"><font size="3">Below are samples from the art work I completed in the MGUIDE project. Although I have several years of experience in designing using Adobe Photoshop, I do not consider myself a designer. Design is interesting, but I prefer programming and user-based research. However, if a project requires me to produce art assets I am perfectly capable of accomplishing that as well. </font></p> <div style="padding-bottom: 0px; padding-left: 0px; width: 416px; padding-right: 0px; display: block; float: none; margin-left: auto; margin-right: auto; padding-top: 0px" id="scid:66721397-FF69-4ca6-AEC4-17E6B3208830:a8bcebaf-c1fe-41f5-a990-dc1b74b4bfb6" class="wlWriterEditableSmartContent"><a style="border:0px" href="http://cid-ae43d0b7ceb7edf9.skydrive.live.com/redir.aspx?page=browse&resid=AE43D0B7CEB7EDF9!7055&type=5"><img style="border:0px" alt="View Art Assets" src="http://lh5.ggpht.com/_ozsnJVts-XM/TETkwy9FYlI/AAAAAAAAAUw/zmLKtH5zEKk/InlineRepresentation8d0f09a7-c84e-4b68-88c7-6e993fa14a18%5B2%5D.jpg?imgmax=800" /></a><div style="width:400px;text-align:right;" ><a href="http://cid-ae43d0b7ceb7edf9.skydrive.live.com/redir.aspx?page=browse&resid=AE43D0B7CEB7EDF9!7055&type=5">View Full Album</a></div></div> <p> </p> <div style="padding-bottom: 0px; padding-left: 0px; width: 416px; padding-right: 0px; display: block; float: none; margin-left: auto; margin-right: auto; padding-top: 0px" id="scid:66721397-FF69-4ca6-AEC4-17E6B3208830:d80c54fb-37d2-4571-be8c-1ea30e4bddc6" class="wlWriterEditableSmartContent"><a style="border:0px" 
href="http://cid-ae43d0b7ceb7edf9.skydrive.live.com/redir.aspx?page=browse&resid=AE43D0B7CEB7EDF9!7071&type=5"><img style="border:0px" alt="View Buttons" src="http://lh5.ggpht.com/_ozsnJVts-XM/TGPfUpiP95I/AAAAAAAAAdg/PF-i0nH-JTU/InlineRepresentation82ddc23c-0fc1-4870-be18-c1a5956514f8.jpg?imgmax=800" /></a><div style="width:400px;text-align:right;" ><a href="http://cid-ae43d0b7ceb7edf9.skydrive.live.com/redir.aspx?page=browse&resid=AE43D0B7CEB7EDF9!7071&type=5">View Full Album</a></div></div> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-1018247992292006392010-07-02T09:45:00.000-07:002010-08-05T06:59:35.114-07:00Experiment 4 setup<p align="justify"><span style="font-size: small"><font size="3">Due to demand, I decided to provide some information on the set-up of my experimental work during the evaluation stage of MGUIDE. The information below is the briefing participants had to read for experiment 4. Please note, that the main technique for data collection in experiment 4, is<strong> think aloud protocol.</strong> I conducted the testing with two user groups of 6 participants.</font></span></p> <blockquote> <p align="justify"><font size="2">The purpose of this experiment is to investigate the possible effects of two mobile systems for path finding of variable complexity, on your ability to find your way in the castle. You will have to use the systems to navigate along two different routes visiting a number of landmarks in turn (10 to maximum 18), using the system A on one, and the system B on the other. The total duration of the experiment doesn’t exceed 20 minutes per route. For the purpose of the experiment I have created two video applications representing in detail each route of the castle. 
At each video clip you will hear the question “What would you do at this particular point, if you were in the castle?” You must answer the question based on the visual (i.e., gestures and landmarks) and/or audio instructions delivered by the system. <br></font></p> <p align="justify"><font size="2">For example: </font></p> <p align="justify"><font size="2">Given this instruction: “From where you are, if you look on your right, you will see two chimneys. Opposite the chimneys there is a path that leads to another square. Please follow this path until you come across a house with a black front-door!” </font></p> <p><font size="2"><br>And this clip: <br></font><a href="http://lh3.ggpht.com/_ozsnJVts-XM/TDyYkBMO7tI/AAAAAAAAARw/F8CXovnjR5I/s1600-h/clip_image002%5B3%5D.jpg"><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="clip_image002" border="0" alt="clip_image002" src="http://lh4.ggpht.com/_ozsnJVts-XM/TDyYku5xX1I/AAAAAAAAAR0/EqXhHSFTPJQ/clip_image002_thumb.jpg?imgmax=800" width="244" height="145"></a> </p></blockquote> <blockquote> <p></p> <p align="justify"><font size="2">You will have to answer: “I will follow the path on the right of the tree until I see the house with the black front-door”. 
The next video will show the result of this action (i.e., that you have moved towards the house with the black door), and will pose a new navigational challenge.</font></p></blockquote> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-69991389414622026552010-06-30T18:08:00.000-07:002010-08-05T07:00:06.981-07:00User Research – General Setup<p align="justify"><font size="3">User research in the MGUIDE project can be divided into the following stages: </font></p> <ul> <li> <div align="justify"><font size="3">Requirements gathering and specification: A prototype system was constructed based on recommendations from the literature on animated agents. The prototype was evaluated in the actual <a href="http://www.monemvasia.com/">castle of Monemvasia</a> with a number of participants. You can find more details <a href="http://virtual-guide-systems.blogspot.com/search/label/Early_Years">here</a>. The lessons learned were presented during the M.Phil presentation and influenced the design of the final five (5) prototypes.</font></div></li></ul> <p align="justify"><font size="3"></font> </p> <ul> <li> <div align="justify"><font size="3">Design and prototypes: Based on requirements gathered from the pilot in the castle of Monemvasia, five (5) prototypes were developed. A number of novelties were achieved during the development of the prototypes (e.g., an algorithm for natural language understanding, the design of Talos – a toolkit for system prototyping and research, and others). Although the initial idea was to continue the evaluation in the castle, due to lack of resources and time it was decided to simulate the conditions in the lab. </font><font size="3">There is an ongoing debate on whether mobile applications should be tested in the lab or in the field. For instance, in 2009, 70% of the developed systems were evaluated under lab conditions using a variety of techniques. 
</font></div></li></ul> <p align="justify"><font size="3"></font> </p> <ul> <li> <div align="justify"><font size="3">Evaluation: The evaluation, carried out in Greece and the UK, followed the same setup. I used detailed panoramic photography and high-definition video clips to represent in high detail all locations and attractions of the castle. The lab was a simple room in which each user participated individually. In general, the approach was successful, as participants could follow the same routes and watch the same presentations about the locations of the castle from the comfort of their chairs.</font></div><font size="3"></font></li></ul> <p align="justify"><font size="3">Examples of the panoramas used in the evaluations are shown below: </font></p> <p align="justify"><font size="3"></font> </p> <div style="padding-bottom: 0px; padding-left: 0px; width: 416px; padding-right: 0px; display: block; float: none; margin-left: auto; margin-right: auto; padding-top: 0px" id="scid:66721397-FF69-4ca6-AEC4-17E6B3208830:d09a5840-3066-46fc-b5d7-e7390be25750" class="wlWriterEditableSmartContent"><a style="border:0px" href="http://cid-ae43d0b7ceb7edf9.skydrive.live.com/redir.aspx?page=browse&resid=AE43D0B7CEB7EDF9!7052&type=5"><img style="border:0px" alt="View Panoramas" src="http://lh4.ggpht.com/_ozsnJVts-XM/TEOnSuwK4BI/AAAAAAAAAag/CBtVbxi9zMM/InlineRepresentationa938681c-2815-4f2b-a3b2-46689142ef05.jpg?imgmax=800" /></a><div style="width:400px;text-align:right;" ><a href="http://cid-ae43d0b7ceb7edf9.skydrive.live.com/redir.aspx?page=browse&resid=AE43D0B7CEB7EDF9!7052&type=5">View Full Album</a></div></div> <p></p> <p align="justify"> </p> <div align="justify"><font size="3"></font> </div> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-31960423367968576172010-06-30T16:46:00.000-07:002010-08-17T13:29:49.668-07:00User Research Stage<div align="justify"><span style="font-size: small"><span 
style="font-size: small"><font size="3">I recently completed the evaluation stage of all five (5) prototypes of the MGUIDE project. The purpose of the studies was to provide accessibility and usability insights into the design process of animated agents for mobile applications, as well as to support some understanding of the psychology of the users of such systems. One hundred (100) real-world users participated. My role included the following:</font></span></span></div> <div align="justify"><br><img align="middle" src="http://www.dotnetscraps.com/samples/bullets/001.gif"> <font size="3"> Design and apply various </font><a name="OLE_LINK11"><font size="3">user</font></a><a name="OLE_LINK6"></a><a name="OLE_LINK5"><font size="3">-centred design </font></a><a name="OLE_LINK4"></a><font size="3">(UCD) methods for research, including: structured questionnaires, retention tests, interviews, think-aloud protocols, observations, performance measures, scenario-based usability testing, to name but a few. 
</font></div> <div align="justify"><font size="3"></font> </div> <div align="justify"><font size="3">Note: For a complete list of the research techniques I excel at, see <a href="http://virtual-guide-systems.blogspot.com/2010/08/transferable-research-skills.html">here</a> (part A) and <a href="http://virtual-guide-systems.blogspot.com/2010/08/transferable-research-skills.html">here</a> (part B).</font></div> <div align="justify"> </div> <div align="justify"><img align="middle" src="http://www.dotnetscraps.com/samples/bullets/001.gif"> <font size="3">Design, organize, and manage five user research studies (with 100 real-world users) across two countries (Greece and the UK)</font></div> <div align="justify"> </div> <div align="justify"><img align="middle" src="http://www.dotnetscraps.com/samples/bullets/001.gif"> <font size="3">Continuously refine the prototypes based on feedback collected from users</font></div> <div align="justify"> </div> <div align="justify"><img align="middle" src="http://www.dotnetscraps.com/samples/bullets/001.gif"> <font size="3">Regularly update all project sponsors and stakeholders on the progress made</font></div> <div align="justify"> </div> <div align="justify"><img align="middle" src="http://www.dotnetscraps.com/samples/bullets/001.gif"> <font size="3">Analyze the quantitative data using a range of statistical tests</font></div> <div align="justify"> </div> <div align="justify"><img align="middle" src="http://www.dotnetscraps.com/samples/bullets/001.gif"> <font size="3">Rapidly analyze the qualitative data</font></div> <div align="justify"> </div> <div align="justify"><img align="middle" src="http://www.dotnetscraps.com/samples/bullets/001.gif"> <font size="3">Develop recommendations/requirements to inform the design of animated agents for mobile applications </font></div> <div align="justify"> </div> <div align="justify"><img align="middle" src="http://www.dotnetscraps.com/samples/bullets/001.gif"> <font size="3">Produce 
documentation and presentations of findings</font></div> <div align="justify"> </div> <div align="justify"><img align="middle" src="http://www.dotnetscraps.com/samples/bullets/001.gif"><font size="3">Write papers for publication in relevant conference proceedings and academic journals.</font></div> <div align="justify"><span style="font-size: small"><span style="font-size: small"><font size="3"></font></span></span></div> <div align="justify"><span style="font-size: small"><span style="font-size: small"><font size="3"></font></span></span> </div> <div align="justify"><span style="font-size: small"><span style="font-size: small"><font size="3">As no publications have been made yet, I cannot provide any further details on my experimental work. Once the publication process is complete, I will post further details on this important stage of my work.</font></span></span></div> <div align="justify"><span style="font-size: small"><span style="font-size: small"><font size="3"></font></span></span> </div> <div align="justify"><font size="3"></font></div> <div align="justify"><span style="font-size: small"><span style="font-size: small"><font size="3">I want to thank all the students/family/friends/other people who honoured my work with their presence. Their contribution was invaluable. </font></span></span></div> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-46845692880871124832010-06-09T12:45:00.000-07:002010-08-17T18:13:30.925-07:00Complex System Architecture 2<p align="justify"><font size="3">Below is the final architecture of the Talos toolkit - my authoring toolkit for the rapid prototyping of virtual guide systems and for research. The design is complete, with a number of modules that require detailed explanation. Some of these ideas were implemented in MGUIDE, but implementing the full toolkit is a task best suited for a team. 
</font></p> <div align="center"><object style="width:420px;height:607px" ><param name="movie" value="http://static.issuu.com/webembed/viewers/style1/v1/IssuuViewer.swf?mode=embed&viewMode=presentation&layout=http%3A%2F%2Fskin.issuu.com%2Fv%2Flight%2Flayout.xml&showFlipBtn=true&documentId=100711180139-207ea277bc924eae80f7cc638cfce74f&docName=talos_toolkit&username=giannis_&loadingInfoText=Talos_toolkit&et=1278873556573&er=29" /><param name="allowfullscreen" value="true" /><param name="menu" value="false" /><embed src="http://static.issuu.com/webembed/viewers/style1/v1/IssuuViewer.swf" type="application/x-shockwave-flash" allowfullscreen="true" menu="false" style="width:420px;height:607px" flashvars="mode=embed&viewMode=presentation&layout=http%3A%2F%2Fskin.issuu.com%2Fv%2Flight%2Flayout.xml&showFlipBtn=true&documentId=100711180139-207ea277bc924eae80f7cc638cfce74f&docName=talos_toolkit&username=giannis_&loadingInfoText=Talos_toolkit&et=1278873556573&er=29" /></object></div> <div align="center"> </div> <div align="justify"><font size="3">To my knowledge, the only other toolkit of this kind (free of charge for research) is the </font><font size="3"> <a href="http://vhtoolkit.ict.usc.edu/index.php/Main_Page">ICT Virtual Human Toolkit</a>. I have performed a cognitive walkthrough of the ICT toolkit and fed the results into the design of Talos. I discuss my findings in a paper that will be published soon.</font></div> <div align="justify"><font size="3"></font> </div> <div align="justify"><font size="3"><strong>Please note:</strong> For obvious reasons I cannot provide any documentation on the workflow of the Talos toolkit. 
The purpose of the diagram is ONLY to illustrate the complexity of my work.</font></div> <div align="justify"> </div> <div> </div> <div> </div> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-36555084094075437132010-06-06T18:02:00.000-07:002010-08-05T09:52:06.143-07:00Complex System Architecture<p align="justify"><font size="3">Below you will find some samples of the complex system and subsystem architecture I produced for the MGUIDE project: a) the general system overview of all prototypes, and b) the workflow of MGUIDE’s Natural Language Understanding (NLU) module. For obvious reasons, I cannot provide any documentation on the workflow of the two architectures below. The purpose of the diagrams is ONLY to illustrate the complexity of my work.</font></p> <div align="center"><object style="width:420px;height:566px" ><param name="movie" value="http://static.issuu.com/webembed/viewers/style1/v1/IssuuViewer.swf?mode=embed&viewMode=presentation&layout=http%3A%2F%2Fskin.issuu.com%2Fv%2Flight%2Flayout.xml&showFlipBtn=true&documentId=100711180131-288fc52e6da24415a4c6c11a101c7f0b&docName=systems-overview&username=giannis_&loadingInfoText=Systems%20Overview&et=1278871600398&er=42" /><param name="allowfullscreen" value="true" /><param name="menu" value="false" /><embed src="http://static.issuu.com/webembed/viewers/style1/v1/IssuuViewer.swf" type="application/x-shockwave-flash" allowfullscreen="true" menu="false" style="width:420px;height:566px" flashvars="mode=embed&viewMode=presentation&layout=http%3A%2F%2Fskin.issuu.com%2Fv%2Flight%2Flayout.xml&showFlipBtn=true&documentId=100711180131-288fc52e6da24415a4c6c11a101c7f0b&docName=systems-overview&username=giannis_&loadingInfoText=Systems%20Overview&et=1278871600398&er=42" /></object></div> <div> </div> <div align="center"> </div> <div align="center"><object style="width:420px;height:297px" ><param name="movie" 
value="http://static.issuu.com/webembed/viewers/style1/v1/IssuuViewer.swf?mode=embed&viewMode=presentation&layout=http%3A%2F%2Fskin.issuu.com%2Fv%2Flight%2Flayout.xml&showFlipBtn=true&documentId=100711212747-04167b56e3774de18c4e8acb04ec5d9d&docName=language_processing&username=giannis_&loadingInfoText=Language%20Processing&et=1278883742964&er=78" /><param name="allowfullscreen" value="true" /><param name="menu" value="false" /><embed src="http://static.issuu.com/webembed/viewers/style1/v1/IssuuViewer.swf" type="application/x-shockwave-flash" allowfullscreen="true" menu="false" style="width:420px;height:297px" flashvars="mode=embed&viewMode=presentation&layout=http%3A%2F%2Fskin.issuu.com%2Fv%2Flight%2Flayout.xml&showFlipBtn=true&documentId=100711212747-04167b56e3774de18c4e8acb04ec5d9d&docName=language_processing&username=giannis_&loadingInfoText=Language%20Processing&et=1278883742964&er=78" /></object></div> <div align="justify"> </div> <div align="justify"><font size="3">Apart from the semantic analyser (shown in the first diagram as experimental), everything else was fully prototyped and evaluated in the user-research stage of the project. </font></div> <p align="justify"><font size="3"></font></p> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0tag:blogger.com,1999:blog-1676610130059575298.post-23412638334542236672010-06-04T18:04:00.000-07:002010-08-09T11:20:55.928-07:00Rapid Prototyping, Which tool?<p align="justify"><font size="3">Below is an experiment to simulate one of the MGUIDE prototypes using Axure RP Pro. I wanted to see whether Axure could be used for such applications, and how quickly I could implement a working prototype.</font><font size="3"> The dialogue window is clickable, as are the buttons. </font><font size="3">All standard GUI (Graphical User Interface) elements of the MGUIDE interface can be quickly and easily implemented using this tool. 
</font><font size="3">The environment is drag-and-drop, and supports adding interactions through visual elements (e.g., if-then-else statements with a few clicks). However, the absence of an internal scripting language means that I am limited to the interactional elements the program has already built in. I can’t build my own interactions, such as the dialogue branching I easily implemented using VB.NET. Neither can I import my 3D avatar, for, let’s say, a formal evaluation. In conclusion, Axure appears to be a tool best used very early in the product cycle, mostly for low-fidelity prototypes. Later on, you will need to build something more complex in order to communicate your ideas more effectively. This is where I come in! </font><font size="3">I can build highly complex prototypes in the same amount of time using VB.NET/Adobe Director, with just a few lines of code. However, if a stakeholder is happy with this tool, I can be happy as well. The tool can be used by virtually <strong>anybody</strong>, myself included.</font><font size="3"> </font></p> <p align="justify"><em><font size="3">System Preferences Wireframe</font></em></p> <p align="justify"><a href="http://lh4.ggpht.com/_ozsnJVts-XM/TGBG_2yQZRI/AAAAAAAAAck/xQzFcOOy7QE/s1600-h/Prototype_page0%5B14%5D.jpg"><img style="border-bottom: 0px; border-left: 0px; display: block; float: none; margin-left: auto; border-top: 0px; margin-right: auto; border-right: 0px" title="Prototype_page0" border="0" alt="Prototype_page0" src="http://lh5.ggpht.com/_ozsnJVts-XM/TF9c9iQzgEI/AAAAAAAAAco/fa_tIIUnTlY/Prototype_page0_thumb%5B13%5D.jpg?imgmax=800" width="473" height="263"></a> </p> <p align="justify"><em><font size="3">Main Screen Wireframe</font></em></p> <p align="justify"><a href="http://lh3.ggpht.com/_ozsnJVts-XM/TF9c-NKnnaI/AAAAAAAAAcs/JtbXE7_eDbM/s1600-h/Prototype_page1%5B1%5D.jpg"><img style="border-bottom: 0px; border-left: 0px; display: block; float: none; margin-left: auto; border-top: 0px; 
margin-right: auto; border-right: 0px" title="Prototype_page1" border="0" alt="Prototype_page1" src="http://lh5.ggpht.com/_ozsnJVts-XM/TF9c-1DOn4I/AAAAAAAAAcw/mWSMXyJMofA/Prototype_page1_thumb.jpg?imgmax=800" width="473" height="263"></a></p> giannishttp://www.blogger.com/profile/10498584304087721329noreply@blogger.com0
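
The Axure post above points to dialogue branching as the kind of interaction a visual tool without a scripting language cannot express, but which only takes a few lines of code. The original branching was implemented in VB.NET, which I cannot show here; the following is a minimal, hypothetical sketch in Python (node names and prompts are invented for illustration) of the data-driven structure such branching boils down to:

```python
# Illustrative sketch only: the MGUIDE dialogue branching was written in
# VB.NET and is not reproduced here. This hypothetical Python version shows
# the underlying idea: each node holds a prompt plus a mapping from user
# choices to next nodes, so branches of arbitrary depth need no new tooling.
DIALOGUE = {
    "start": {
        "prompt": "Welcome to the castle. What would you like to see?",
        "choices": {"towers": "towers", "chapel": "chapel"},
    },
    "towers": {
        "prompt": "The towers date from the medieval period.",
        "choices": {"back": "start"},
    },
    "chapel": {
        "prompt": "The chapel contains restored frescoes.",
        "choices": {"back": "start"},
    },
}

def next_node(current: str, choice: str) -> str:
    """Follow a branch; an unrecognised choice keeps the user at the current node."""
    return DIALOGUE[current]["choices"].get(choice, current)
```

A visual editor exposes only its predefined interaction types, whereas a table like this grows to any depth by adding rows, which is why a scripting language matters for anything beyond low-fidelity prototypes.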