
Friday, 10 September 2010

Our Paper on Mobile Product Review Systems at Mobile HCI 2010

In his PhD, Felix von Reischach investigated mobile product review systems. The paper [1] he presented at MobileHCI 2010 in Lisbon compares different modalities for product reviews and recommendations. In particular, we looked at the following modalities: discrete scale (stars), text, and video. In a study at the SAP retail lab we compared how easy it was for participants to create reviews in each of the modalities and how much they liked creating them. Additionally, we compared which modalities people like best in a buying situation and which type of review they trust. Interestingly, a star rating scheme was liked best - for both input and output.
Our general recommendation is to allow users to rate products on a scale (e.g. using stars) in different, potentially user-defined categories. For a more detailed discussion see the paper [1].
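
To make this concrete, here is a minimal sketch of how such a review could be represented (my own illustration; the data model and all names are hypothetical, not from the paper):

# Minimal sketch of a product review with star ratings in
# user-defined categories (hypothetical data model, not from [1]).
from dataclasses import dataclass, field

MAX_STARS = 5

@dataclass
class Review:
    product_id: str  # e.g. the EAN scanned from the product's barcode
    ratings: dict = field(default_factory=dict)  # category -> stars

    def rate(self, category: str, stars: int) -> None:
        """Record a star rating for a (possibly user-defined) category."""
        if not 1 <= stars <= MAX_STARS:
            raise ValueError(f"stars must be between 1 and {MAX_STARS}")
        self.ratings[category] = stars

# Usage: a rating in a default category plus two user-defined ones.
review = Review(product_id="4006381333931")
review.rate("overall", 4)
review.rate("battery life", 2)   # user-defined category
review.rate("ease of use", 5)    # user-defined category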

The evening event was at Palácio da Pena, Sintra - a castle close to Lisbon. The view and the food were magnificent - it felt like a real treat after the one-hour walk up the steep hill.

[1] von Reischach, F., Dubach, E., Michahelles, F., and Schmidt, A. 2010. An evaluation of product review modalities for mobile phones. In Proceedings of the 12th International Conference on Human-Computer Interaction with Mobile Devices and Services (Lisbon, Portugal, September 07 - 10, 2010). MobileHCI '10. ACM, New York, NY, 199-208. DOI= http://doi.acm.org/10.1145/1851600.1851635

Abstract:
Research has shown that product reviews on the Internet not only support consumers when shopping, but also lead to increased sales for retailers. Recent approaches successfully use smart phones to directly relate products (e.g. via barcode or RFID) to corresponding reviews, making these available to consumers on the go. However, it is unknown what modality (star ratings/text/video) users consider useful for creating reviews and using reviews on their mobile phone, and how the preferred modalities are different from those on the Web. To shed light on this we conduct two experiments, one of them in a quasi-realistic shopping environment. The results indicate that, in contrast to the known approaches, stars and pre-structured text blocks should be implemented on mobile phones rather than long texts and videos. Users prefer less and rather well-aggregated product information while on the go. This accounts both for entering and, surprisingly, also for using product reviews.

Thursday, 9 September 2010

Tobii Mobile Eye-Tracker

After quite some time of announcements and paper presentations, Tobii showed a beta version of their mobile eye tracker at MobileHCI 2010. It is small and lightweight, but requires an import phase after recording. It seems well suited for classical studies (e.g. you have people do some task and analyse afterwards what they looked at), but of little use for interactive applications (e.g. where the system reacts in real time to the user's gaze). Technically, the glasses record the video and gaze information on a memory card. After the recording, this then needs to be "imported" on a PC. After the import, the gaze is superimposed over the recorded video. Tobii also provides active markers that can be attached to objects and regions to ease post-processing and analysis. The price tag in Europe is just below 20,000 €. SMI has a similar product: IVIEW X HED.

Keynote by Josh Ulm at Mobile HCI 2010

Josh Ulm discussed how branding (and marketing more generally) has evolved and how it is now central to user experience and user interface design. He started out by showing how Nike changed marketing with the "Just Do It" campaign. He suggested that this was a transition from a product focus to a personal usage focus, asking "who you become if you use the product".

Moving towards more recent trends, he argued that the iPod made the interaction with the product the essential part of the branding. With examples such as eBay and Google he showed that interaction, in combination with information presentation, becomes the discriminating factor and the way these brands define themselves. Overall this suggests that user interaction and user experience are central to making a brand stand out.

Standing out is not sufficient, however. The experience has to be ownable. Using Zappos as an example, he showed how such an experience needs to be consistent across all touch points with the user - especially if you are defining your brand by experience. In short, a brand has to have a unique user experience that is associated with that brand only. To achieve this there are three ingredients:
  • Values - "values have to extend into every single touchpoint of the experience"
  • Differentiation - "you need to stand out in the market place". Many companies do not innovate enough; you have to take risks, because you have to stand out a lot to be different, and most companies only innovate a little.
  • Integrity - internal consistency. The brand is about what customers really touch, what actually reaches them. Can a customer recognize that this is your experience? Details matter.
To be successful, an experience needs to be ownable and good. You need both - one is not enough. As a concluding example he showed Vodafone 360 (which very few people in the audience knew). My short assessment is that the brand failed on "Differentiation" - to me (and obviously I would expect Josh Ulm to disagree) it is too similar to other things in this space…

As a further example of strong differentiation he showed Jeff Fong's Metro UI for the new Windows Phone platform. If you have not seen the design, check it out. It is very different from current Android and iPhone UIs. Everyone is really curious how it will do in the market…

Tuesday, 7 September 2010

Opening Keynote of Mobile HCI by Patrick Baudisch

Patrick Baudisch presented the opening keynote. He gave an interesting overview of work on interaction with small-screen devices. It is very apparent that currently the form factor of devices is determined almost exclusively by the I/O components. Throughout the examples it became very explicit that the usefulness of a device for certain tasks is determined by the options for interaction. For example, by increasing the input precision or by reducing the fat finger problem, new tasks (and eventually new applications) will become possible on mobile devices. Patrick showed an example where he asks his students to design a UI for a device as small as a one-Euro coin (e.g. digital jewelry) - I think I will borrow this idea for our next course :-)

He showed many examples of his work - too many to cover in a blog entry, but they are well documented on his web page. It is a page worth checking regularly, as his group is outstandingly creative and productive.

Patrick made an insightful analogy between small screens and the theater stage, arguing that both have a similar problem: they lack the space to show everything you would like to show. The solutions used in the theater can provide good inspiration for designing mobile UIs. He mentioned three typical ones:
  • Partially out of the frame - you show only a small part of something and the user's imagination fills in the rest. In the theater, an example is an ocean liner of which you see only a very small part, while the audience imagines the rest. The Halo visualization technique [1] is one example of this approach on a mobile device (see the sketch after this list).
  • In and out points - people enter and leave the stage at given points, providing the illusion of a comprehensible world around the stage (and, in the UI case, around the screen).
  • Direction of interaction to invisible targets - by talking, pointing, or looking in a certain direction, actors can create the impression of interacting with someone at a certain location who is not really there and whose position is off the stage.
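
To illustrate the first of these, here is a minimal sketch of the core geometric idea behind a Halo-style visualization [1] (my own simplification, not the authors' implementation; screen size and intrusion depth are made-up constants): a circle is centered on the off-screen target, with its radius chosen so that an arc just intrudes into the screen and conveys direction and distance.

# Minimal sketch of the core Halo idea [1]: a circle centered on an
# off-screen target, with the radius chosen so that an arc just
# intrudes into the screen. The arc's curvature hints at the distance.
# Screen size and intrusion depth are illustrative assumptions.
import math

SCREEN_W, SCREEN_H = 320, 480   # display size in pixels
INTRUSION = 20                  # how far the arc reaches into the screen

def halo_circle(target_x, target_y):
    """Return (center, radius) of the halo for an off-screen target."""
    # Distance from the target to the nearest point of the screen rectangle.
    dx = max(0 - target_x, 0, target_x - SCREEN_W)
    dy = max(0 - target_y, 0, target_y - SCREEN_H)
    distance_to_screen = math.hypot(dx, dy)
    # Radius large enough that the circle pokes INTRUSION pixels inside.
    radius = distance_to_screen + INTRUSION
    return (target_x, target_y), radius

# A target 200 px to the right of the screen produces a large, flat
# arc; a nearby target would produce a small, strongly curved one.
print(halo_circle(SCREEN_W + 200, 240))  # ((520, 240), 220.0)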
Technically, I found the video he showed on using a depth camera extremely exciting. He speculated that this may solve many of the problems we currently have with using computer vision in difficult light conditions. In his example he showed the image recorded by a depth camera (based on time of flight) "filming" into direct sunlight - and, as one would expect, the sun does not matter. Looking forward to seeing some more results on this…

Looking at the current re-appearance of watch-based computers and phones, the idea of back-side interaction becomes more and more interesting. In one design he showed how one can interact with a watch-sized device via the buckle of the watch band. For more on back-of-device interaction see [2].

[1] Baudisch, P. and Rosenholtz, R. 2003. Halo: a technique for visualizing off-screen objects. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Ft. Lauderdale, Florida, USA, April 05 - 10, 2003). CHI '03. ACM, New York, NY, 481-488. DOI= http://doi.acm.org/10.1145/642611.642695

[2] Baudisch, P. and Chu, G. 2009. Back-of-device interaction allows creating very small touch devices. In Proceedings of the 27th International Conference on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI '09. ACM, New York, NY, 1923-1932. DOI= http://doi.acm.org/10.1145/1518701.1518995

Opening of MobileHCI 2010

Over 300 participants from 5 continents came to Lisbon for MobileHCI 2010. This year's conference had a very high number of submissions and a very low acceptance rate (23% for full papers, below 20% for short papers). Looking back over the last 10 years, it is amazing how this community has grown!

The statistics also show that the conference has a strong European focus (in contrast to CHI and UIST, which are US-dominated). Over the next years we want to make the MobileHCI conference more international and attract more submissions and participation from around the world, in particular from the US. The plan is to hold the 2012 conference in the US. Next year I will help (as paper co-chair) to run MobileHCI 2011 in Stockholm, Sweden. If you have ideas on how to make the conference more attractive to people in America and Asia, please let me know!

The deadline for submitting to MobileHCI 2011 is January 28th, 2011.

SiMPE 2010, Keynote: Trends and Challenges in Mobile Interaction

I was invited to give a keynote talk at the 5th Workshop on Speech in Mobile and Pervasive Environments, held as a part of ACM MobileHCI 2010 in Lisbon, Portugal. Over the last years we have looked at speech as an additional modality in the automotive user interface domain; beyond this, my experience with speech interfaces is limited.

My talk, titled "Trends and Challenges in Mobile Interaction", looked at different issues in mobile interaction. In some parts I reflected on modalities and opportunities for speech interaction.

When characterizing mobile interaction I pointed out the following points:
  • Interacting while on the move
  • Interaction is one of the user's tasks (besides others - e.g. walking, standing in a crowd)
  • The environment in which the interaction takes place changes (e.g. on the train, with varying light and noise conditions)
  • Interruptions happen frequently (e.g. boarding the bus, crossing the road)
  • Application usage is short (typically seconds to minutes)
My small set of key issues and recommendations for mobile UIs is:
  • Simple and Understandable
  • Perceptive and Context-Aware
  • Unobtrusive, Embedded and Integrated
  • Low Cognitive Load and Peripheral Usage
  • Users want to be in Control (especially on the move)
The presentations and discussion at the workshop were very interesting and I got a number of ideas for multimodal user interfaces - including speech.

One issue that made me think more was the question of natural language speech vs. specific speech commands. A colleague pointed me to Speech Graffiti [1] / the Universal Speech Interface at CMU. I wonder if it would make sense to invent a human-computer interaction language (with a simple grammar and vocabulary) that we could teach in a course over several weeks (a similar effort to learning touch typing on a QWERTY keyboard) or as a foreign language at school, to gain a new, effective means of interaction. Could this make us more effective in interacting with information? Or should we try harder to get natural language interaction working? Looking at the way (experienced) people use Google, we can see that people adapt very successfully - probably faster than systems improve…
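
As a toy illustration of how small the grammar and vocabulary of such a language could be (my own example; Speech Graffiti's actual phrase syntax differs), consider a parser that only understands "slot is value" phrases over a fixed set of slots:

# Toy illustration of a constrained interaction language in the
# spirit of Speech Graffiti [1]: a tiny vocabulary of slots and a
# single "slot is value" phrase pattern. Slots and values are made up.
import re

SLOTS = {"movie", "theater", "time"}   # the entire slot vocabulary
PHRASE = re.compile(r"(\w+)\s+is\s+(.+?)(?:,|$)")

def parse(utterance: str) -> dict:
    """Parse 'slot is value' phrases; unknown slots are simply ignored."""
    result = {}
    for slot, value in PHRASE.findall(utterance.lower()):
        if slot in SLOTS:
            result[slot] = value.strip()
    return result

print(parse("theater is Odeon, time is 8 pm"))
# {'theater': 'odeon', 'time': '8 pm'}

With a vocabulary this small, the recognizer only has to discriminate a handful of words - which is exactly what makes constrained languages attractive compared to open natural language.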

From some of the talks it appears that "push to talk" is a real issue for users and a cause of many user-related errors in speech systems. Users do not push at the appropriate time, especially when there are other tasks to do, and hence utterances are cut off at the start and end. I would guess that recording the speech continuously and using the "push to talk" event only as an indicator of where to search in the audio stream may be a solution; a sketch of this idea follows below.
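
A minimal sketch of what I have in mind (my own speculation; chunk sizes and window lengths are made-up): audio is written continuously into a ring buffer, and the button press merely marks a position in the stream, so audio from shortly before the press is still available to the recognizer.

# Sketch of the proposed fix: record continuously into a ring buffer
# and use the push-to-talk event only as an index into the stream.
# Chunk sizes and window lengths are illustrative assumptions.
from collections import deque

CHUNK_MS = 100                   # one audio chunk per 100 ms
PRE_ROLL = 10                    # keep 1 s of audio from before the press
ring = deque(maxlen=100)         # holds the last 10 s of audio chunks

def on_audio_chunk(chunk):
    """Called by the audio driver for every recorded chunk."""
    ring.append(chunk)

def on_push_to_talk():
    """On button press, hand the pre-roll plus subsequent audio to the
    recognizer instead of only starting the recording now."""
    utterance = list(ring)[-PRE_ROLL:]   # audio from before the press
    # ...keep appending chunks until the button is released, then
    # search this window for the actual start and end of speech.
    return utterance

# Simulate 3 s of incoming audio, then a (late) button press:
for i in range(30):
    on_audio_chunk(f"chunk{i}")
print(on_push_to_talk()[:3])     # ['chunk20', 'chunk21', 'chunk22']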

[1] Tomko, S. and Rosenfeld, R. 2004. Speech graffiti vs. natural language: assessing the user experience. In Proceedings of HLT-NAACL 2004: Short Papers (Boston, Massachusetts, May 02 - 07, 2004). Human Language Technology Conference. Association for Computational Linguistics, Morristown, NJ, 73-76. http://www.cs.cmu.edu/~usi/papers/HLT04.pdf