Wednesday 10 November 2010

PARC - touching computing history

At PARC I had the chance to talk to people about some of our current projects. Les Nelson has done interesting work on public displays [1]. This work is highly relevant to ideas we pursue in the pdnet project and it was great to get a first person view from the researchers involved.

Being at PARC, the history of computing is all around you! Seeing the original Ethernet cable, tapes from Alan Kay and Lucy Suchman, the Alto computer, one of the original laser printers, and various ubicomp artifacts from Mark Weiser's group really makes you feel that this is a special place for anyone interested in personal computing and ubicomp.

[1] Elizabeth F. Churchill, Les Nelson, and Gary Hsieh. 2006. Cafe life in the digital age: augmenting information flow in a cafe-work-entertainment space. In CHI '06 Extended Abstracts on Human Factors in Computing Systems (CHI '06). ACM, New York, NY, USA, 123-128. DOI=10.1145/1125451.1125481

Thursday 14 October 2010

Do tangible user interfaces make sense? Yes, they are a great design tool.

The question "Do tangible user interfaces make sense?" is one that probably everyone who seriously works in this field has asked themselves once in a while.

Seeing the iPhone and iPod app by the people behind the reactable made me think about this question again! What really is - in the use case of the reactable - the value of the physical over the touch screen? Or is it just sentimental and old school to believe in the physical? Not sure … it probably needs some more thinking and research ;-)

One other point this example underlines is that tangible interaction is a great design tool (I am still in the process of writing a paper about this - but here is the basic idea for discussion). And I strongly believe that this is of great value for user interface design in general. I suggest the following approach:
  1. Analyze your task
  2. Find data elements that can be made tangible
  3. Find operators/manipulators on the data elements that can be made tangible
  4. Create a tangible user interface to realize all the interaction required
  5. Port it to a touch screen or conventional user interface
Steps 1-4 will ensure simplicity, and in step 5 you may lose some of the "ah" and "wow", but it is very likely that you have created a usable and simple interface!
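The core of the approach can be sketched in code: model the data elements (step 2) and the operators on them (step 3) independently of their physical form, so the same interaction model can back a tangible interface first and a touch-screen port later (step 5). All class and operator names here are invented for illustration, not part of an actual toolkit:

```python
# Hypothetical sketch of the design approach above; all names invented.

class DataElement:
    """Something the user manipulates, e.g. a sound channel's volume."""
    def __init__(self, name, value):
        self.name = name
        self.value = value

class Operator:
    """A manipulation of a data element, e.g. a knob turn or a swipe."""
    def __init__(self, name, apply_fn):
        self.name = name
        self.apply = apply_fn

# Steps 1+2: analyse the task and pick the data elements to make tangible.
volume = DataElement("volume", 50)   # percent

# Step 3: an operator that could be a physical knob - or later a slider.
turn_up = Operator("turn up", lambda e, amount: min(100, e.value + amount))

# Steps 4/5: the interaction model is independent of the physical form,
# so the tangible version and the touch-screen port share the same core.
volume.value = turn_up.apply(volume, 30)
print(volume.value)  # 80
```

The point of the sketch is that the simplicity enforced by designing for physical tokens first survives the port to a conventional UI.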

Wednesday 13 October 2010

Will social science change completely?

Seeing the recent post on (Gay Sex vs. Straight Sex) made me wonder if we are approaching a point where our understanding of society will change massively (hopefully for the good) and where we will get much greater insight into who we are. Is this similar to the era of the invention of the microscope? Things become visible, and one no longer needs to guess?

The amount of data collected on websites is huge - and in many cases the data is probably of very high quality, as it matters to the people who contributed it (probably higher than what you get with a random questionnaire). I think this is exciting, and looking at some of our project proposals, going beyond explicit data collection to implicit data collection may make this approach even stronger (adding another x10 to the new microscopes).

Friday 8 October 2010

Competitions in computer science for schools

Spending 3 intensive days at the University of Freiburg as a member of the jury at the finals of the German computer science competition (Bundeswettbewerb Informatik), I learned once more how vast our field is … especially at the theoretical end. The tasks on the first day were related to stream processing algorithms and on the second day to games on graphs. But don't be fooled: theoreticians have a very different understanding of what a good game is ;-)

The 28 people (pupils and high school students who have not yet started studying) at the finals are the "best" from over 1000 participants and had successfully passed two rounds before. Their level of CS knowledge was massively impressive. Many of them would have passed the BSc exams - in math and theoretical computer science - without much further preparation! The event showed that computer science has great potential to attract young people.

Here are links to the German competitions:
  • Informatik Biber (the general CS competition for students from class 5-13; last year some 80,000 pupils took part)
  • Bundeswettbewerb Informatik (the more difficult competition; last year a bit more than 1000 pupils took part)
Around the event there were some interesting demos (to impress the prospective students), including Toyota Robina and an autonomous mini-airship.

Friday 1 October 2010

Two automotive deadlines today!

Today (Friday, October 1st 2010) there are two deadlines related to automotive computing research:
For AutomotiveUI 2010 the program looks really exciting, see the conference web page. John Krumm is giving the keynote, and the program includes 26 papers (if I counted right) in the areas of: Attention and Distraction, Speech and Sound, Exploring Modes of Interaction, Supporting the Driver, and Connected Cars.

With regard to the special issue, I heard that there is a chance to get a few days' extension ;-)

Monday 27 September 2010

Ubicomp 2010

Today the 12th international conference on Ubiquitous Computing (ubicomp2010) started in Copenhagen. The conference is very competitive, showing a wide range of work in the space of computing beyond the desktop. This year 39 of 202 papers and notes were accepted into the main program. In this part of the program there is a focus on work from North America (which seems to go together with conferences becoming ACM conferences).

The opening keynote was by Morten Kyng on "Making dreams come true - or how to avoid a living nightmare". In his talk he outlined his view on palpable computing, which basically describes user-centered development of pervasive systems.

This year's Ubicomp has a large number of demos, and it was fun to engage with these and with the people presenting them. Christian Winkler from our group had an invited demo on "Sense-sation: An Extensible Platform for Integration of Phones into the Web", showing a combined web and mobile phone platform that eases the development of applications that run across several phones. For example, it is very easy to create an application with a map interface where you can mark an area on the map and request that each of the devices currently in this area takes a photo and sends it back (given that the devices run the platform and that you have the right to use the camera on these phones). There will be a full paper on this published in a few weeks at the Internet of Things Conference in Japan, and you can already check out the web page:
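The "photos from an area" example boils down to a spatial query over registered devices. The following sketch shows only the underlying idea - it is NOT the actual Sense-sation API; the phone registry, coordinates, and function names are all invented for illustration:

```python
# Illustrative sketch (not the Sense-sation API): select all registered
# phones inside a map region and ask each of them for a photo.

phones = {
    "phone-a": {"lat": 55.68, "lon": 12.59},  # made-up positions
    "phone-b": {"lat": 55.70, "lon": 12.55},
    "phone-c": {"lat": 48.14, "lon": 11.58},
}

def in_area(pos, south, west, north, east):
    """Is a position inside the bounding box marked on the map?"""
    return south <= pos["lat"] <= north and west <= pos["lon"] <= east

def request_photos(area):
    # In a real deployment this would send a request over the web
    # platform to each selected phone, subject to camera permissions.
    return [pid for pid, pos in phones.items() if in_area(pos, *area)]

# Mark an area around Copenhagen on the map:
print(request_photos((55.6, 12.4, 55.8, 12.7)))  # ['phone-a', 'phone-b']
```

The platform's actual job is everything the comment hand-waves over: device registration, permissions, and delivering the request and the resulting photos across the web and the phones.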

As Ubicomp is not held at a hotel (which I like), there is also no conference hotel with a default bar. Hence the organizers named a Ubicomp 2010 bar: Nyhavn 17. I think this is a good idea!

Sunday 26 September 2010

Ubicomp 2010 Workshop: Ubiquitous Computing for Sustainable Energy (UCSE2010)

Together with Adrian I organized a workshop at Ubicomp2010 in Copenhagen on Ubiquitous Computing for Sustainable Energy. My motivation for this were the questions (1) whether ubicomp can help to make energy provision more sustainable and (2) what the central areas are where ubicomp technologies can help. Over the last years we have seen a lot of examples of motivational technologies - which I am not convinced of. For me the example of standby power is symptomatic. There was a lot of discussion on how to reduce standby consumption by motivating people to actively do it and providing more awareness about energy consumption. This led to a number of academically interesting investigations and prototypes making people more aware of their consumption (e.g. the power aware cord) - however, to me they do not make a real difference yet. A "simple law" (as we have recently seen in Europe, Commission Regulation (EC) No 1275/2008 following Directive 2005/32/EC) saying that you do not get the CE certification for your device if it exceeds a certain power in standby did the job - at least in Europe. Within a few months all TVs that I saw being advertised were below 1 W standby consumption.

If you are interested in the topic, please have a look at the workshop web page. The online proceedings are available there, as well as some results of the discussion. During the workshop we got some feedback on Facebook; a colleague stated: "if we didn't have ubiquitous computing, our energy situation would be more sustainable ... every time, for instance, a customer upgrades their mobile - iphone 5, anyone, the energy waste is huge". I think that is a really important and valid comment, and I made the following reply: "it is more complicated than that, e.g. how does this change if you use public transport instead of your Hummer (=personal lorry) because of your iPhone 5 ;-) or as you do your email on the iPhone and hence do not have a PC at home anymore ... to be more serious, one of the questions we posed was whether sustainability is a CS topic and in what sense (or if this is rather a political question)". Adrian added a further response: "consumerism clearly has a lot to answer for. If we didn't have conference travel, or didn't submit the papers in the first place? ... :-) I'm sure you know: Elaine M. Huang, Khai N. Truong's CHI 2008 paper: Breaking the Disposable Technology Paradigm..." [1]. We continued this discussion over dinner, and I think the ultimate answer is to move towards a lifestyle of reduced consumption - but I expect this would crash our current economic system…

Coming out of the restaurant we saw an impressive fireworks display, and it seemed people (including me) liked it - we did not really think about wasting resources and polluting the environment for a short display…

[1] Huang, E. M. and Truong, K. N. 2008. Breaking the disposable technology paradigm: opportunities for sustainable interaction design for mobile phones. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 - 10, 2008). CHI '08. ACM, New York, NY, 323-332. DOI=

Wednesday 22 September 2010

PD-NET web page and facebook page

For our European FET Open project PD-NET we have a web page online. There we describe the project and the partners, and the first publication (a paper we will present at ACM Multimedia 2010) is already listed. If you work on public displays or are interested in this topic (e.g. as a developer, service provider, device manufacturer, content designer, or user), please feel free to contact us. Perhaps there is a chance for collaboration.

The project objectives are:
  • To create enabling technologies for large-scale pervasive display networks through the design, development and evaluation of a robust, scalable, distributed and open platform for interconnecting displays and their sensors.
  • To establish Europe as the international centre for work on pervasive display networks.
  • To address key scientific challenges that may inhibit the widespread adoption of pervasive display network technology: tensions between privacy and personalization, situated displays, business and legislative requirements, and user interaction.

To share resources and ideas with a community of researchers interested in the topic we have set up a facebook page. There is one album where we hope to collect a larger number of photos of public displays (traditional and digital) from around the world to document the variety of public displays and people's interaction with them. If you have interesting photos of public displays, please share them on the page. We also encourage researchers publishing in this research field to share their publications (or links to them) with the community (and it is obvious that everyone will cite you ;-)

Monday 20 September 2010

Interviewing with Rikke Friis Dam and Mads Søgaard

Rikke Friis Dam and Mads Søgaard are currently working on a re-launch of their website. The site has over the last years evolved into a useful resource for researchers and practitioners in human-computer interaction and interaction design. There is a very comprehensive calendar that includes most relevant events in HCI.

With their current work, Rikke and Mads pursue a mission to create a new and free resource for teaching and learning interaction design and HCI. In a first step they work with researchers (like myself) around the world who are experts on a certain topic (in my case context-awareness and implicit interaction) to create new teaching materials. This includes a chapter (about 3000 words) that has tutorial character, and interviews in which specific topics are discussed in more detail.

It was great fun to work with them and I look forward to seeing the new material online.

Thursday 16 September 2010

CFP - new conference series: Augmented Human

From the CfP:
"The second Augmented Human (AH) International Conference will be held
in Tokyo Water Front on March 12th, 13th and 14th 2011
Full information on:

The AH international conference focuses on scientific contributions
towards augmenting humans capabilities through technology for
increased well-being and enjoyable human experience. The topics of interest include, but are not limited to: Augmented and Mixed
Reality, Internet of Things, Augmented Sport, Sensors and
Hardware, Wearable Computing, Augmented Health, Augmented
Well-being, Smart artifacts & Smart Textiles, Augmented Tourism
and Games, Ubiquitous Computing, Bionics and Biomechanics
Training/Rehabilitation Technology, Exoskeletons,
Brain Computer Interface, Augmented Context-Awareness,
Augmented Fashion, Safety, Ethics and Legal Aspects,
Security and Privacy Aspects"

Sounds exciting! The majority of researchers on the PC are from France and Japan.

Important Dates:
December 23rd 2010, paper submission deadline
January 22nd 2011, author notification
March 12th/13th/14th 2011, Conference in Tokyo

Wednesday 15 September 2010

Comprehensive and modern German book on HCI by Bernhard Preim and Raimund Dachselt

After having read an electronic preprint some weeks ago, Bernhard and Raimund showed me the first printed version of their book "Interaktive Systeme 1: Grundlagen, Graphical User Interfaces, Informationsvisualisierung, Mobile Interaktion" (amazon, springer). The new edition gives a comprehensive overview of human-computer interaction with many illustrations and examples. It is perfectly suited for teaching.

Congratulations on finishing it! I am always amazed how people manage to create these books without locking themselves away from the world for a year…

Ed H. Chi visiting our Lab

We had the great pleasure to have Ed H. Chi from PARC for the afternoon and evening in Essen. Ed gave a keynote at Mensch und Computer 2010 and we took the chance to discuss some of our work with him. From eye-gaze based security, to social TV and vibration notifications he got a broad overview of the work we currently do.

Ed, thanks again for the feedback and the many ideas you shared with us!

On the way, at Essen train station, we could show a real-world example that usability is not restricted to computers and that common sense is not enough to design usable environments. If you are between tracks 1/2 and 3/4 and you look towards the main electronic train timetable, you had better not be taller than 10 cm - otherwise the traditional signs will block your view (and so far I have not seen such small people in Essen).

Tuesday 14 September 2010

Keynote: Steve Benford talking on "Designing Trajectories Through Entertainment Experiences"

On Tuesday morning Steve Benford presented the entertainment interfaces keynote. He is interested in how to use computer technology to support performances. Steve works a lot with artist groups, where the university is involved in implementing, running, and studying the experiences. The studies are typically done by means of ethnography. The goal of this research is to uncover the basic mechanisms that make these performances work, and potentially to transfer the findings to human-computer interaction more generally.

I particularly liked the example of "Day of the Figurines". Steve showed the video of the experiences they created and discussed the observations and findings in detail. He related this work to the notion of trajectories [1], [2]. He made the point that historic trajectories are especially well suited to support spectators.

Some years back I worked with Steve in the Equator project, and we even have a joint publication [3] :-) When looking for these references I came across another interesting paper - related to thrill and excitement - which he discussed in the final part of the talk [4].

PS: we had a great party on Monday night, but the attendance at the keynote was still extremely good :-)

[1] Benford, S. and Giannachi, G. 2008. Temporal trajectories in shared interactive narratives. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 - 10, 2008). CHI '08. ACM, New York, NY, 73-82. DOI=

[2] Benford, S., Giannachi, G., Koleva, B., and Rodden, T. 2009. From interaction to trajectories: designing coherent journeys through user experiences. In Proceedings of the 27th international Conference on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI '09. ACM, New York, NY, 709-718. DOI=

[3] Benford, S., Schnädelbach, H., Koleva, B., Anastasi, R., Greenhalgh, C., Rodden, T., Green, J., Ghali, A., Pridmore, T., Gaver, B., Boucher, A., Walker, B., Pennington, S., Schmidt, A., Gellersen, H., and Steed, A. 2005. Expected, sensed, and desired: A framework for designing sensing-based interaction. ACM Trans. Comput.-Hum. Interact. 12, 1 (Mar. 2005), 3-30. DOI=

[4] Schnädelbach, H., Rennick Egglestone, S., Reeves, S., Benford, S., Walker, B., and Wright, M. 2008. Performing thrill: designing telemetry systems and spectator interfaces for amusement rides. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 - 10, 2008). CHI '08. ACM, New York, NY, 1167-1176. DOI=

Monday 13 September 2010

Opening Keynote of Mensch&Computer 2010 by Ed H. Chi

Ed H. Chi from PARC presented the opening keynote for Mensch&Computer 2010. In the motivation of the talk he showed a document on "Applied Information processing psychology" from 1971 - probably very few had seen this before. It makes an argument for an experimental science that is related to augmented cognition. The basic idea is very similar to Vannevar Bush's Memex - to extend the human cognitive power by machines (and especially computer technology). It is apparent that these ideas became the backdrop of the many innovations that happened at PARC in the early days.

Ed stressed that there is still a lot of potential for applying psychological phenomena and models to human-computer interaction research. As an example he used the idea that speech output in a navigation system could use your name in an important situation, making use of the attenuation theory of attention (the cocktail party effect). By hearing your name you are more likely to listen - even if you are in a conversation yourself. The effect may be stronger if the voice is your mother's voice ;-)

The main part of the talk centered on model-driven research in HCI. Using the ScentHighlights [1] example he outlined the process. I very much liked the broad view Ed has on models and the various uses of models he suggested, e.g. generative models that generate ideas, or behavioral models that lead to additional functionality (as an example he used: people are sharing search results in Google, hence sharing should be a basic function in a search tool). Taking the example of Wikipedia he showed how models can be used to predict interaction and growth. I found the question on the growth of knowledge very exciting. I think it is definitely not finite ;-) otherwise research would be a bad career choice. Looking at the Wikipedia example it is easy to imagine that the carrying capacity is a linear function, and hence one could use a predictive function where a logistic growth curve is overlaid with a linear function.
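The growth model hinted at above can be sketched numerically: logistic growth dN/dt = rN(1 - N/K(t)) where the carrying capacity K(t) = k0 + bt itself grows linearly, so the curve keeps rising instead of saturating. All parameter values below are made up for illustration:

```python
# Sketch of logistic growth with a linearly growing carrying capacity.
# Parameter values are invented for illustration only.

def simulate(n0=10.0, r=0.5, k0=100.0, k_slope=2.0, steps=200, dt=0.1):
    """Euler integration of dN/dt = r * N * (1 - N / K(t)),
    with K(t) = k0 + k_slope * t."""
    n, t, history = n0, 0.0, []
    for _ in range(steps):
        k = k0 + k_slope * t
        n += dt * r * n * (1 - n / k)
        t += dt
        history.append(n)
    return history

curve = simulate()
# Growth stays monotone, tracking the moving capacity from below:
assert all(b > a for a, b in zip(curve, curve[1:]))
assert curve[-1] < 100.0 + 2.0 * (200 * 0.1)  # still below K at t = 20
```

With a constant K this would flatten into the familiar S-curve; the linear term keeps the tail growing, which matches the intuition that the stock of knowledge is not finite.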

Ed also discussed Yahoo's social design pattern library. This pattern library is quite interesting; I found the reputation patterns particularly comprehensive. It seems that the library is now mature enough to use for real work and in teaching.

[1] Chi, E. H., Hong, L., Gumbrecht, M., and Card, S. K. 2005. ScentHighlights: highlighting conceptually-related sentences during reading. In Proceedings of the 10th international Conference on intelligent User interfaces (San Diego, California, USA, January 10 - 13, 2005). IUI '05. ACM, New York, NY, 272-274. DOI=

Sunday 12 September 2010

Mensch und Computer 2010 at the University of Duisburg-Essen

The German HCI conference Mensch und Computer 2010 started today. Under a single roof - called interactive culture - three more conferences are co-located: the German UPA track, the German e-learning conference, and a track on entertainment interfaces. With about 500 people, the size of the conference is impressive, and it shows that interactive computing and user experience have become a major field in Germany - in academia as well as in industry. I am proud to have chaired the paper program for Mensch&Computer together with Jürgen Ziegler.

On Sunday we ran a number of workshops: Mobile HCI (by Enrico Rukzio), Methods and Tools in HCI (by Nicole Krämer), Web 2.0 and CSCW (by Tom Gross), and Writing Scientific Papers (by Geraldine Fitzpatrick). I enjoyed attending two of the tutorials, and I have to admit I learned interesting things :-) and got ideas for my own teaching.
The paper program starting on Monday was selective: we had 119 submissions (full and short papers) and the committee chose 41 to be presented at the conference (about a 34% acceptance rate).

A restaurant to remember (in a very positive sense): Dreigiebelhaus.

Friday 10 September 2010

Our Paper on Mobile Product Review Systems at Mobile HCI 2010

Felix von Reischach investigated mobile product review systems in his PhD. The paper [1] he presented at Mobile HCI 2010 in Lisbon compares different modalities for product reviews and recommendations. In particular, we looked at the following modalities: discrete scale (stars), text, and video. In a study at the SAP retail lab we compared how easy it was for participants to create reviews in each of the modalities and how much they liked creating them. Additionally, we compared which modalities are most liked by people in a buying situation and which type of review they trust. Interestingly, a star rating scheme is most liked - for input and output.
Our general recommendation is to allow users to rate products on a scale (e.g. using stars) in different, potentially user-defined categories. For a more detailed discussion see the paper [1].
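The recommendation above can be sketched as a tiny data model: star ratings are collected per category (including user-defined ones) and aggregated for display. The categories and values are invented for the example:

```python
# Minimal sketch of per-category star ratings, as recommended above.
# Category names and ratings are invented for illustration.
from collections import defaultdict

reviews = defaultdict(list)  # category -> list of star ratings (1-5)

def rate(category, stars):
    if not 1 <= stars <= 5:
        raise ValueError("stars must be between 1 and 5")
    reviews[category].append(stars)

def summary():
    """Aggregate ratings into a per-category average for display."""
    return {c: sum(r) / len(r) for c, r in reviews.items()}

rate("battery life", 4)
rate("battery life", 5)
rate("camera", 3)        # a user-defined category works the same way
print(summary())         # {'battery life': 4.5, 'camera': 3.0}
```

The design choice mirrors the study result: input is a single tap on a scale, and output is a compact, well-aggregated summary - exactly the kind of information people preferred on the go.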

The evening event was at Palácio da Pena, Sintra - a castle close to Lisbon. The view and the food were magnificent - it felt like a real treat after the one-hour walk up the steep hill.

[1] von Reischach, F., Dubach, E., Michahelles, F., and Schmidt, A. 2010. An evaluation of product review modalities for mobile phones. In Proceedings of the 12th international Conference on Human Computer interaction with Mobile Devices and Services (Lisbon, Portugal, September 07 - 10, 2010). MobileHCI '10. ACM, New York, NY, 199-208. DOI=

Research has shown that product reviews on the Internet not only support consumers when shopping, but also lead to increased sales for retailers. Recent approaches successfully use smart phones to directly relate products (e.g. via barcode or RFID) to corresponding reviews, making these available to consumers on the go. However, it is unknown what modality (star ratings/text/video) users consider useful for creating reviews and using reviews on their mobile phone, and how the preferred modalities are different from those on the Web. To shed light on this we conduct two experiments, one of them in a quasi-realistic shopping environment. The results indicate that, in contrast to the known approaches, stars and pre-structured text blocks should be implemented on mobile phones rather than long texts and videos. Users prefer less and rather well-aggregated product information while on the go. This accounts both for entering and, surprisingly, also for using product reviews.

Thursday 9 September 2010

Tobii Mobile Eye-Tracker

After quite some time of announcements and paper presentations, Tobii showed a beta version of their mobile eye tracker at Mobile HCI 2010. It is small and lightweight but requires an import phase after recording. It seems well suited for classical studies (e.g. you have people doing some task and you analyse afterwards what they looked at) but of little use for interactive applications (e.g. where the system reacts in real time to the user's gaze). Technically, the glasses record the video and gaze information on a memory card. After the recording, this then needs to be "imported" on the PC. After the import, the gaze is superimposed over the recorded video. Tobii also provides active markers that can be attached to objects and regions to ease post-processing and analysis. The price tag in Europe is just below 20,000 €. SMI has a similar product: the iView X HED.

Keynote by Josh Ulm at Mobile HCI 2010

Josh Ulm discussed how branding (and marketing more generally) has evolved and how it is now central to user experience and user interface design. He started out by showing how Nike changed marketing with the "Just Do It" campaign. He suggested that this was a transition from a product focus to a personal usage focus, asking "who you become if you use the product".

Moving towards more recent trends, he argued that the iPod made the interaction with the product the essential part of the branding. With examples such as eBay and Google he showed that interaction in combination with information presentation becomes the discriminating factor and the way these brands define themselves. Overall this suggests that user interaction and user experience are central to making a brand stand out.

Standing out is not sufficient, however. The experience has to be ownable. Using Zappos as an example, he showed how such an experience needs to be consistent across all touch points with the user - especially if you are defining your brand by experience. In short, a brand has to have a unique user experience that is associated with that brand only. To achieve this there are three ingredients:
  • Values - "values have to extend into every single touchpoint of the experience"
  • Differentiation - "you need to stand out in the market place"; many companies do not innovate enough. You have to take risks because you have to stand out a lot to be different - most companies only innovate a little
  • Integrity - internal consistency; the brand is about what customers really touch, what reaches them. Can a customer recognize that this is your experience? Details matter
To be successful, an experience needs to be ownable and good. You need both - one is not enough. As a concluding example he showed Vodafone 360 (which very few people in the audience knew). My short assessment is that the brand failed on differentiation - to me (and obviously I would expect Josh Ulm does not agree) it is too similar to other things in this space…

As a further example of strong differentiation he showed Jeff Fong's Metro UI for the new Windows mobile phone platform. If you have not seen the design, check it out. It is very different from current Android and iPhone UIs. Everyone is really curious how it will do in the market…

Tuesday 7 September 2010

Opening Keynote of Mobile HCI by Patrick Baudisch

Patrick Baudisch presented the opening keynote. He gave an interesting overview of work on interaction with small-screen devices. It is very apparent that currently the form factor of devices is almost exclusively determined by the I/O components. Throughout the examples it became very explicit that the usefulness of a device for certain tasks is determined by the options for interaction. E.g. by increasing the input precision or by reducing the fat finger problem, new tasks (and eventually new applications) will become possible on mobile devices. Patrick showed an example where he asks his students to design a UI for a device as small as a one-euro coin (e.g. digital jewelry) - I think I will borrow this idea for our next course :-)

He showed many examples of his work - too many to cover in a blog entry - but they are well documented on his web page. It is a page worth checking regularly, as his group is outstandingly creative and productive.

Patrick made an insightful analogy between small screens and the theater stage, arguing that both have a similar problem: they lack the space to show everything you would like to show. The solutions found in the theater can be a good inspiration for designing mobile UIs. He mentioned three typical solutions used in the theater:
  • Partially out of the frame - you show something only in small part, and the user's imagination fills in the rest. In the theater an example is an ocean liner of which you only see a very small part, but people imagine the rest. The halo visualization technique [1] is one example of this approach on a mobile device.
  • In and out points - people enter and leave the stage at given points providing an illusion of a comprehensible world around the stage (and in the UI case around the screen)
  • Direction of interaction to invisible targets - by talking, pointing, or looking in a certain direction, actors can create the impression of interacting with someone at a certain location - someone who is not really there and whose position is off the stage.
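The "partially out of the frame" idea behind Halo [1] can be sketched geometrically: a ring is centred on the off-screen object, with a radius chosen so that just an arc of the ring intrudes into the visible screen - the curvature of the arc then hints at the object's distance. This is only a rough sketch of that geometry; the intrusion depth is a free parameter I picked, not a value from the paper:

```python
# Rough geometric sketch of the Halo idea: a ring around an off-screen
# object whose arc just intrudes into the screen. The intrusion depth
# is an assumed design parameter, not a value from the paper.
import math

def halo_radius(obj_x, obj_y, screen_w, screen_h, intrusion=30):
    """Distance from the off-screen object to the nearest point of the
    screen rectangle, plus a fixed intrusion depth in pixels."""
    # Clamp the object position to the screen rectangle to find the
    # nearest on-screen point.
    nx = min(max(obj_x, 0), screen_w)
    ny = min(max(obj_y, 0), screen_h)
    border_dist = math.hypot(obj_x - nx, obj_y - ny)
    return border_dist + intrusion

# An object 100 px to the right of a 320x480 screen:
print(halo_radius(420, 240, 320, 480))  # 130.0
```

Because the radius grows with the object's distance, a nearly straight arc on screen means "far away" and a tightly curved one means "just off the edge" - the user's imagination fills in the rest of the circle, exactly the theater trick.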
Technically, I found the video he showed on using a depth camera extremely exciting. He speculated that this may solve many of the problems we currently have with using computer vision in difficult light conditions. In his example he showed the image recorded by a depth camera (based on time of flight) "filming" into direct sunlight - and, as one would expect, the sun does not matter. Looking forward to seeing some more results on this…

Looking at the current re-appearance of watch-based computers and phones, the idea of back-side interaction becomes more and more interesting. In one design he showed how one can interact with a watch-size device by interacting on the buckle of the watch band. For more on back-of-device interaction see [2].

[1] Baudisch, P. and Rosenholtz, R. 2003. Halo: a technique for visualizing off-screen objects. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Ft. Lauderdale, Florida, USA, April 05 - 10, 2003). CHI '03. ACM, New York, NY, 481-488. DOI=

[2] Baudisch, P. and Chu, G. 2009. Back-of-device interaction allows creating very small touch devices. In Proceedings of the 27th international Conference on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI '09. ACM, New York, NY, 1923-1932. DOI=

Opening of MobileHCI 2010

Over 300 participants from 5 continents came to Lisbon for MobileHCI 2010. This year's conference had a very high number of submissions and a very low acceptance rate (23% for full papers, below 20% for short papers). Looking back over the last 10 years, it is amazing how this community has grown!

The statistics also show that the conference has a strong European focus (in contrast to CHI and UIST, which are US-dominated). Over the next years we want to make the Mobile HCI conference more international and attract more submissions and participation from around the world, in particular from the US. The plan is to have the 2012 conference in the US. Next year I will help (as paper co-chair) to run Mobile HCI 2011 in Stockholm, Sweden. If you have ideas on how to make the conference more attractive to people in America and Asia, please let me know!

The deadline for submitting to MobileHCI 2011 is January 28th, 2011.

SiMPE 2010, Keynote: Trends and Challenges in Mobile Interaction

I was invited to give a keynote talk at the 5th Workshop on Speech in Mobile and Pervasive Environments (SiMPE), held as part of ACM MobileHCI 2010 in Lisbon, Portugal. Over the last few years we have looked at speech as an additional modality in the automotive user interface domain; beyond this my experience with speech interfaces is limited.

My talk, "Trends and Challenges in Mobile Interaction", looked at different issues in mobile interaction. In some parts I reflected on modalities and opportunities for speech interaction.

When characterizing mobile interaction I highlighted the following points:
  • Interacting while on the move
  • Interaction is one of the user's tasks (besides others - e.g. walking, standing in a crowd)
  • Environment in which the interaction takes place changes (e.g. on the train with varying light and noise conditions)
  • Interruptions happen frequently (e.g. boarding the bus, crossing the road)
  • Application usage is short (typically seconds to minutes)
My small set of key issues and recommendations for mobile UIs is:
  • Simple and Understandable
  • Perceptive and Context-Aware
  • Unobtrusive, Embedded and Integrated
  • Low Cognitive Load and Peripheral Usage
  • Users want to be in Control (especially on the move)
The presentations and discussion at the workshop were very interesting and I got a number of ideas for multimodal user interfaces - including speech.

One issue that made me think more was the question of natural language speech vs. specific speech commands. A colleague pointed me to Speech Graffiti [1] / the Universal Speech Interface at CMU. I wonder if it would make sense to invent a human-computer interaction language (with a simple grammar and vocabulary) that we could teach in a course over several weeks (a similar effort to learning touch typing on a QWERTY keyboard) or as a foreign language at school, to have a new, effective means for interaction. Could this make us more effective in interacting with information? Or should we try harder to get natural language interaction working? Looking at the way (experienced) people use Google we can see that people adapt very successfully - probably faster than systems improve…

From some of the talks it seems that "push to talk" is a real issue for users and a reason for many user-related errors in speech systems. Users do not push at the appropriate time, especially when there are other tasks to do, and hence utterances are cut off at the start and end. I would guess that continuously recording the speech and using the "push to talk" event only as an indicator of where to search in the audio stream may be a solution.
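The continuous-recording idea could be sketched roughly as follows. This is a minimal illustration, not a real audio pipeline: all class and parameter names are my own invention, and a real implementation would feed the buffer from the microphone driver and hand the extracted segment to the recognizer.

```python
import collections

class SpeechRingBuffer:
    """Continuously keeps the most recent audio frames in memory.
    A push-to-talk press then only marks *where* to look in the
    stream, so speech starting slightly before the press (or ending
    after the release) is not cut off."""

    def __init__(self, max_seconds=30.0, frame_seconds=0.02):
        self.frame_seconds = frame_seconds
        maxlen = int(max_seconds / frame_seconds)
        # Each entry is (timestamp, frame); old frames fall out automatically.
        self.frames = collections.deque(maxlen=maxlen)

    def add_frame(self, timestamp, frame):
        """Called continuously by the audio capture loop."""
        self.frames.append((timestamp, frame))

    def extract(self, press_time, release_time, margin=0.5):
        """Return all frames from `margin` seconds before the button
        press to `margin` seconds after the release."""
        start, end = press_time - margin, release_time + margin
        return [frame for (t, frame) in self.frames if start <= t <= end]
```

A usage sketch: if the user presses at t=5.0s and releases at t=6.0s, `extract(5.0, 6.0)` returns a segment padded on both sides, recovering any utterance onset the user clipped by pressing too late.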

[1] Tomko, S. and Rosenfeld, R. 2004. Speech graffiti vs. natural language: assessing the user experience. In Proceedings of HLT-NAACL 2004: Short Papers (Boston, Massachusetts, May 02 - 07, 2004). Human Language Technology Conference. Association for Computational Linguistics, Morristown, NJ, 73-76.

Saturday 28 August 2010

Lab visit in Chengdu, University of Electronic Science and Technology of China

On the final day of the Sino-German Symposium on Wearable Computing in Chengdu, Prof. Dongyi Chen invited us to see his lab. We drove to the new campus of the University of Electronic Science and Technology of China. The drive alone was impressive, given the amount of building work happening in Chengdu, especially in the high-tech area.

In the School of Computer Science and Engineering we visited a computer lab and got to see very interesting student projects from Prof. Dongyi Chen's lab. The demos included a wirelessly controlled vehicle where the control is implemented on a mobile phone, industrial control applications using sensor networks, tabletop user interfaces and augmented reality applications on the table, different applications for wearable displays, and a wrist-worn computer (developed from scratch).

The quality of the students' work is impressive, and so is the university campus (buildings and facilities). It shows a very clear determination to push science and education. We should probably talk to our government about investing more in research and higher education…
I hope this symposium will help us to start more collaborations. As a next step we plan a summer school on Human-Computer Interaction next year in Germany.

Friday 27 August 2010

Public Displays in Chengdu

This is a post to document the transformation of public display systems around the world (hopefully more will follow). In the European pdnet project we investigate and develop a new communication medium based on public displays. The transformation taking place in this domain is extremely quick, and hence I think it is interesting to keep some record of specific displays.

I found it very interesting to see that many simple animations are used in simple displays once they become digital. One example is the use of animation in traffic lights. There are two types of dynamic information: informative (e.g. showing the count-down to the next green light) and decorative (e.g. moving bicycle wheels as shown in the video).

Currently there is a mix of traditional painted/printed displays, illuminated static displays, illuminated displays that change between discrete presentations, and fully digital high-resolution displays. The digital advertising displays in the inner city center of Chengdu are impressive. Looking at the photos, it could be anywhere in the world. It seems that public displays are losing more and more of their local character - very different from 20 years ago, when they were still painted/printed in most places around the world.

Live experience - media consumption is social

Before the dinner I decided to try a Chinese massage and it was astonishingly relaxing. It is one of those reminders that there are many things we need to experience and there is just no other way (at least so far) to gain a similar understanding…

The show after dinner showed me how much a live presentation of an artistic and musical performance transmits - it is so much richer than recorded media. Take as an example the shadow play - I really enjoyed it as a live performance. Compared to 3D animated movies it has little fidelity, but it still works extremely well to engage people in the live presentation. Yet I could not imagine watching it on TV - hence we probably miss something in creating the experience when presenting recorded media. I would expect there is a lot of potential in creating a social situation for digital media consumption that could improve the experience.

Sino-German Symposium on Wearable Computing in Chengdu

The Sino-German Symposium on Wearable Computing in Chengdu provided an interesting opportunity to get together with colleagues in China that work on similar topics.
My talk was entitled "Interaction on the Move - Wearable User Interfaces" and looked at mobile and wearable interaction from a very high-level perspective. As the main objective of the symposium was to initiate collaboration, I also included some slides on the other work we are doing.

Bernt Schiele looked back on his early work in 1998 at MIT with Sandy Pentland and reflected on how wearable computing has evolved. It seems that many of the scenarios originally envisioned are now realized on smart phones, and that an active usage model has prevailed - people take out the device explicitly instead of having something that is always on and in their face. However, looking back at some of the early visions (continuous capture, contextual support), they are still attractive, and the technology may now be there to actually realize them. One example of a system that could now be easily realized, and may have a growing market, is a device similar to the StartleCam [1], providing personal safety services.

Feng Tian, one of the top HCI researchers in China, gave an overview of their current work, which I found very exciting (especially the projects related to sports and education). Hopefully there is a chance for future collaboration.

There is more information on the symposium and on the Sino-German collaboration at

[1] Healey, J. and Picard, R. W. 1998. StartleCam: A Cybernetic Wearable Camera. In Proceedings of the 2nd IEEE international Symposium on Wearable Computers (October 19 - 20, 1998). ISWC. IEEE Computer Society, Washington, DC, 42.

In the talks and conversations I saw a set of technologies, and I would like to remember (and share) some of them:

Monday 23 August 2010

Decorative Displays in Zürich Railway Station

In the railway station in Zürich there is a display that consists of 25,000 light units (= 10x50x50, by my counting). It seems that the color of each unit can be set individually. As there is some space between the lights, one can see the "hidden" layers, which creates a sort of 3D image. So far it seems to be used only as an artistic/decorative display - or I did not get the meaning. There is also still "conventional" art in the railway station…

I noticed an interesting effect of human behavior. I looked at the electronic display for about 2 minutes and nobody took a photo of it. When I started taking photos, 10 other people started taking photos within 30 seconds ;-) Perhaps we should create an application that makes this easier, consisting of three parts: (1) when someone takes a photo, the photo application broadcasts this event; (2) an application running in the background monitors when others take photos and records the location; and (3) a page, folder, or dynamic query shows the photos for this location (perhaps sorted by time difference to the recorded event).
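The three parts could be sketched as follows. This is a toy in-memory version for discussion only - all names are hypothetical, and a real system would use a network broadcast (e.g. Bluetooth or a server) and GPS coordinates rather than a shared object and string labels.

```python
from dataclasses import dataclass

@dataclass
class PhotoEvent:
    user: str
    location: str    # a place identifier; a real app would use GPS coordinates
    timestamp: float

class PhotoEventBus:
    """(1) The camera application broadcasts a PhotoEvent whenever
    a photo is taken."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, event):
        for callback in self.subscribers:
            callback(event)

class PhotoMonitor:
    """(2) A background application that records when and where
    others take photos."""
    def __init__(self, bus):
        self.events = []
        bus.subscribe(self.events.append)

    def photos_near(self, location, reference_time):
        """(3) A dynamic query: photos taken at a location, sorted by
        time difference to a reference event (e.g. my own photo)."""
        nearby = [e for e in self.events if e.location == location]
        return sorted(nearby, key=lambda e: abs(e.timestamp - reference_time))
```

For example, after several broadcast events at the station, `photos_near("zurich-station", my_photo_time)` would list the other photos of the same display, closest in time first.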