Sunday, 27 July 2008

Emmy Noether Meeting in Potsdam

From Friday to Sunday I was in Potsdam for the annual Emmy Noether meeting organized by the DFG (German Research Foundation). The Emmy Noether Program is, to my mind, one of the most attractive funding options for early-career researchers worldwide.

This year I was part of the preparation team for the meeting and co-organized one of the workshops (together with Andreas Butz), in which we discussed experiences with research funding in other countries, ideas for improving the current programs, and how we can benefit from having students rather than seeing them as “teaching load” – especially in computer science.

The meeting is always very interesting, as it brings together people funded by the DFG in the Emmy Noether Program across all disciplines. In the political evening we had a keynote by Professor Hans Weiler looking at the current problems of the German system ("Eliten im Wettbewerb - Die deutschen Hochschulen und die internationale Konkurrenz", roughly: "Elites in competition – German universities and the international competition"). One message I took from the talk is that Germany is very efficient – considering how little money is spent on education and research, the outcome is surprisingly good. (But this is only a positive message if we do not want to play a leading role in the world of science and technology.) It became very clear that the overall system is massively underfunded. The additional funding provided by the German government in the widely publicized call for elite universities (Exzellenzinitiative) is 1900 million euros over 5 years (about 5 €/citizen/year) – impressive? Not really – this is less than the amount projected for the “Yale Tomorrow” campaign, a 5-year fundraising program by a single university in the US. And Stanford University has an even bigger campaign, as Prof. Weiler told us – and there are a few other US universities in that league...

Wednesday, 16 July 2008

GPS monitoring for car insurance

In my talk at ISUVR2008 I referred to an example where an insurance company monitors driving behavior and adjusts the tariff accordingly. Some people asked me for more details and references, so here they are…

My example was based on the pilot announced by the German insurer WGV. They planned to run a pilot with 1,500 people using a GPS-based monitoring device. The box is mounted in the car, compares the current speed with the speed limit, and warns the driver to slow down when over the limit. If the driver exceeds the speed limit more than 12 times per year (basically ignoring the warning), he does not get the reduced rate (see http://www.wgv-online.de/docs/youngandsafe.pdf - in German only). According to the announcement, the pilot will run until 2009...
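Just to make the monitoring logic concrete, here is a minimal sketch of how such a box could decide about the reduced rate. This is my own illustration based on the announcement, not WGV's actual implementation; the 12-violations-per-year threshold comes from the pilot description, while the grace period is an assumption.

```python
# Illustrative sketch of the WGV-style speed-monitoring logic (not the real system).
MAX_VIOLATIONS_PER_YEAR = 12   # threshold mentioned in the pilot announcement

def count_violations(samples, grace_seconds=10):
    """samples: list of (timestamp_s, speed_kmh, limit_kmh) from GPS + map data.
    A violation is counted when the driver stays above the limit beyond the
    warning grace period (the grace period is my assumption)."""
    violations, over_since, counted = 0, None, False
    for t, speed, limit in samples:
        if speed > limit:
            if over_since is None:
                over_since, counted = t, False      # box warns the driver here
            elif not counted and t - over_since > grace_seconds:
                violations += 1                     # warning ignored: one violation per episode
                counted = True
        else:
            over_since = None
    return violations

def gets_reduced_rate(samples_for_one_year):
    return count_violations(samples_for_one_year) <= MAX_VIOLATIONS_PER_YEAR
```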

There are different ideas on how to take GPS-based driving monitoring beyond the lab; e.g. in 2007 Royal & SunAlliance announced a GPS-based eco car insurance and AIG a Teen GPS Program targeted at parents.

Looking at different comments (on news pages and in blogs), it seems that people's opinions are strongly divided...

Sunday, 13 July 2008

Thermo-imaging camera at the border – useful for Context-Awareness?

When we re-entered South Korea I saw a guard looking at all arriving people with an infrared camera. It was very hot outside, so the heads appeared very red in the thermal image. My assumption is that this is used to spot people who have a fever – however, I could not verify this.

Looking at the images created while people moved around, I realized that this may be an interesting technology for many tasks in activity recognition, home health care, and wellness. For several tasks in context-awareness it seems straightforward to get this information from an infrared camera. In the computer vision domain there have been several papers addressing this problem in recent years.

We could think of an interesting project topic related to infrared activity recognition or interaction to be integrated into our new lab… There are probably some fairly cheap thermo-sensing cameras around that could be used in research – for home-brew use you can find hints on the internet, e.g. how to turn a digital camera into an IR cam – pretty similar to what we did with the webcams for our multi-touch table.
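As a rough sketch of how such thermal images could feed into activity recognition or context-awareness (my own illustration, assuming a camera that delivers an 8-bit grayscale frame where brighter pixels are warmer, and OpenCV 4 for the image processing):

```python
import cv2  # OpenCV >= 4

def warm_regions(thermal_frame, warm_threshold=200, min_area=500):
    """Return bounding boxes of warm blobs (e.g. people) in an 8-bit thermal frame.
    warm_threshold and min_area are arbitrary illustration values and would need
    calibration for a real camera."""
    _, mask = cv2.threshold(thermal_frame, warm_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# Usage sketch: count people-sized warm blobs in a snapshot.
frame = cv2.imread("thermal_snapshot.png", cv2.IMREAD_GRAYSCALE)
print("warm regions:", warm_regions(frame))
```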

The photo is from http://en.wikipedia.org/wiki/Thermography

Trip to North Korea

[see the whole set of photos from tour to North Korea]

From Gwangju we took the bus shortly after midnight for a trip to North Korea. The students did a great job in organizing ISUVR and the trip. It was great to again have some time to talk to Yoosoo Oh, who was a visiting researcher in our group in Munich.

When entering North Korea there are many rules, including that you are not allowed to take cameras with tele-lenses over 160mm (so I could only take the 50mm lens) and you must not bring mobile phones or MP3 players with you. Currently, cameras, phones, and MP3 players are visible to the human eye and easy to detect in an X-ray. But it does not take much imagination to see, in a few years, extremely small devices that are close to impossible to spot. I wonder how this will change such security precautions and whether it will still be possible in 10 years to isolate a country from access to information. I doubt it…

The sightseeing was magnificent – see the photos of the tour for yourself. We went on the Kaesong tour (see http://www.ikaesong.com/ - in Korean only). It is hard to tell how much of the real North Korea we actually saw. And the photos only reflect a positive selection of motifs (leaving out soldiers, people in town, ordinary buildings, etc., as it is explicitly forbidden to take photos of those). I was really surprised that, when leaving the country, they check ALL the pictures you took (in my case it took a little longer, as I had taken 350 photos).

The towns and villages are completely different from anything I have seen so far. No cars (besides police, emergency services, army, and tourist buses) – but many people in the street walking or cycling. There were some buses in a yard, but I did not see public transport in operation. It seemed the convoy of 14 tourist buses was an attraction for the local people…

I learned that the first metal movable type is from Korea – about 200 years before Gutenberg. Such a metal type is exhibited in North Korea, with a magnifying glass mounted in the display in front of the letter – pretty hard to take a picture of…

Saturday, 12 July 2008

ISUVR 2008, program day2

Norbert Streitz – Trade-off for creating smartness

Norbert gave an interesting overview of research in the ubicomp domain based on his personal experience – from Xerox PARC to the disappearing computer. He motivated the transition from information design to experience design. Throughout the work we see a trade-off between providing “smart support” to the user and “privacy” (or control over privacy). One of the questions is whether we will re-invent privacy or whether it will become a commodity…

As one of the concrete examples, Norbert introduced the Hello.Wall done in the context of Ambient Agoras [1]. This again brought up the discussion of public vs. private with regard to the patterns that are displayed. (photos of some slides from Norbert's talk)

[1] Prante, T., Stenzel, R., Röcker, C., Streitz, N., and Magerkurth, C. 2004. Ambient agoras: InfoRiver, SIAM, Hello.Wall. In CHI '04 Extended Abstracts on Human Factors in Computing Systems (Vienna, Austria, April 24 - 29, 2004). CHI '04. ACM, New York, NY, 763-764. DOI= http://doi.acm.org/10.1145/985921.985924 (Video Hello.Wall)


Albrecht Schmidt – Magic Beyond the Screen

I gave a talk on “Human Interaction in Ubicomp - Magic beyond the screen”, highlighting work on user interfaces beyond the screen that we have done over the last years. It is motivated by the fact that classical limitations in computer science (e.g. frame rate, processing, storage) are becoming less and less important for many application areas, and that human-computer interaction is becoming the critical part of the system in many of them.

In my talk I suggested using “user illusion” as a design tool for user interfaces beyond the desktop. This involves two steps: 1) describe precisely the user illusion the application will create, and 2) investigate which parameters influence the quality of the created user illusion for the application. (photos of some slides from Albrecht's talk, Slides in PDF)


Jonathan Gratch – Agents with Emotions

His talk focused on the domain of virtual reality with a focus on learning/training applications. One central thing I learned is that the timing of non-verbal cues (e.g. nodding) is crucial for producing engagement when speaking with an agent. This may also be interesting for other forms of computer-created feedback.
He gave a specific example of how assigning blame works. It was really interesting to see that there are solid theories in this domain that can be used concretely to design novel interfaces. He argues that appraisal theory can explain people’s emotional states, and that this could improve context-awareness.

He showed an example of emotional dynamics, and it is amazing how fast emotions happen. One way of explaining this is to look at different dynamics: dynamics in the world, dynamics in the perceived world relationship, and dynamics through action. (photos of some slides from Jonathan's talk)


Daijin Kim – Vision based human robot interaction
Motivated by the vision that after the personal computer we will see the “personal robot”, Daijin investigates natural ways to interact with robots. For vision-based interaction with robots he named a set of difficulties, in particular: people are moving, robots are moving, and illumination and distances are variable. The proposed approach is to generate a pose-, expression-, and illumination-specific active appearance model.
He argues that face detection is a basic requirement for vision-based human-robot interaction. The examples he showed in the demo movie were very robust with regard to movement, rotation, and expression, and worked over widely varying distances. The talk contained further examples of fast face recognition and recognition of simple head gestures. Related to our research, it seems that such algorithms could be really interesting for creating context-aware outdoor advertisement. (photos of some slides from Daijin's talk)


Steven Feiner – AR for prototyping UIs
Steven showed some work on mobile projector and mobile device interaction, where they used augmented reality for prototyping different interaction methods. He introduced Spot-light (position-based interaction), orientation-based interaction, and widget-based interaction for an arm-mounted projector. Using a Synaptics touchpad and projection may also be an option for our car-UI related research. For interaction with a wrist device (e.g. a watch) he introduced string-based interaction, which is a simple but exciting idea: you pull a string out of the device, and the distance as well as the direction become the input parameters [2].
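To illustrate the idea with a toy mapping (my own sketch, not the actual technique from [2]): the pull length could select a discrete item while the pull direction sets a continuous value.

```python
MENU = ["calls", "messages", "music", "settings"]   # hypothetical wrist-device menu

def string_input(pull_length_mm, direction_deg, max_pull_mm=120):
    """Map string length to a menu item and pull direction to a 0..1 value.
    Ranges and mapping are made up for illustration."""
    length = max(0.0, min(pull_length_mm, max_pull_mm))
    item = MENU[round(length / max_pull_mm * (len(MENU) - 1))]
    value = (direction_deg % 360) / 360.0            # e.g. volume or scroll position
    return item, value

print(string_input(80, 90))   # -> ('music', 0.25)
```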

In a further example Steven showed a project that supports field work on plant identification: an image of the real leaf is captured, compared with a database, and matched against a subset of candidates with similar features. Their prototype was done on a tablet, and he showed ideas for improving this with AR; it is very clear that this may also be an interesting application (for the general user) on the mobile phone.

New interfaces, and in particular gestures, are hard to explore if you have no idea what is supported by the system. In his example on visual hints for tangible gestures using AR [3], Steven showed interesting options in this domain. One approach follows a “preview-style” visualization – they call it ghosting. (photos of some slides from Steven's talk)

[2] Blasko, G., Narayanaswami, C., and Feiner, S. 2006. Prototyping retractable string-based interaction techniques for dual-display mobile devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 - 27, 2006). R. Grinter, T. Rodden, P. Aoki, E. Cutrell, R. Jeffries, and G. Olson, Eds. CHI '06. ACM, New York, NY, 369-372. DOI= http://doi.acm.org/10.1145/1124772.1124827

[3] White, S., Lister, L., and Feiner, S. Visual Hints for Tangible Gestures in Augmented Reality. Proc. ISMAR 2007, IEEE and ACM Int. Symp. on Mixed and Augmented Reality, Nara, Japan, November 13-16, 2007. (YouTube video)

If you are curious about the best papers, please see the photos from the closing session :-)

Finally some random things to remember:

  • Richard W. DeVaul did some work on subliminal user interfaces - working towards the vision of zero-attention UIs [4]
  • Jacqueline Nadel (development psychologist) did studies on emotions between parents and infants using video conferencing
  • V2 - Toward a Universal Remote Console Standard http://myurc.org/whitepaper.php
  • iCat and Gaze [5]

[4] Richard W. DeVaul. The Memory Glasses: Wearable Computing for Just-in-Time Memory Support. PhD Thesis. MIT 2004. http://devaul.net/~rich/DeVaulDissertation.pdf

[5] Poel, M., Breemen, A.v., Nijholt, A., Heylen, D.K., & Meulemans, M. (2007). Gaze behavior, believability, likability and the iCat. Proceedings Sixth Workshop on Social Intelligence Design: CTIT Workshop Proceedings Series (pp. 109–124). http://www.vf.utwente.nl/~anijholt/artikelen/sid2007-1.pdf

Friday, 11 July 2008

Korean Dinner - too many dishes to count

In the evening we had a great Korean dinner. I enjoyed it very much – and I imagine we saw everything people eat in Korea – at some point I lost count of the number of different dishes. The things I tasted were very delicious, but completely different from what I typically eat.

Dongpyo Hong convinced me to try a traditional dish (pork, fish, and kimchi) and it was very different in taste. I was not adventurous enough to try a dish that still moved (even though the movement was marginal – can you spot the difference in the picture?) – but probably I missed something, as Dongpyo Hong enjoyed it.

I took some photos at the conference dinner.

ISUVR 2008, program day1

The first day of the symposium was exciting and we saw a wide range of contributions, from context-awareness to machine vision. In the following are a few random notes on some of the talks…

Thad Starner, new idea on BCI
Thad Starner gave a short history of his experience with wearable computing. He argued that common mobile keyboards (e.g. mini-QWERTY, multi-tap, T9) are fundamentally not suited to real mobile tasks. He showed studies of typing with the Twiddler – the data is impressive. He argues for chording keyboards, and generally he suggests that “typing while walking is easier than reading while walking”. I buy the statement, but I still think that the cognitive load created by the Twiddler keeps it from being generally suitable. He also showed a very practical idea of how errors on mini-keyboards can be reduced automatically using keypress timing [1] – that relates to the last exercise we did in the UIE class. (photos of some slides from Thad's talk)
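As a toy illustration of the intuition behind [1] (not the actual Automatic Whiteout++ algorithm, which uses trained classifiers over timing and key-position features): an accidental press of a neighbouring key tends to follow the intended key within a few tens of milliseconds, so suspiciously fast neighbour presses can be filtered out.

```python
# Tiny, illustrative neighbour map and threshold; the real system learns these.
NEIGHBOURS = {"a": "qwsz", "s": "awedxz", "d": "serfcx"}
ROLL_OFF_THRESHOLD_MS = 40

def filter_roll_off_errors(keypresses):
    """keypresses: list of (timestamp_ms, char). Drop a char if it is a
    neighbour of the previous char and arrived suspiciously fast."""
    cleaned = []
    for t, ch in keypresses:
        if cleaned:
            prev_t, prev_ch = cleaned[-1]
            if ch in NEIGHBOURS.get(prev_ch, "") and t - prev_t < ROLL_OFF_THRESHOLD_MS:
                continue   # likely an unintended roll-off press
        cleaned.append((t, ch))
    return "".join(ch for _, ch in cleaned)

print(filter_roll_off_errors([(0, "d"), (25, "s"), (200, "a")]))  # -> "da"
```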

He suggested a very interesting approach to “speech recognition” using EEG. The basic idea is that people use sign language (either really moving their hands or just imagining moving their hands) and that the signals from the motor cortex are measured using a brain interface. This is so far the most convincing idea for a human-computer brain interface that I have seen… I am really curious to see the results of Thad’s study! He also suggested an interesting idea for sensors – using a similar approach as in hair replacement technology (I have no idea about this so far, but I should probably read up on it).

[1] Clawson, J., Lyons, K., Rudnick, A., Iannucci, R. A., and Starner, T. 2008. Automatic whiteout++: correcting mini-QWERTY typing errors using keypress timing. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 - 10, 2008). CHI '08. ACM, New York, NY, 573-582. DOI= http://doi.acm.org/10.1145/1357054.1357147



Anind Dey – intelligible context
Anind provided an introduction to context-awareness. He characterized context-aware applications as situationally appropriate applications that adapt to context and ultimately increase the value to the user. Throughout the talk he made a number of convincing cases that context has to be intelligible to the users; otherwise problems arise when the system guesses wrong (and it will get it wrong sometimes).

He showed an interesting example of how data collected from a community of drivers (in this case cab drivers) can be used to predict the destination and the route. These examples are very interesting and show great potential for learning and context prediction from community activity. I think sharing information beyond location may enable many new applications.
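A toy sketch of the underlying principle (my own illustration, not the system Anind presented): given past trips recorded as (origin area, hour of day, destination), the most frequent destination for the current origin and time is a reasonable first guess.

```python
from collections import Counter

def predict_destination(history, origin_cell, hour):
    """history: list of (origin_cell, hour, destination) collected from many drivers."""
    matches = [dest for (o, h, dest) in history if o == origin_cell and h == hour]
    return Counter(matches).most_common(1)[0][0] if matches else None

history = [("downtown", 8, "airport"), ("downtown", 8, "airport"),
           ("downtown", 8, "station"), ("harbour", 18, "downtown")]
print(predict_destination(history, "downtown", 8))   # -> 'airport'
```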
In one study they used a windscreen-projected display (probably a HUD – I have to follow up on this). We should find out more about it, as we are looking into such displays ourselves for one of the ongoing master projects. (photos of some slides from Anind's talk)



Vincent Lepetit – object recognition is the key for tracking
Currently most tracking work in computer vision uses physical sensors or visual markers. The vision, however, is very clear: do the tracking based on natural features alone. In his talk he gave an overview of how close we are to this vision. He showed examples of markerless visual tracking based on natural features. One is a book – which really looks like a book with normal content and no markers – that has an animated overlay.
His take-away message was “object recognition is the key for tracking” and it is still difficult. (photos of some slides from Vincent's talk)


Jun Park - bridge the tangibility gap
In his talk he discussed the tangibility gap in design – in different stages of the design and the design evaluation it is important to feel the product. He argues that rapid prototyping using 3D printing is not well suited, especially as it is comparably slow and it is very difficult to render material properties. His alternative approach is augmented foam: a visually non-realistic but tangible foam mock-up combined with augmented reality techniques. Basically, the CAD model is rendered on top of the foam.


The second part of the talk was concerned with e-commerce. The basic idea is that users can overlay a product onto their own environment to experience its size and how well it matches the place. (photos of some slides from Jun's talk)


Paper Session 1 & 2

For the paper sessions see the program and some photos from the slides.
photos of some slides from paper session 1
photos of some slides from paper session 2

Thursday, 10 July 2008

GIST, Gwangju, Korea

Yesterday I arrived in Gwangju for the ISUVR-2008. It is my first time in Korea and it is an amazing place. Together with some of the other invited speakers and PhD students we went for a Korean style dinner (photos from the dinner). The campus (photos from the campus) is large and very new.

This morning we had the opportunity to see several demos from Woontack’s students in the U-VR lab. There is a lot of work on haptics and mobile augmented reality going on. See the pictures of the open lab demo for yourself…

In the afternoon we had some time for culture and sightseeing – the countryside parks are very different from those in Europe. Here are some of the photos of the trip around Gwangju, and see http://www.damyang.go.kr/

In 2005 Yoosoo Oh, a PhD student with Woontack Woo at GIST, was a visiting student in our lab in Munich. We worked together on issues related to context-awareness and published a joint paper discussing the whole design cycle and in particular the evaluation (based on a heuristic approach) of context-aware systems [1].

[1] Yoosoo Oh, Albrecht Schmidt, Woontack Woo: Designing, Developing, and Evaluating Context-Aware Systems. MUE 2007: 1158-1163

Photos - ISUVR2008 - GIST - Korea

Embedded Information - Airport Seoul

When I arrived in Seoul at the airport I saw an interesting instance of embedded information. In Munich we wrote a workshop paper [1] about the concept of embedded information and the key criteria are:
  • Embedding information where and when it is useful
  • Embedding information in a most unobtrusive way
  • Providing information in a way that there is no interaction required

Looking at an active computer display (OK, it was broken) that circled the luggage belt (it is designed to list the names of people who should contact the information desk) and at a fixed display on a suitcase, I was reminded of this paper. With this set-up people become aware of the information – without really making an effort. With active displays becoming more ubiquitous I expect more innovation in this domain. We currently work on some ideas related to situated and embedded displays for advertising – if we find funding we will push further… the ideas are there.

[1] Albrecht Schmidt, Matthias Kranz, Paul Holleis. Embedded Information. UbiComp 2004, Workshop 'Ubiquitous Display Environments', September 2004

Visitors to our Lab

Christopher Lueg (professor at the School of Computing & Information Systems at the University of Tasmania) and Trevor Pering (a senior researcher at Intel Research in Seattle) visited our lab this week. The timing is not perfect, but then I am not the only interesting person in the lab ;-)

Together with Roy Want and others, Trevor published an article in IEEE Pervasive Computing some time ago that is still worthwhile reading: “Disappearing Hardware” [1]. It clearly shows the trend that in the near future it will be feasible to include processing and wireless communication in any manufactured product, and it outlines the resulting challenges. One of those challenges, which we look into in our lab, is how to interact with such systems… Also, in a 2002 paper Christopher raised some very fundamental questions about how far we will get with intelligent devices [2].

[1] Want, R., Borriello, G., Pering, T., and Farkas, K. I. 2002. Disappearing Hardware. IEEE Pervasive Computing 1, 1 (Jan. 2002), 36-47. DOI= http://dx.doi.org/10.1109/MPRV.2002.993143

[2] Lueg, C. 2002. On the Gap between Vision and Feasibility. In Proceedings of the First international Conference on Pervasive Computing (August 26 - 28, 2002). Lecture Notes In Computer Science, vol. 2414. Springer-Verlag, London, 45-57.

Saturday, 5 July 2008

How to prove that ubicomp solutions are valid?

Over the last years there have been many workshops and sessions in the ubicomp community that address the evaluation of systems. At Pervasive 2005 in Munich I co-organized a workshop on application-led research with George Coulouris and others. For me, one of the central outcomes was that we – as ubicomp researchers – need to team up with experts in the application domain when evaluating our technologies and solutions, and that we stay involved in this part of the research. Just handing it over for evaluation in the other domain will not give us the insights we need to move the field forward. There is a workshop report, which appeared in IEEE Pervasive Computing, that discusses the topic in more detail [1].

On Friday I met a very interesting expert in the domain of gerontology. Elisabeth Steinhagen-Thiessen is chief consultant and director of the Protestant Geriatric Centre of Berlin and professor of internal medicine/gerontology at the Charité in Berlin. We talked about opportunities for activity recognition in this domain and discussed potential set-ups for studies.

[1] Richard Sharp, Kasim Rehman. What Makes Good Application-led Research? IEEE Pervasive Computing Magazine. Volume 4, Number 3. July-September 2005.

Thursday, 3 July 2008

Innovative in-car systems, Taking photos while driving

Wolfgang just sent me another picture (taken by a colleague of his) with more information in the head-up display. It shows a speed of 180 km/h and I wonder who took the picture. Usually only the driver can see such a display ;-)

For assistance, information, and entertainment systems in cars (and I assume we could consider taking photos an entertainment task) there are guidelines [1, 2, 3] - an overview presentation in German can be found in [4]. Students in the Pervasive Computing class have to look at them and design a new information/assistance system that is context-aware - perhaps photography in the car could be a theme... I am already curious about the results of the exercise.

[1] The European Statement of Principles (ESoP) on Human Machine Interface in Automotive Systems
[2] AAM Guidelines
[3] JAMA Japanese Guidelines
[4] Andreas Weimper, Harman International Industries, Neue EU Regelungen für Safety und Driver Distraction (New EU Regulations for Safety and Driver Distraction)

(thanks to Wolfgang Spießl for sending the references to me)

Integration of Location into Photos, Tangible Interaction

Recently I came across a device that tracks the GPS position and additionally has a card reader (http://photofinder.atpinc.com/). If you plug in a card with photos, it will integrate location data into the JPEGs, using time as the common reference.
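The underlying matching is straightforward: for each photo, find the GPS fix whose timestamp is closest to the photo's capture time (taken from the EXIF data) and attach its coordinates. Here is a minimal sketch of the principle, not the device's firmware:

```python
import bisect
from datetime import datetime, timedelta

def match_photo_to_fix(photo_time, gps_track, max_gap=timedelta(minutes=5)):
    """gps_track: time-sorted list of (datetime, lat, lon), e.g. from a GPX log.
    Returns the (lat, lon) of the fix closest to photo_time, or None."""
    times = [t for t, _, _ in gps_track]
    i = bisect.bisect_left(times, photo_time)
    candidates = [gps_track[j] for j in (i - 1, i) if 0 <= j < len(gps_track)]
    best = min(candidates, key=lambda fix: abs(fix[0] - photo_time), default=None)
    if best is None or abs(best[0] - photo_time) > max_gap:
        return None
    return best[1], best[2]

# The photo's capture time comes from its EXIF DateTimeOriginal tag; the matched
# coordinates would then be written into the EXIF GPS fields (e.g. with a library
# such as piexif).
track = [(datetime(2008, 7, 13, 10, 0), 35.15, 126.85),
         (datetime(2008, 7, 13, 10, 5), 35.16, 126.86)]
print(match_photo_to_fix(datetime(2008, 7, 13, 10, 2), track))  # -> (35.15, 126.85)
```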

It is a further interesting example of how software moves away from the generic computer/PC (where programs that take a GPS track and combine it with photos are available, e.g. GPSPhotoLinker) into an appliance, and hence the usage complexity can be massively reduced and the usability increased (in principle at least – I have not tried this specific device so far). See the simple analysis:

Tangible Interaction using the appliance:
  • buying the device
  • plugging in a card
  • waiting till it is ready
vs.

GUI Interaction:
  • starting a PC
  • buying/downloading the application
  • installing the application
  • finding the application
  • locating the images in a folder
  • locating the GPS track in a folder
  • waiting till it is ready

… could become one of my future examples of where tangible UIs work ;-)

Wednesday, 2 July 2008

Wolfgang Spießl introduces context-aware car systems

Wolfgang visited us for 3 days and we talked a lot about context-awareness in the automotive domain. Given the sensors included in cars and some recent ideas on context fusion, it seems feasible that in the near future context-aware assistance and information systems will get new functionality. Since finishing my PhD dissertation [1] there has been a move in two directions: context prediction and communities as a source of context. One example of a community-based approach is http://www.iyouit.eu, which evolved out of ContextWatcher / IST-MobiLife.

In his lecture he showed many examples of how pervasive computing already happens in cars today. After the talk we had the chance to see and discuss user interface elements in current cars – in particular the head-up display. Wolfgang gave a demonstration of the CAN bus signals related to interaction with the car that are available for creating context-aware applications. The head-up display (which appears to float just in front of the car) sparked discussions on interesting use cases for these types of displays - beyond navigation and essential driving information.
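As a sketch of what using CAN bus signals as context sources could look like in code (the CAN IDs and scaling below are invented for illustration – real vehicles use proprietary, model-specific encodings; the python-can library is used for bus access):

```python
import can   # pip install python-can

SPEED_ID = 0x1A0   # hypothetical: bytes 0-1 hold speed * 100 in km/h
WIPER_ID = 0x2B0   # hypothetical: byte 0 != 0 means wipers active

def driver_workload_context(bus, max_messages=50):
    """Derive a crude 'high workload' context flag from a few CAN messages."""
    speed_kmh, wipers_on = 0.0, False
    for _ in range(max_messages):
        msg = bus.recv(timeout=1.0)
        if msg is None:
            break
        if msg.arbitration_id == SPEED_ID:
            speed_kmh = int.from_bytes(msg.data[0:2], "big") / 100.0
        elif msg.arbitration_id == WIPER_ID:
            wipers_on = msg.data[0] != 0
    return {"speed_kmh": speed_kmh, "wipers_on": wipers_on,
            "high_workload": speed_kmh > 100 and wipers_on}

bus = can.interface.Bus(channel="can0", bustype="socketcan")
print(driver_workload_context(bus))
```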

During the lecture, questions came up about how feasible/easy it is to do your own developments using the UI elements in the car – basically, how can I run my applications in the car? This is not yet really supported ;-) However, in a previous post [2] I argued that this is probably to come… and I still see this trend... It is an interesting thought how one could provide third parties with access to UI components in the car without giving away control...