Wednesday, 26 January 2011

Meeting the Maker of littleBits due to Canceled Flights

I got up this morning before 4am… that is really early. Arriving at the airport I found out that the 5:40 flight was canceled. I should have checked the flight status before getting out of bed (or TAP/Lufthansa could have sent me an SMS). It was not a complete surprise, as yesterday someone mentioned that due to the heavy rain some flights did not arrive in Funchal.

I was rebooked on a later flight and met by chance Ayah Bdeir, who was at TEI and on the same flight connecting via FRA. At the demo session I had seen her product, littleBits (http://littlebits.cc), but did not get to talk to her.

littleBits is a set of different hardware modules that can be connected by small magnets. On the website it is described as
"littleBits is an opensource library of discrete electronic components pre-assembled in tiny circuit boards. Just as Legos allow you to create complex structures with very little engineering knowledge, littleBits are simple, intuitive, space-sensitive blocks that make prototyping with sophisticated electronics a matter of snapping small magnets together."
The approach is very interesting and shows one trend that is currently ongoing in the TEI community: enabling a wider set of people to create functional interactive products by lowering the barrier and effort for prototyping systems that consist of physical components, electronics, and software. The target audience ranges from designers to artists, but also includes the education domain. A point that makes littleBits interesting is that they will be open source. They are not yet available, but one can subscribe for further information on the website (http://littlebits.cc), where there are also some cool videos...

At TEI it was great to see so many toolkits and platforms appearing. When we did our first open source hardware/software platform in 2002 with Smart-Its, we followed a DIY approach for building the hardware (with a step-by-step photo tutorial for building the modules) and offered a lot of options for programming it. Additionally, each module had wireless included. At the time we considered this a good idea - we thought it would empower the developers. Looking back, this limited the number of people that were able to use it - but it allowed the ones who made the effort to do very forward-looking prototypes. We summarized some of the experience in [1]. The current research and products seen at TEI focus much more strongly on making it easy for a large set of people to use them - it will be exciting to see if and how this speeds up the creation of new products over the next years.

[1] Gellersen, H., Kortuem, G., Schmidt, A., and Beigl, M. 2004. Physical Prototyping with Smart-Its. IEEE Pervasive Computing 3, 3 (Jul. 2004), 74-82. DOI= http://dx.doi.org/10.1109/MPRV.2004.1321032

Tuesday, 25 January 2011

TEI2011 Keynote: Bounce Back by Gillian Crampton Smith

Gillian Crampton Smith is one of the people who massively influenced the field of interaction design from the very beginning. She is currently working in Venice: www.interaction-venice.com. Nearly 20 years back in London, Durrell Bishop was one of her students. She pointed out that the central insight of the early work was "We know how to control objects by physical means". This insight has since become a basic motivation for tangible interfaces and embodied interaction. Closely linked to this is the complexity of interaction. To me, one great point about designing tangible user interfaces is that you are forced to come up with a simple solution. Overall the argument was that embodied interaction works because it draws on knowledge we already have. One example is that two physical things cannot be in exactly the same place; another is that things stay where they are if there is no force moving them. There are clear limitations to the interaction with physical objects that give indications of how to use them; she referenced a paper on an exploration of physical manipulation [1].

As one example Gillian showed the marble answering machine concept video from 1992. To me this video is a great example that shows the power of a concept video. Using a simple animation, this concept video enables viewers to understand the idea and interactions very quickly. I like the simplicity and clarity of this video - it would be very useful for teaching. Hopefully it will be made publicly available; see below the bad-quality copy I took with my mobile. A good-quality version is at http://vimeo.com/19930744

One interesting point was a reflection on the shortcomings of the traditional view of interaction being based on input and output. Gillian argued that output should be separated into feedback and results. Input is categorized according to the interaction required, such as simple (e.g. text, minimal movement), medium (e.g. GUI), and maximum (e.g. musical instrument, movement of the whole body). Feedback is related to the senses humans have, and includes modalities such as visual, auditory, tangible, kinesthetic, and proprioceptive. Results are related to feedback but are oriented towards the outcome. Results may be visual (e.g. symbolic, words, icons, films), auditory (e.g. sounds, words, or music), or physical, such as touch (e.g. massage machine) or movement.

Gillian went on to the question: "Why aren't tangible, embedded and embodied interfaces out there in our everyday world?" For me the most important answer she gave is that it is hard to do them. It is very difficult to create sensor-based interaction in a way that users understand and that feels natural to use. Counterexamples where people got it wrong are those many lights that go on when you are already halfway up the stairs, or the automatic doors people wave their hands at in order to open them. As much of the interaction technology in such systems is not visible, people have a hard time figuring out how things work. Based on their (incomplete) experience from a few interactions they will create their mental model… which is often wrong and potentially leads to frustration. A further point she made is that creating physical interactive objects and things is difficult and involves a lot of different skills. In combination with the function, it is difficult to get the aesthetics right. One very good example is how hard it is to get the aesthetics of feel right, as it includes a well-balanced design taking into account touch, weight, balance, movement, and sound. Overall this requires a great passion for detail to get the quality of engineering right; Gillian's example for this was the Audi advert where an engineer opens and closes the car door and listens carefully. I think I saw it some 8 years back in the UK as part of the ad campaign "Vorsprung durch Technik". If anyone has a link to a copy please post it.

When I visited her at Ivrea near Torino in 2002 (or around that time) I was very much impressed by the creativity and inventiveness. In her talk she explained one approach to teaching that I really like. The students get a device (e.g. an alarm clock or answering machine) without a manual, with the task of figuring out how it works. While exploring the functionality they film themselves. Based on this they reflect on the design and then go on to do a completely new design for a device with similar functionality (with the constraint that it cannot have buttons).

I liked a reference to something Bruce Sterling wrote. In short, it basically says research is like crime; you need means, motivation, and opportunity.

Some further examples she showed:
  • A candle-based communication device: Ceramic Liaison. It is a bi-directional 1-bit communication device that is aesthetically pleasing. The state of the candle (real wax with fire) on either side is reflected by lighting up some ceramics on the other side.
  • A full-body-experience interaction device for controlling games: Collawobble. Using two bouncing balls a Pac-Man game can be controlled. One user's bouncing controls X and the other's Y.
Update:
Ellen Do posted two clips that were shown in the keynote and suggested a further one: http://www.youtube.com/watch?v=oS6pwQqSY70

The marble answering machine video is at http://vimeo.com/19930744

[1] Andrew Manches, Claire O'Malley, and Steve Benford. 2009. Physical manipulation: evaluating the potential for tangible designs. In Proceedings of the 3rd International Conference on Tangible and Embedded Interaction (TEI '09). ACM, New York, NY, USA, 77-84. DOI=10.1145/1517664.1517688 http://doi.acm.org/10.1145/1517664.1517688

TEI Studio, making devices with Microsoft Gadgeteer

Nic Villar and James Scott from Microsoft Research in Cambridge, UK, ran a studio on building interactive devices at TEI 2011 in Madeira. Studios are a mixture of tutorials and hands-on workshops, and there were very interesting ones at this year's conference - it was very hard to choose.

Microsoft Gadgeteer is a modular platform for creating devices. It includes a main board, displays, a camera, sensors, interaction elements, motors and servos, a memory board, and USB connectors as well as power supply options. It is programmable in C# using Visual Studio. Development involves connecting up the hardware and writing the software. A "hello world" example is a simple digital camera: you connect up the main board with the camera, a display, and a button, and by writing less than 20 lines of code you implement the functionality of the camera. It follows an asynchronous approach. When the button is pressed, the camera is instructed to take a picture. An event handler is registered that is called when the camera has finished capturing the image; in the handler you take the image and show it on the display. Ten years back we worked on the European project Smart-Its [1] and started with basic electronic building blocks and the idea of open source hardware where people could create their own devices. Looking now at a platform like Gadgeteer, one can see that there has been great progress in Ubicomp research over that time. It is great to see how Nic Villar has been pursuing this idea from the time he did his BSc thesis with Smart-Its in our lab in Lancaster, through Pin-and-Play/Voodoo I/O [2] in his PhD work, to the platform he is currently working on and the vision beyond [3].
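To make the asynchronous flow concrete, here is a tiny simulation in plain Python - it deliberately does not use the real Gadgeteer API (which is C# on the device); all class and method names below are made up purely to illustrate the pattern of button press, capture request, and event handler:

```python
# Simulation of the event-driven "hello world" camera flow described above.
# Camera, Display, and the handler names are illustrative, not Gadgeteer APIs.

class Camera:
    def __init__(self):
        self.picture_captured = []      # list of registered event handlers

    def take_picture(self):
        image = "IMAGE_DATA"            # stands in for the captured bitmap
        for handler in self.picture_captured:
            handler(image)              # fire the "picture captured" event

class Display:
    def __init__(self):
        self.shown = None

    def show(self, image):
        self.shown = image              # "draw" the image on the display

camera = Camera()
display = Display()

# Register the handler that runs once the camera has captured the image.
camera.picture_captured.append(display.show)

def on_button_pressed():
    camera.take_picture()               # instruct the camera; do not block

on_button_pressed()                     # simulate a button press
print(display.shown)                    # -> IMAGE_DATA
```

The point of the pattern is that the button handler only *requests* the picture; showing it happens later, whenever the camera reports completion.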

It was amazing how quickly all groups got these examples working. For me, with programming experience - but no real experience in C# and little practice in programming over the last few years - it was surprising how quickly I got into programming again. To me this was mainly due to the integration with Visual Studio - especially the suggestions made by the IDE were very helpful. James said the design rationale for the Visual Studio integration was "you should never see just a blinking cursor" - and I think this was well executed.

Over the day all groups made a design. I liked the "robot face" that showed minimal emotions… basically, if you come too close you can see it getting unhappy. My mini-project was a camera that uses an ultrasonic range sensor and a larger display. It takes photos when someone comes close and shows the last 8 photos on the display - overwriting the first one when it comes to the end of the screen. Interested in how little code is required to do this? Check out the source code I wrote.
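The "overwrite the first one at the end of the screen" behaviour is essentially a small ring buffer over the display slots. A sketch of just that logic in Python (the slot count of 8 comes from the description above; the names and the photo strings are my own, not the actual Gadgeteer code):

```python
# Keep the last 8 photos on screen, overwriting the oldest slot once full.
NUM_SLOTS = 8

slots = [None] * NUM_SLOTS   # what is currently shown in each screen position
next_slot = 0                # position the next photo will overwrite

def on_photo_taken(photo):
    """Called whenever the range sensor triggers a new photo."""
    global next_slot
    slots[next_slot] = photo                   # draw the new photo into this slot
    next_slot = (next_slot + 1) % NUM_SLOTS    # wrap around at the screen's end

# Simulate 11 photos: after filling all 8 slots, slots 0-2 get overwritten.
for i in range(11):
    on_photo_taken(f"photo-{i}")

print(slots)
# -> ['photo-8', 'photo-9', 'photo-10', 'photo-3', 'photo-4',
#     'photo-5', 'photo-6', 'photo-7']
```

The modulo on the slot index is the whole trick: the display always shows the 8 most recent photos without ever shifting the older ones around.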


Two more studios I would have loved to attend: Amanda ran a studio on creating novel (and potentially bizarre) game controllers, and Daniela offered a studio exploring what happens if bookbinding meets electronics.

[1] Holmquist, L. E., Gellersen, H., Kortuem, G., Schmidt, A., Strohbach, M., Antifakos, S., Michahelles, F., Schiele, B., Beigl, M., and Mazé, R. 2004. Building Intelligent Environments with Smart-Its. IEEE Comput. Graph. Appl. 24, 1 (Jan. 2004), 56-64. DOI= http://dx.doi.org/10.1109/MCG.2004.1255810

[2] Van Laerhoven, K., Villar, N., Schmidt, A., Gellersen, H., Håkansson, M., Holmquist, L. E. 2003. In-Home Networking - Pin&Play: The Surface as Network Medium, IEEE Communications Magazine, vol. 41, no. 4, April 2003.

[3] Steve Hodges and Nicolas Villar, The Hardware Is Not a Given, in IEEE Computer, IEEE, August 2010

Monday, 24 January 2011

TEI growing up, Dinner with Hiroshi Ishii and Don Norman

This is now the 5th TEI conference, and it has about 300 people attending. It is still a very young community, with many students attending, but the "big names" are at the conference too. When Brygg and I started with a small conference in 2007 (in Baton Rouge) and 2008 (in Bonn), we saw a need for tangible and embedded interaction work to find a venue, but we did not expect that the community would grow so rapidly. Looking at the presentations and contributions shown at this year's conference, it is clear that the conference is growing up - without losing its exciting mix of contributions.

It was great to talk to Don Norman and Hiroshi Ishii over dinner and to see them engaging with this young community. They both have made major contributions to this community and inspired my personal research some 10 years back. If you have not done so, I recommend reading some of their early contributions, such as Hiroshi's CHI papers [1,2] and Don's invisible computer book [3,4]. Both have published, and continue to publish, many interesting articles and books which are central to the HCI literature; check out their web pages: Hiroshi Ishii and Don Norman.

[1] Ishii, H. and Ullmer, B., "Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms," Proceedings of Conference on Human Factors in Computing Systems (CHI '97), ACM, Atlanta, March 1997, pp. 234-241.

[2] Fitzmaurice, G., Ishii, H., Buxton, W., "Bricks: Laying the Foundations for Graspable User Interfaces," Proceedings of Conference on Human Factors in Computing Systems (CHI '95), ACM, Denver, May 1995, pp. 442-449.

[3] Donald Norman, The Invisible Computer. 1998, Cambridge MA, MIT Press

[4] Donald Norman, The Design of Everyday Things, 2002, Basic Books (Perseus)

Friday, 21 January 2011

pdnet meeting at the University of Minho in Portugal

Over the last day we got together in Portugal to discuss the progress in the EU FET-Open project pdnet. We are now 9 months into the project and it is great to see the results appearing. Stay tuned for upcoming papers. Besides other research, we have done interesting observational work and gained an understanding of what matters to people when setting up public displays, which informs our system design.

In summer we will have the opportunity to try our system by deploying two applications in Oulu, Finland. With our two application proposals we were selected as finalists for the UbiChallenge. In June we will go with our students to Oulu, implement our system, and study its usage in a public deployment.

Food was amazing…

What alarm clock are you using?

The answer is probably "my phone" - it seems that for many people the phone has become their primary alarm clock. We have discussed this before in a blog post.

Some years back I took part in a design competition at the appliance design conference and suggested an alarm clock that links you to your friends [1]. One of the ideas was to have dynamic wake-up times based on when your friends got up. The paper was accepted in March 2005 - before Twitter was founded and before Facebook was open for general registration. At that time we envisioned this as a stand-alone appliance, as micro-blogging was not yet around.

Time has moved on and many appliance ideas have since become apps on the phone. In the course of his research, Ali is working on ideas for increasing the connectedness between people. One of the case studies is now an alarm clock - called weSleep - that has the basic alarm clock function and additionally offers means to log sleep hours and perceived sleep quality. It also allows posting information related to going to sleep or being woken up to social networking sites such as Facebook.

Interested in trying it out? Check out the web page of weSleep, and if you are interested in taking part in a study please contact Ali (not sure if he is still looking for more volunteers).

[1] Schmidt, A. 2006. Network alarm clock (The 3AD International Design Competition). Personal Ubiquitous Comput. 10, 2-3 (Jan. 2006), 191-192. DOI=http://dx.doi.org/10.1007/s00779-005-0022-y

Tuesday, 18 January 2011

Moved to the University of Stuttgart

No posts for some time... the reason: I have changed jobs, and this was eating up more time than expected.


Since 17 December 2010 I have been with the University of Stuttgart in the south of Germany - much closer to home :-) I have taken a position as professor of Human-Computer Interaction in the Institute for Visualization and Interactive Systems. The position was created in the context of the SimTech cluster of excellence.

My new office address (in a brand new building) is:
Albrecht Schmidt
Pfaffenwaldring 5a
70569 Stuttgart
Germany

Email and web server are still in the making... but my regular acm email address will still be valid.