Showing posts with label tei.

Tuesday, 25 January 2011

TEI2011 Keynote: Bounce Back by Gillian Crampton Smith

Gillian Crampton Smith is one of the people who massively influenced the field of interaction design from its very beginning. She is currently working in Venice: www.interaction-venice.com. Nearly 20 years back in London, Durrell Bishop was one of her students. She pointed out that the central insight of the early work was "We know how to control objects by physical means". This insight has since become a basic motivation for tangible interfaces and embodied interaction. Closely linked to this is the complexity of interaction. To me, one great point of designing tangible user interfaces is that you are forced to come up with a simple solution. Overall, the argument was that embodied interaction works because it draws on knowledge we already have. One example is that two physical things cannot be in exactly the same place; another is that things stay where they are if no force moves them. There are clear limitations to the interaction with physical objects that give indications of how to use them; she referenced a paper on an exploration of physical manipulation [1].

As one example, Gillian showed the marble answering machine concept video from 1992. To me this video is a great example of the power of a concept video. Using a simple animation, it enables viewers to understand the idea and interactions very quickly. I like the simplicity and clarity of this video - it would be very useful for teaching. Hopefully it will be made publicly available; see below the bad-quality copy I took with my mobile. A good-quality version is at http://vimeo.com/19930744

One interesting point was a reflection on the shortcomings of the traditional view of interaction as being based on input and output. Gillian argued that output should be separated into feedback and results. Input is categorized according to the interaction effort required: simple (e.g. text, minimal movement), medium (e.g. GUI), and maximum (e.g. musical instrument, movement of the whole body). Feedback is related to the senses humans have and includes modalities such as visual, auditory, tangible, kinesthetic, and proprioceptive. Results are related to feedback but are oriented towards the outcome. Results may be visual (e.g. symbols, words, icons, films), auditory (e.g. sounds, words, or music), or physical, such as touch (e.g. a massage machine) or movement.

Gillian went on to the question: "Why aren't tangible, embedded and embodied interfaces out there in our everyday world?" For me the most important answer she gave is that it is hard to do them. It is very difficult to create sensor-based interaction in a way that users understand and that feels natural to use. Counter-examples where people got it wrong are the many lights that only go on when you are already halfway up the stairs, or the automatic doors people wave their hands at in order to open them. As much of the interaction technology in such systems is not visible, people have a hard time figuring out how things work. Based on their (incomplete) experience from a few interactions they will create their mental model … which is often wrong and potentially leads to frustration. A further point she made is that creating physical interactive objects is difficult and involves a lot of different skills. In combination with the function, it is difficult to get the aesthetics right. One very good example is how hard it is to get the aesthetics of feel right, as it requires a well-balanced design taking into account touch, weight, balance, movement, and sound. Overall this requires a great passion for detail to get the quality of engineering right; Gillian's example for this was the Audi advert where an engineer opens and closes the car door and listens carefully. I think I saw it some 8 years back in the UK as part of the ad campaign "Vorsprung durch Technik". If anyone has a link to a copy please post it.

When I visited her in 2002 (or around that time) at Ivrea near Torino, I was very much impressed by the creativity and inventiveness. In her talk she explained one approach to teaching I really like. The students get a device (e.g. an alarm clock or answering machine) without a manual, with the task of figuring out how it works. While exploring the functionality they film themselves. Based on this they reflect on the design and then go on to do a completely new design for a device with similar functionality (with the constraint that it cannot have buttons).

I liked a reference to something Bruce Sterling wrote. In short, it basically says research is like crime: you need means, motivation, and opportunity.

Some further examples she showed:
  • A candle-based communication device: Ceramic Liaison. It is a bi-directional 1-bit communication device that is aesthetically pleasing. The state of the candle (real wax with fire) on either side is reflected by lighting up some ceramics on the other side.
  • A full-body interaction device for controlling games: Collawobble. Using two bouncing balls, a Pac-Man game can be controlled; one user's bouncing controls the X axis and the other's the Y axis.
Update:
Ellen Do posted two clips that were shown in the keynote and suggested a further one: http://www.youtube.com/watch?v=oS6pwQqSY70

The marble answering machine video is at http://vimeo.com/19930744

[1] Andrew Manches, Claire O'Malley, and Steve Benford. 2009. Physical manipulation: evaluating the potential for tangible designs. In Proceedings of the 3rd International Conference on Tangible and Embedded Interaction (TEI '09). ACM, New York, NY, USA, 77-84. DOI=10.1145/1517664.1517688 http://doi.acm.org/10.1145/1517664.1517688

TEI Studio, making devices with Microsoft Gadgeteer

Nic Villar and James Scott from Microsoft Research in Cambridge, UK, ran a studio on building interactive devices at TEI 2011 in Madeira. Studios are a mixture of tutorials and hands-on workshops, and there were very interesting ones at this year's conference - it was very hard to choose.

Microsoft Gadgeteer is a modular platform for creating devices. It includes a main board, displays, a camera, sensors, interaction elements, motors and servos, a memory board, and USB connectors as well as power supply options. It is programmable in C# using Visual Studio. Development involves connecting up the hardware and writing the software. A "hello world" example is a simple digital camera: you connect the main board with the camera, a display, and a button, and writing less than 20 lines of code you implement the functionality of the camera. It follows an asynchronous approach. When the button is pressed, the camera is instructed to take a picture. An event handler is registered that is called when the camera has finished capturing the image; in the handler you take the image and show it on the display. 10 years back we worked on the European project Smart-Its [1] and started with basic electronic building blocks and the idea of open source hardware where people could create their own devices. Looking now at a platform like Gadgeteer, one can see that there has been great progress in Ubicomp research over that time. It is great to see how Nic Villar has been pursuing this idea from the time he did his BSc thesis with Smart-Its in our lab in Lancaster, with Pin-and-Play/Voodoo I/O [2] in his PhD work, to the platform he is currently working on and the vision beyond [3].
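The event chain described above can be sketched in a few lines of Gadgeteer C#. This is a sketch from memory, not the exact studio code: the module names (`button`, `camera`, `display`) are the designer-generated defaults, and the precise event signatures may differ between Gadgeteer versions.

```csharp
using Gadgeteer.Modules.GHIElectronics;
using GT = Gadgeteer;

public partial class Program
{
    // Called once at start-up; wires up the asynchronous event chain.
    void ProgramStarted()
    {
        // Button press -> ask the camera for a picture (non-blocking).
        button.ButtonPressed += (sender, state) => camera.TakePicture();

        // Camera finished -> show the captured picture on the display.
        camera.PictureCaptured += (sender, picture) =>
            display.SimpleGraphics.DisplayImage(picture, 0, 0);
    }
}
```

The asynchronous structure is the key point: `TakePicture()` returns immediately, and the `PictureCaptured` event fires once the image is actually ready.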

It was amazing how quickly all groups got this example working. For me, with programming experience - but no real experience in C# and little practice in programming over the last few years - it was surprising how quickly I got into programming again. To me this was mainly due to the integration with Visual Studio - especially the suggestions made by the IDE were very helpful. James said the design rationale for the Visual Studio integration was "you should never see just a blinking cursor" - and I think this was well executed.

Over the day, all groups created a design. I liked the "robot face" that showed minimal emotions… basically, if you come too close you can see it getting unhappy. My mini-project was a camera that uses an ultrasonic range sensor and a larger display. It takes photos when someone comes close and shows the last 8 photos on the display - overwriting the first one when it reaches the end of the screen. Interested in how little code is required to do this? Check out the source code I wrote.


Two more studios I would have loved to attend: Amanda ran a studio on creating novel (and potentially bizarre) game controllers, and Daniela offered a studio exploring what happens when bookbinding meets electronics.

[1] Holmquist, L. E., Gellersen, H., Kortuem, G., Schmidt, A., Strohbach, M., Antifakos, S., Michahelles, F., Schiele, B., Beigl, M., and Mazé, R. 2004. Building Intelligent Environments with Smart-Its. IEEE Comput. Graph. Appl. 24, 1 (Jan. 2004), 56-64. DOI= http://dx.doi.org/10.1109/MCG.2004.1255810

[2] Van Laerhoven, K., Villar, N., Schmidt, A., Gellersen, H., Håkansson, M., Holmquist, L. E. 2003. In-Home Networking - Pin&Play: The Surface as Network Medium, IEEE Communications Magazine, vol. 41, no. 4, April 2003.

[3] Steve Hodges and Nicolas Villar, The Hardware Is Not a Given, in IEEE Computer, IEEE, August 2010

Wednesday, 20 February 2008

Paul presented a paper on mobile phone interaction at TEI’08

Paul Holleis presented at TEI’08 the results of the research he did during his internship at Nokia Research [1]. Over the summer Paul spent three months with Jonna Häkkilä’s group in Helsinki, where he worked on two projects: combining touch and key-presses, and wearable controls.

Technically, both projects used capacitive sensing to recognize touch. In his paper “Studying Applications for Touch-Enabled Mobile Phone Keypads” [1] they added touch sensing to common mobile phone buttons so that multiple levels of interaction can be measured, such as approaching, touching, and pressing. The paper additionally discusses new interaction possibilities that arise from this.

[1] Paul Holleis, Jonna Häkkilä, Jussi Huhtala. Studying Applications for Touch-Enabled Mobile Phone Keypads. Proceedings of the 2nd Tangible and Embedded Interaction Conference TEI’08. February 2008.

Monday, 18 February 2008

Keynote by Prof. Ishii at TEI’08

In the evening, Prof. Hiroshi Ishii from the MIT Media Lab presented a fascinating keynote at TEI’08. He gave an exciting overview of his work on tangible user interfaces, starting from tangible bits [1]. Right after the demos, it was impressive to see how much impact he has had on this area of research. A paper accompanying the keynote is in the proceedings and will soon be available in the ACM DL.

One central piece of advice on research was to work on visions rather than on applications. He argues visions may last 100 years, while applications are likely to be gone after 10. However, he made an interesting connection between the two: you need applications to convey and communicate the visions, but you need the vision to create the applications. He had a great slide (which indicates that we will go to heaven) to motivate us to do something to be remembered for in 200 years – not sure if this is my plan.

He criticized interdisciplinary research as we do it at the moment. In his view, the most efficient way to do interdisciplinary research is to make a single person knowledgeable in several fields. This raises issues in education, and in the discussion afterwards the question arose whether this is feasible beyond MIT or not.

[1] Ishii, H. and Ullmer, B. 1997. Tangible bits: towards seamless interfaces between people, bits and atoms. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Atlanta, Georgia, United States, March 22 - 27, 1997). S. Pemberton, Ed. CHI '97. ACM, New York, NY, 234-241. DOI= http://doi.acm.org/10.1145/258549.258715

Talks, Demos and Poster at TEI’08

The first day of the conference went well – thanks to many helping hands. The time we had for the demos seemed really long in the program but was too short to engage with every exhibit (for next year we should make sure to allocate even more time).

People made real last-minute efforts to make their demos work. We even went to Conrad Electronics to get some parts (which had burned out earlier while setting up the demos). Demos are, in my eyes, an extremely efficient means of communicating between scientists and sharing ideas.

Visit to the Arithmeum in Bonn

For people who arrived already on Sunday, the day before the conference, we organised some museum visits: Arithmeum, Haus der Geschichte, Deutsches Museum, and the Art Gallery. I only had time to see the Arithmeum (http://www.arithmeum.uni-bonn.de/), which was pretty impressive. Hiroshi Ishii (the keynote speaker of the conference) and Brygg Ullmer (last year's conference co-chair) joined us, too.

It was unexpected how close the displayed artefacts are to our current research on tangible interaction. We had a very good guided tour by Nina Mertens, who gave us an interesting overview from counting tokens to calculating machines. Some of the exhibits we could even try out ourselves.

I found the aspect of aesthetics in some of the calculation aids and machines quite fascinating. Especially the fact that some were so precious that they were not really used for calculating but rather for showing off is an amazing concept. Similarly interesting was one artefact that was mainly built as proof that calculation can be automated.

Sunday, 17 February 2008

TEI’08 On-site Preparation

One of the fun parts of organizing a conference is working with a team of student volunteers. It is always amazing how quickly many hands can work! Today we met with some of the student volunteers to pack the bags and set up the poster and exhibition space. (Unexpectedly) we are well on time.


Friday, 15 February 2008

Press-releases for TEI’08 - explaining the idea

We have two press releases announcing TEI’08 – the second international conference on tangible and embedded interaction (in German only).

The first one is a general announcement with the invitation to the press conference: Internationale Konferenz zu neuen Möglichkeiten der Mensch-Maschine-Interaktion (international conference on new possibilities for human-machine interaction)

The second one explains – in non-scientific terms – the idea of tangible and embedded interaction: Der Wetterfrosch im Regenschirm (the weather frog in the umbrella)

USB sticks, bags, and T-shirts have arrived!

The time before conferences is always exciting – will things arrive in time or not…

This time we are lucky: T-shirts for the SVs, USB sticks for the e-proceedings, and bags for the participants are here :-)

Friday, 14 December 2007

TEI’08 online registration is open now!

The online registration for the 2nd international conference on tangible and embedded interaction is now open. The early registration deadline is January 8th, 2008. The list of accepted papers and the travel page are also online.

We have a really cool cover – it is not final, but I could not resist giving a preview (see above). Bart Hengeveld did a really good job! I am looking forward to holding the proceedings in my hands.

Thursday, 29 November 2007

Keynote speaker at TEI’08: Prof. Hiroshi Ishii

Prof. Hiroshi Ishii from the MIT Media Laboratory, kindly accepted our invitation to be the keynote speaker of TEI’08 in Bonn. We are absolutely delighted that he will come to the conference. Looking back at last year's proceedings of TEI, and seeing the references in the papers, it is obvious how much he has inspired and shaped this research field.

I recently learned that Prof. Ishii lived and worked in Bonn in 1987–1988 at GMD (which later became part of Fraunhofer). He was then a post-doc and worked on topics related to CSCW.

There are so many papers from the Tangible Media Group one really has to read. If you have little time today, watch this one: Topobo.

Friday, 16 February 2007

Affectionate Computing

Thecla Schiphorst introduced us in her talk “PillowTalk: Can We Afford Intimacy?” to the concept of affectionate computing.
The central question is how we can create intimacy in communication and interaction with and through technology. The prototype showed networked soft objects that include sensors recognizing tactile qualities and gesture interaction. There are more details in her paper published at TEI’07.

Thursday, 15 February 2007

Nicolas Villar & Wolfgang Spießl present at TEI’07


Wolfgang (one of the students I supervise at LMU Munich) spent an internship last year at Lancaster University. Together with Nicolas (one of my former colleagues) he built a system that integrates VoodooIO with Macromedia Flash.

The paper is available at http://ubicomp.lancs.ac.uk/~villar/publications/pdf/voodooflash_tei.pdf

Keynote at TEI’07 by Tom Rodden

Tom presented (in socks) a very inspiring keynote at TEI'07. He questioned the notion of seamless integration of technology, based on several examples from the Equator project (www.equator.ac.uk).

A central lesson from his talk, for me, is to look more closely at how to design interactive systems so that people can creatively exploit the technical weaknesses of a system. We will always have to deal with sensor systems, context recognition, and learning algorithms that are not 100% perfect. I find it interesting to see this as a resource for design rather than a problem. The experience Tom reported from CYSMN (http://www.equator.ac.uk/index.php/articles/618) shows nicely how people make use of GPS inaccuracies in a game.

A further point to keep in mind is that when triggering events based on context you may get boundary effects that can break the user experience. Tom gave an example of children finding virtual animals based on location. The effect was that they stopped when they saw an animal on their device – and this was at the boundary of the trigger area. This led to cases where the animal appeared and disappeared on the device, and the children were puzzled by this effect. Hence one should really be careful about where to place triggers – and I would expect that this applies to context-awareness in general, not just to location-aware applications.
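One standard way to avoid such boundary flicker – my own illustration here, not something from the talk – is hysteresis: use a smaller radius for the event to trigger and a larger one for it to un-trigger, so that sensor jitter around a single boundary cannot toggle it back and forth. A minimal sketch in C# (the class and its parameters are hypothetical):

```csharp
// Sketch of a location trigger with hysteresis.
public class LocationTrigger
{
    readonly double enterRadius; // animal appears when closer than this
    readonly double exitRadius;  // ...and disappears only beyond this
    bool active;

    public LocationTrigger(double enterRadius, double exitRadius)
    {
        // exitRadius must be larger than enterRadius so the band
        // between them can absorb sensor noise.
        this.enterRadius = enterRadius;
        this.exitRadius = exitRadius;
    }

    // Feed in the (noisy) distance to the trigger point; returns
    // whether the virtual animal should currently be shown.
    public bool Update(double distance)
    {
        if (!active && distance < enterRadius) active = true;
        else if (active && distance > exitRadius) active = false;
        return active;
    }
}
```

With, say, a 10 m enter radius and a 15 m exit radius, a few metres of GPS jitter around the 10 m line no longer makes the animal appear and disappear.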

Sunday, 4 February 2007

Tangible and Embedded Interaction 2007

Just a little more than a week before Tangible and Embedded Interaction 2007 starts in Baton Rouge (http://www.tei-conf.org/).

The program looks really exciting - a mixture of computer science, HCI, design, and art. I expect the conference will spark a lot of new ideas. Brygg Ullmer did the cover for the proceedings and it looks really cool.


Having seen the program of TEI'07 we have decided to put in a proposal to run the conference next year in Bonn. Hope we get it...