Together with Eva Hornecker and Brygg Ullmer, I was guest editor for a special issue of the International Journal of Arts and Technology (IJART) on the topic of Tangible and Embedded Interaction. The issue contains six papers that look at very different aspects of this topic.
Have a look at the editorial (it is openly accessible) for an overview of the papers in the issue.
Wednesday, 25 February 2009
Tuesday, 24 February 2009
Poster on mobile advertising displays at HotMobile 2009
We put together a poster discussing some of our recent work on mobile displays for HotMobile. While presenting the poster I got a number of interesting ideas and concerns. One idea is to widen the notion of advertising and fuse it with traditional classified ads placed by private people (e.g. advertising a flat or telling the world that you lost your cat). The big question is really how to measure audience exposure and eventually conversion. There are several ideas for how to do this - but it looks more like another master's project on the topic than an overnight hack ;-)
The abstract for the poster:
In recent years many conventional public displays have been replaced by electronic displays, enabling novel forms of advertising and information dissemination. So far this mainly includes stationary displays, e.g. billboards and street furniture, but the first mobile displays on cars are now appearing. Yet current approaches are mostly static, since they consider neither mobility and the context in which the displays are used nor the context of the viewer. In our work we explore how mobile public displays, which rapidly change their own context, can gather and process information about that context. Data about location, time, weather, and people in the vicinity can be used to react accordingly by displaying related content such as information or advertisements.
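To make the reaction loop from the abstract concrete, here is a minimal sketch of context-based content selection; all names, signals, and weights are illustrative assumptions, not our actual system:

# Minimal sketch (assumed names/weights): rank ads by how well they
# match the display's current context and show the best match.

def score(ad, context):
    s = 0
    if context["location"] in ad.get("locations", ()):
        s += 2  # location assumed to be the strongest signal
    if context["weather"] in ad.get("weather", ()):
        s += 1
    if context["hour"] in ad.get("hours", ()):
        s += 1
    return s

def select_content(ads, context):
    return max(ads, key=lambda ad: score(ad, context))

ads = [
    {"name": "umbrella shop", "weather": ["rain"], "locations": ["downtown"]},
    {"name": "ice cream", "weather": ["sun"], "hours": [12, 13, 14]},
]
context = {"location": "downtown", "weather": "rain", "hour": 9}
print(select_content(ads, context)["name"])  # -> umbrella shop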
When spending some time in Mountain View I was surprised how few electronic screens I saw compared to Germany or Asia. But nevertheless they have their own ways of creating attention... see the video below :-)
Some time back in Munich we looked at how interaction modalities can affect the attention of bystanders; see [1] for a short overview of the work.
[1] Paul Holleis, Enrico Rukzio, Friderike Otto, Albrecht Schmidt. Privacy and Curiosity in Mobile Interactions with Public Displays. Poster at CHI 2007 workshop on Mobile Spatial Interaction. San Jose, California, USA. 28 April 2007.
HotMobile 2009: history repeats - shopping assistance on mobile devices
Comparing prices and finding the cheapest item has been a favorite application example over the last 10 years. I first saw the idea of scanning product codes and comparing prices with other shops (online or in the neighborhood) demonstrated in 1999 at the HUC conference. The Pocket BargainFinder [1] was a mobile device with an attached barcode reader with which you could scan books and get an online price comparison. Since then I have seen a number of examples that take this idea forward, e.g. a paper here at HotMobile [2] or the Amazon Mobile App.
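The core mechanism has not changed since 1999: scan a code, look up offers, compare with the shelf price. A minimal sketch; the price data here is a stand-in, not a real service:

# Minimal sketch of barcode price comparison; "offers" stands in for
# an online price source and is purely illustrative.

offers = {
    "9783540482888": [("online shop A", 19.50), ("online shop B", 21.00)],
}

def compare(barcode, shelf_price):
    found = sorted(offers.get(barcode, []), key=lambda o: o[1])
    if found and found[0][1] < shelf_price:
        vendor, price = found[0]
        return "cheaper at %s: %.2f instead of %.2f" % (vendor, price, shelf_price)
    return "shelf price %.2f is competitive" % shelf_price

print(compare("9783540482888", 24.99))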
The idea of making a bargain is certainly very attractive; however, I think many of these applications do not take enough into account how pricing works in the real world. If consumers get more power to compare, it can go two ways: (1) shops will become more uniform in pricing, or (2) shops will again make it harder to compare. Version (2) is more interesting ;-) and can range from not allowing the use of mobile devices in the shop (which we already see in some areas) to more sophisticated pricing options (e.g. prices are lowered when you buy combinations of products or when you shop repeatedly at the same store). I am really curious how this develops - I would guess such systems will penetrate the market over the next 3 years...
[1] Adam B. Brody and Edward J. Gottsman. Pocket BargainFinder: A Handheld Device for Augmented Commerce. First International Symposium on Handheld and Ubiquitous Computing (HUC '99), 27-29 September 1999, Karlsruhe, Germany. http://www.springerlink.com/content/jxtd2ybejypr2kfr/
[2] Linda Deng, Landon Cox. LiveCompare: Grocery Bargain Hunting Through Participatory Sensing. HotMobile 2009.
Bob Iannucci from Nokia presents Keynote at HotMobile 2009
Bob Iannucci from Nokia presented his keynote "Ubiquitous structured data: the cloud as a semantic platform" at HotMobile 2009 in Santa Cruz. He started out with the statement that "mobility is at the beginning" and argued why mobile systems will become more and more important.
He presented several principles for mobile devices/systems:
- simplicity and fitness for purpose are more important than features
- usage concepts must remain constant - as few concepts as possible
- presentations (what we see) and input modalities will evolve
- standards will push the markets
In the keynote he offered an alternative conceptual model: humans are relational. Model everything as relations between people, things, and places. He moved on to the question of what the central shortcomings in current mobile systems/mobile phones are, and suggested it comes down to (1) no common data structure and (2) no common interaction concept.
With regard to interaction concepts he argued that noun-verb style interaction is natural and easy for people to understand (I have heard this before; for a discussion see [1, p. 59]). The basic idea in this model is to choose a noun (e.g. a person, place, or thing) and then decide what to do with it (a verb). From his point of view this interaction concept fits the mobile device world well. He argued that a social graph (basically relationships, as in Facebook etc.) would be well suited for noun-verb style interaction. The nodes in the graph (e.g. people, photos, locations) are nouns, and transformations (actions) between the nodes are the verbs. He suggested that if we represent all the information people now have on their phones as a graph, and have an open standard (and infrastructure) for sharing it, we could create a universal platform for mobile computing (and potentially a huge graph with all the information in the world ;-)
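A toy version of that model - my own sketch of the idea as I understood it, not Nokia's design - might represent nouns as graph nodes and verbs as the operations linking them:

# Sketch of the noun-verb model over a social graph (illustrative only).
# Nodes (nouns) are people, places, and things; verbs link them.

class Noun:
    def __init__(self, kind, name):
        self.kind, self.name = kind, name
        self.relations = []  # outgoing (verb, noun) pairs

    def relate(self, verb, other):
        self.relations.append((verb, other))

alice = Noun("person", "Alice")
photo = Noun("thing", "sunset.jpg")
place = Noun("place", "Santa Cruz")

alice.relate("took", photo)      # verbs are the transformations/actions
photo.relate("taken_at", place)

# Noun-verb interaction: select a noun, then see which verbs apply.
for verb, target in alice.relations:
    print(alice.name, verb, target.name)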
I liked his brief comment on privacy: "many privacy problems can be reduced to economic problems". Basically, people give their information away if there is value in return. And personally I think in most cases people give it away even for minimal value... So far we have no marketplace where people can sell their information. He mentioned the example of personal travel data, which can provide the basis for traffic information (if aggregated). I think this is an interesting direction - how much value would my motion pattern have?
Somewhat related to what you can do on a mobile phone, he shared with us the notion of the "3-watt limit". This seems fundamental: you cannot dissipate more than 3 watts in a device that fits in your hand (typical phone size), as otherwise it would get too hot. So the limitation on processing power is not the battery but the heat generated.
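A quick back-of-envelope calculation (my own numbers, typical for 2009 hardware, not from the talk) suggests the heat ceiling and the battery point in the same direction:

# Back-of-envelope sketch: what running at the 3 W ceiling would mean
# for a typical 2009 phone battery (figures assumed for illustration).
cell_voltage_v = 3.7
capacity_mah = 1200
energy_wh = cell_voltage_v * capacity_mah / 1000.0  # ~4.4 Wh
print("runtime at 3 W: %.1f hours" % (energy_wh / 3.0))  # ~1.5 h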
[1] Jef Raskin. The Humane Interface. Addison-Wesley. 2000.
Wednesday, 18 February 2009
Andreas Riener defends his PhD in Linz
After a stop-over in Stansted/Cambridge at the TEI conference, I was today in Linz, Austria, as external examiner for the PhD defense of Andreas Riener. He did his PhD with Alois Ferscha and worked on implicit interaction in the car. The set and scale of experiments he did is impressive, and he has two central results: (1) using tactile output in the car can really improve car-to-driver communication and reduce reaction time, and (2) by sensing the force pattern a body creates on the seat, driving-related activities can be detected and, to some extent, driver identification can be performed. For more details it makes sense to have a look into the thesis ;-) If you mail Andreas he will probably send you the PDF...
One of the basic assumptions of the work was to use implicit interaction (for input and output) to lower the cognitive load while driving - which is definitely a valid approach. Recently, however, we have also discussed the issues that arise when the cognitive load for drivers is too low (e.g. due to assistive systems in the car such as ACC and lane keeping assistance). There is an interesting phenomenon, the Yerkes-Dodson law (see [1]), that provides the foundation for this. Basically, as the car provides more sophisticated functionality and requires less attention from the driver, risk increases because the driver's basic activation is lower. Here I think looking into multimodality to activate the driver more quickly in situations where they are required to take over responsibility could be interesting - perhaps we will find a student interested in this topic.
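The law describes an inverted-U relation between activation (arousal) and performance. Purely as an illustration - the Gaussian shape and the constants below are my assumptions, not a validated driver model - it can be sketched like this:

import math

# Inverted-U sketch of the Yerkes-Dodson law (assumed shape/constants).
def performance(arousal, optimum=0.5, width=0.2):
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

for a in (0.1, 0.5, 0.9):  # under-activated, optimal, overloaded
    print("arousal %.1f -> performance %.2f" % (a, performance(a)))
# A car that automates the driver down to 0.1 hurts performance much
# like overload at 0.9 - hence the idea of multimodal re-activation.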
[1] http://en.wikipedia.org/wiki/Yerkes-Dodson_law (there is a link to the 1908 publication by Yerkes and Dodson)
Demo day at TEI in Cambridge
What is a simple and cheap way to get from Saarbrücken to Linz? It's not really obvious, but going via Stansted/Cambridge makes sense - especially when there is the conference on Tangible and Embedded Interaction (www.tei-conf.org) and Ryanair offers €10 flights (not sure about sustainability, though). Sustainability, from a different perspective, was also at the center of the Monday keynote by Tom Igoe, which I missed.
Nicolas and Shahram did a great job, and the choice to do a full day of demos worked out great. The large set of interactive demos presented captures and communicates a lot of the spirit of the community. To get an overview of the demos one has to read through the proceedings (I will post a link as soon as they are online in the ACM DL), as there are too many to discuss here.
Nevertheless here is my random pick:
One big topic is tangible interaction on surfaces. Several examples showed how interactive surfaces can be combined with physical artifacts to make interaction more graspable. Jan Borchers' group showed a table with passive controls that are recognized when placed on the table and provide tangible means for interaction (e.g. keyboard keys, knobs, etc.). An interesting effect is that the labeling of the controls can be done dynamically.
Microsoft Research showed an impressive novel tabletop display that allows two images to be projected - one onto the interactive surface and one onto the objects above it [1]. It was presented at last year's UIST, but I have now tried it out for the first time - and it is a stunning effect. Have a look at the paper (and before you read the details, make a guess how it is implemented - at the demo most people guessed wrong ;-)
Embedding sensing into artifacts to create a digital representation has always been a topic in tangible interaction - going back to the early work of Hiroshi Ishii on Triangles [2]. One interesting example in this year's demos was a set of cardboard pieces held together by hinges. Each hinge is technically realized as a potentiometer, and by measuring the positions the structure can be determined. It is really interesting to think this through further.
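Thinking it through: each potentiometer reading maps to a fold angle, and walking along the chain of hinges reconstructs the shape. A sketch in 2D; the ADC range and the angle mapping are assumptions:

import math

# Sketch: reconstruct a 2D folded strip from hinge potentiometer
# readings. Assumes a 10-bit ADC mapped linearly to +/-135 degrees.

def adc_to_angle_deg(raw, adc_max=1023, range_deg=270.0):
    return raw / adc_max * range_deg - range_deg / 2.0

def reconstruct(readings, segment_len=1.0):
    x = y = heading = 0.0
    points = [(x, y)]
    for raw in readings:                 # one reading per hinge
        heading += math.radians(adc_to_angle_deg(raw))
        x += segment_len * math.cos(heading)
        y += segment_len * math.sin(heading)
        points.append((round(x, 2), round(y, 2)))
    return points

print(reconstruct([512, 720, 300]))  # three hinges -> folded shape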
Conferences like TEI inevitably make you think about the feasibility of programmable matter - and there is ongoing work on this in the robotics community. The idea is to create micro-robots that can form arbitrary shapes - for a starting point see the work at CMU on Claytronics.
[1] Izadi, S., Hodges, S., Taylor, S., Rosenfeld, D., Villar, N., Butler, A., and Westhues, J. 2008. Going beyond the display: a surface technology with an electronically switchable diffuser. In Proceedings of the 21st Annual ACM Symposium on User interface Software and Technology (Monterey, CA, USA, October 19 - 22, 2008). UIST '08. ACM, New York, NY, 269-278. DOI= http://doi.acm.org/10.1145/1449715.1449760
[2] Gorbet, M. G., Orth, M., and Ishii, H. 1998. Triangles: tangible interface for manipulation and exploration of digital information topography. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Los Angeles, California, United States, April 18 - 23, 1998). C. Karat, A. Lund, J. Coutaz, and J. Karat, Eds. Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., New York, NY, 49-56. DOI= http://doi.acm.org/10.1145/274644.274652
Monday, 16 February 2009
Voice interaction - Perhaps it works ...
Today we visited Christian Müller at DFKI in Saarbrücken. He organized a workshop on automotive user interfaces at IUI last week. My talk was on new directions for user interfaces, in particular arguing for a broad view of multimodality. We showed some of our recent projects on car user interfaces. Dagmar gave a short overview of CARS, our simulator for evaluating driving performance and driver distraction, and we discussed options for potential extensions as well as shortcomings of the Lane Change Task.
Being a long-time skeptic about voice interfaces, I was surprised to see a convincing demo of a multimodal user interface combining voice and a tactile controller in the car. I think this could really be an interesting option for future interfaces.
Classical voice-only interfaces usually lack basic properties of modern interactive systems, e.g. as stated in Shneiderman's Golden Rules or in Norman's action cycle. In particular, the following points are most often not well realized in voice-only systems:
- State of the system is always visible
- Interactions with the system provide immediate and appropriate feedback
- Actions are easily reversible
- Opportunities for interaction are always visible
By combining a physical controller with voice, and having at the same time the objects of interaction visible to the user (as part of the physical system that is controlled, e.g. window, seat), these problems are addressed in a very interesting way - see the sketch below. I am looking forward to seeing more along these lines - perhaps we should no longer ignore speech interaction in our projects ;-)
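Here is a minimal sketch of how such a combination could satisfy the four points above: voice selects the object of interaction, the physical controller adjusts it, and state stays visible on the object itself. All names and values are illustrative, not the system demoed at DFKI:

# Illustrative sketch of a voice + tactile-controller car interface.
# Voice picks the object (noun); the knob adjusts it (verb).

objects = {"window": 0.0, "seat heating": 0.5}  # state 0..1, physically visible
selected = None

def on_voice(utterance):
    global selected
    if utterance in objects:
        selected = utterance
        return "selected " + utterance              # immediate feedback
    return "say one of: " + ", ".join(objects)      # visible opportunities

def on_knob(delta):
    if selected is None:
        return "select an object first"
    objects[selected] = min(1.0, max(0.0, objects[selected] + delta))
    return "%s at %.0f%%" % (selected, objects[selected] * 100)  # visible state

print(on_voice("window"))
print(on_knob(+0.25))
print(on_knob(-0.25))  # actions are easily reversible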
Sunday, 15 February 2009
Design Ideas and Demos at FH Potsdam
During the workshop last week in Potsdam we got to see demos from students of the Design of Physical and Virtual Interfaces class taught by Reto Wettach and JennyLC Chowdhury. The students had to design a working prototype of an interactive system. As base technology most of them used the Arduino board with custom-made extensions. For a set of pictures see my photo gallery and the photos on flickr. It would take pages to describe all of the projects, so I picked a few...
The project “Navel” (by Juan Avellanosa, Florian Schulz and Michael Härtel) is a belt with tactile output, similar to [1], [2] and [3]. The first idea along these lines that I tried out was GentleGuide [4] at Mobile HCI 2003 - it seemed quite compelling. The student project proposed one novel application idea: to use it in sport. That is quite interesting and could complement ideas proposed in [5].
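The mapping such belts use is straightforward: compute the direction to the target relative to the wearer's heading and drive the closest vibration motor. A sketch; the motor count and layout are assumptions:

# Sketch of tactile-belt guidance: buzz the motor pointing toward the
# target. Assumes 8 motors evenly spaced clockwise from the navel.

def motor_for_target(target_bearing_deg, heading_deg, n_motors=8):
    relative = (target_bearing_deg - heading_deg) % 360.0
    sector = 360.0 / n_motors
    return int((relative + sector / 2.0) // sector) % n_motors

# Wearer faces north (0), target due east (90): right-hand side motor.
print(motor_for_target(90, 0))  # -> 2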
Vivien's favorite was the vibrating doormat, a foot mat constructed of three vibrating tiles that can be controlled individually to present different vibration patterns. It was built by Lionel Michel, and he has several ideas for research questions this could address. I found the question of whether and how one can induce feelings and emotions with such a system especially interesting. In the same application context (doormats), another prototype looked at emotions, too: if you stroke or pat this mat, it comes out of its hiding place (Roll-o-mat by Bastian Schulz).
There were several projects on giving everyday objects more personality (e.g. a talking trash bin by Gerd-Hinnerk Winck) and making them emotionally reactive (e.g. lights that reacted to proximity). Firefly (by Marc Tiedemann) is one example of how reactiveness and hard-to-predict motion can lead to an interesting user experience. The movement appears really similar to that of a real firefly.
Embedding information has been an important topic in our research over the last years [6] - the demos provided several interesting examples: a cable that visualizes energy consumption and a keyboard for leaving messages. I also learned of a further example of an idea/patent application where information is included in the object - in this case in a tea bag. This is an extreme case, but looking into the future (and assuming that we get sustainable and bio-degradable electronics), it indicates an interesting direction and pushes the idea of information at your fingertips (Bill Gates' keynote in 1994) much further than originally intended.
For more photos see my photo gallery and the photos on flickr.
[1] Tsukada, K. and Yasumura, M.: ActiveBelt: Belt-type Wearable Tactile Display for Directional Navigation. Proceedings of UbiComp 2004, Springer LNCS 3205, pp. 384-399 (2004).
[2] Alois Ferscha et al. Vibro-Tactile Space-Awareness. Video paper, adjunct proceedings of UbiComp 2008.
[3] Heuten, W., Henze, N., Boll, S., and Pielot, M. 2008. Tactile wayfinder: a non-visual support system for wayfinding. In Proceedings of the 5th Nordic Conference on Human-Computer interaction: Building Bridges (Lund, Sweden, October 20 - 22, 2008). NordiCHI '08, vol. 358. ACM, New York, NY, 172-181. DOI= http://doi.acm.org/10.1145/1463160.1463179
[4] S. Bosman, B. Groenendaal, J.W. Findlater, T. Visser, M. de Graaf and P. Markopoulos. GentleGuide: An exploration of haptic output for indoors pedestrian guidance. Mobile HCI 2003.
[5] Mitchell Page, Andrew Vande Moere: Evaluating a Wearable Display Jersey for Augmenting Team Sports Awareness. Pervasive 2007. 91-108
[6] Albrecht Schmidt, Matthias Kranz, Paul Holleis. Embedded Information. UbiComp 2004, Workshop 'Ubiquitous Display Environments', September 2004
Wednesday, 11 February 2009
Towards interaction that is begreifbar
Since last year we have had a working group in Germany on graspable/tangible interaction in mixed realities.
In German, the key term we use is “begreifbar” or “begreifen”, which means to acquire a deep understanding of something; the word's literal meaning is to touch. Basically, understanding by touching - but in a more fundamental sense than grasping or getting a grip. Hence the list of translations for “begreifen” given in the dictionary is quite long.
Perhaps we should push for this word more in the international community - towards interaction that is begreifbar (English has too few foreign terms anyway ;-)
This meeting was organized by Reto Wettach in Potsdam, and the objective was to have two days to invent things together. The mix of participants mainly included people from computer science and design. It is always amazing how many ideas come up if you put 25 people in a room for a day :-) This week we followed up on some of the ideas related to new means of communication - there are definitely interesting student projects in this topic.
In the evening we had a half pecha-kucha (each person presents 10 slides of 20 seconds each - 3:20 in total; the original format is 20 slides) http://www.pecha-kucha.org/. It is a great way of quickly getting to know the work, research, ideas, and background of other people. It could be a format we use more in teaching, and perhaps for ad-hoc sessions at a new conference we are planning (e.g. http://auto-ui.org)... I prepared my slides on the train in the morning - and it is more challenging than expected to get a set of meaningful pictures together for 10 slides.
Overall the workshop showed that there is significant interest and expertise in Germany in moving from software ergonomics to modern human-computer interaction.
There is a new person on our team (starting next week) - perhaps you can spot him in the pics.
For a set of pictures see my photo gallery and the photos on flickr.
Sunday, 8 February 2009
Two basic references for interaction beyond the desktop
Following the workshop I got a few questions about what the important papers are that one should read to start on the topic. There are many (e.g. search Google Scholar for tangible interaction, physical interaction, etc. and you will see) and there are conferences dedicated to it (e.g. Tangible and Embedded Interaction, TEI - next week in Cambridge).
But if I have to pick two, here is my choice:
[1] Ishii, H. 2008. Tangible bits: beyond pixels. In Proceedings of the 2nd international Conference on Tangible and Embedded interaction (Bonn, Germany, February 18 - 20, 2008). TEI '08. ACM, New York, NY, xv-xxv. DOI= http://doi.acm.org/10.1145/1347390.1347392
[2] Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T., and Zigelbaum, J. 2008. Reality-based interaction: a framework for post-WIMP interfaces. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 - 10, 2008). CHI '08. ACM, New York, NY, 201-210. DOI= http://doi.acm.org/10.1145/1357054.1357089
Friday, 6 February 2009
What happens if Design meets Pervasive Computing?
This morning I met with Claudius Lazzeroni, a colleague from Folkwang Hochschule (they were part of our University till two years ago).
They have different study programs in design and art related subjects. He showed me some projects (http://www.shapingthings.net/ - in German, but with lots of pictures that give you the idea). Many of the ideas and prototypes relate to our work, and I hope we get some joint projects going. I think it could be really exciting to have projects with design and computer science students - looking forward to this!
When I was in the UK we collaborated in the Equator project with designers - mainly Bill Gaver and his group - and the results were really exciting [1]. We built a table that reacted to load changes on its surface and allowed you to fly virtually over the UK. The paper is worth reading - and if you are in a hurry, have a look at the movie about it on YouTube: http://www.youtube.com/watch?v=uRKOypmDDBM
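The control principle is simple to sketch: load sensors in the table corners yield a centre of pressure, and its offset from the middle becomes the drift direction. A sketch under the assumption of four corner sensors (the real table's sensor layout may differ):

# Sketch of drift-table-style control (assumed four corner load cells).
# The centre of pressure on the surface steers the virtual flight.

def drift_direction(front_left, front_right, back_left, back_right):
    total = front_left + front_right + back_left + back_right
    if total == 0:
        return (0.0, 0.0)
    x = ((front_right + back_right) - (front_left + back_left)) / total
    y = ((front_left + front_right) - (back_left + back_right)) / total
    return (x, y)  # normalized offset of the centre of pressure

# Place a heavy book near the front-right corner:
dx, dy = drift_direction(1.0, 5.0, 1.0, 2.0)
print("drift: %.2f right, %.2f forward" % (dx, dy))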
There was a further project with a table - a key table - and for this one there is a funnier (and less serious?) video on YouTube: http://www.youtube.com/watch?v=y6e_R5q-Uf4
[1] Gaver, W. W., Bowers, J., Boucher, A., Gellerson, H., Pennington, S., Schmidt, A., Steed, A., Villars, N., and Walker, B. 2004. The drift table: designing for ludic engagement. In CHI '04 Extended Abstracts on Human Factors in Computing Systems (Vienna, Austria, April 24 - 29, 2004). CHI '04. ACM, New York, NY, 885-900. DOI= http://doi.acm.org/10.1145/985921.985947