Google+ circles are well argued on a conceptual basis (e.g. the much-talked-about real-world analogy), but it seems they do not work too well for many of us. I thought I would share my limited observations in a blog post (if I had done a real study I would publish it at a top conference ;-)
To me, deciding what circles I need and where to put people in these circles is pretty hard – ok, I am in academia and this is not a typical environment (separation of work, hobby, friends...). Which of my co-workers are friends? Do I differentiate between students in a course and the ones who do a thesis with me? Who belongs to “family”, or do I need 5 or more categories to describe my family? It seems the number of circles grows as fast as the number of friends. It's probably just me who cannot discriminate between different parts of life.
The implication of the many circles is that I have to make many more decisions than on facebook. If I accept an invitation, it is a yes/no/not-now decision on facebook (about 300-500ms plus the time to click ;-) … much longer with circles. When I post it is again time for making decisions – whom to include and whom not to include.
For me, the main issue with circles is the responsibility in sharing. In theory this is the great advantage – but in real life I think it is not (it is just a way of keeping the old way of communication alive for some more time – if I want to address specific people I can use email ;-). As others know that I have the choice to limit sharing to circles, the expectation is that I manage this well. With whom should I share my unhappiness about a too-long faculty meeting – thinking in circles – probably no one (or only the people waiting for me). Who should know that I have read an interesting article about planting bamboo – again, in circles – probably only my wife, because she asked me about it.
In summary, this privilege (or responsibility) of being able to specify with whom we share information makes the posts much more predictable. I share with the HCI community the calls for papers, links to surveys for which we need participants, and the great papers we published; I share with the family the nice photo from our weekend hike; and I share with my students a link to a great article in the Pervasive magazine they should read. Given my option to share with groups, sharing a photo of my daughter and me building a pneumatic lift with my students and colleagues would be inappropriate. However, I argue that sharing beyond circles – sharing things we would usually not share with this group – is what makes my facebook stream so much more exciting than the google+ stream. The comments of the people whom I would not have included in circle-based addressing are often the most interesting ones. From an information-theoretical point of view, the facebook stream has more entropy and carries massively more information, as it is less predictable.
… and on facebook we (still) have an excuse (a sort of plausible deniability), as there is no real responsibility for the sender to limit the receivers – it is just a binary decision of whether it is OK to share or not.
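The entropy point can be made concrete with a tiny sketch. The audience distributions below are made up purely for illustration: a stream where the addressed circle almost always responds carries less Shannon information per post than one where any group might chime in.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical response distributions over four audience groups.
# Circle-based sharing: the addressed circle almost always responds.
circle_stream = [0.85, 0.05, 0.05, 0.05]
# Open sharing: any group is equally likely to respond.
open_stream = [0.25, 0.25, 0.25, 0.25]

print(entropy(circle_stream))  # lower: the stream is predictable
print(entropy(open_stream))    # 2.0 bits: the maximum for four groups
```

The numbers are invented, but the shape of the argument holds: the less predictable stream has the higher entropy.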
Friday, 5 August 2011
Monday, 11 July 2011
cfp: IEEE Special Issue on Interaction Beyond the Keyboard
IEEE Computer will have a special issue on "Interaction beyond the Keyboard" ... and until 1 November 2011 you still have a chance to submit :-)
--- from the call (http://www.computer.org/portal/web/computingnow/cocfp4) ---
Final submissions due: 1 November 2011
Publication date: April 2012
IEEE Computer seeks submissions for an April 2012 special issue on interaction beyond the keyboard.
Interaction with computers has become an integral part of daily life for most people. When making a phone call, listening to music, taking a photo, getting money from an ATM, or driving a car, we operate computer systems with complex functionalities. As technologies progress, the proliferation of computing technologies increases, and simple user interfaces and ease of use are becoming key success factors for a wide range of products.
Although the keyboard and mouse are still the dominant user interfaces in home and office environments, with the massive increase in mobile device usage and the many new interaction technologies available, the way we interact with computers is becoming richer and more diverse. Touch-enabled surfaces, natural gestures, implicit interaction, and tangible user interfaces mark some of these trends.
The overall goal of interaction beyond the keyboard is to create natural and intuitive forms of human-computer interaction that make it easier for people to achieve their goals while using computers as tools.
For this special issue, we seek original research that describes groundbreaking new devices, methods, and approaches to human-computer interaction in a world of ubiquitous computer use. In particular, we're looking for exciting work that is concerned with the following topics:
- interactive surfaces and tabletop computing;
- mobile computing user interfaces and interaction while on the go;
- tangible interaction and graspable user interfaces;
- embedded user interfaces and embodied interaction;
- natural interaction and gestures; and
- user interfaces based on physiological sensors and actuators.
-----
please see: http://www.computer.org/portal/web/computingnow/cocfp4
Tuesday, 5 July 2011
Percom 2012 - call for papers
Percom2012 - Call for papers as PDF or as text-file.
PerCom 2012
IEEE International Conference on Pervasive Computing and Communications
March 19 - 23, 2012, Lugano, Switzerland
CALL FOR PAPERS
IEEE PerCom, now in its 10th edition, has established itself as the premier annual scholarly venue in the areas of pervasive computing and communications. Pervasive computing and communications has evolved into an active area of research and development, due to the tremendous advances in a broad spectrum of technologies and topics including wireless networking, mobile and distributed computing, sensor systems, RFID technology, and the ubiquitous mobile phone.
PerCom 2012 will be held in Lugano, an international city and the crossroads and melting pot of European culture. PerCom 2012 will provide a leading-edge, scholarly forum for researchers, engineers, and students alike to share their state-of-the-art research and developmental work in the broad areas of pervasive computing and communications. The conference will feature a diverse mixture of interactive forums: core technical sessions of high-quality cutting-edge research articles; targeted workshops on exciting topics; live demonstrations of pervasive computing in action; insightful keynote speeches; panel discussions from domain experts; and posters of budding ideas. Research contributions are solicited in all areas pertinent to pervasive computing and communications, including:
- Innovative pervasive computing applications
- Context modeling and reasoning
- Programming paradigms for pervasive systems
- Software evolution and maintenance in pervasive systems
- Middleware services and agent technologies
- Adaptive, autonomic and context-aware computing
- Mobile/Wireless computing systems and services in pervasive computing
- Energy-efficient and green pervasive computing
- Communication architectures for pervasive computing
- Ad hoc networks for pervasive communications
- Pervasive opportunistic communications and applications
- Enabling technologies for pervasive systems (e.g., wireless BAN, PAN)
- Positioning and tracking technologies
- Sensors and RFIDs in pervasive systems
- Multimodal sensing and context for pervasive applications
- Pervasive sensing, perception and semantic interpretation
- Smart devices and intelligent environments
- Trust, security and privacy issues in pervasive systems
- User interface, interaction, and persuasion
- Pervasive computing aspect of social network software
- Virtual immersive communications
- Wearable computers
- Standards and interfaces for pervasive computing environments
- Social and economic models for pervasive systems
Workshops and affiliated events:
Many workshops will be held in conjunction with the main conference. Workshop papers will be included and indexed in the IEEE digital libraries (Xplore), showing their affiliation with IEEE PerCom. As in the past, PerCom 2012 will also feature a PhD Forum, Demonstrations and a Work-in-Progress Session. Please see the website www.percom.org for details on current and past PerCom conferences.
Important Dates
Paper Registration: Sep 23, 2011
Paper Submission: Sep 26, 2011
Author Notification: Dec 20, 2011
Camera-ready Due: Jan 27, 2012
Submission Guidelines
Submitted papers must be unpublished and not considered elsewhere for publication. They must show significant relevance to pervasive computing and networking. Only electronic submissions in PDF format will be considered. Papers must be 9 pages or less, including references, figures and tables (at least 10pt font, 2-column format). The IEEE LaTeX and Microsoft Word templates, as well as formatting instructions, can be found at the conference web site. Submissions will undergo a rigorous review process handled by the Technical Program Committee. The best paper will receive the prestigious Mark Weiser Best Paper Award. Top selected papers will be considered for a special issue of the Elsevier journal of Pervasive and Mobile Computing (PMC).
For additional information, see www.percom.org for details on current and past PerCom conferences, or contact the PerCom 2012 organizing committee at percom2012@supsi.ch
Organizing Committee
General Co-Chairs
Silvia Giordano, SUPSI, CH
Marc Langheinrich, Univ. of Lugano, CH
Program Chair
Albrecht Schmidt, Univ. of Stuttgart, DE
Vice Program Co-Chairs
Jie Liu, Microsoft Research, USA
Georges Roussos, Univ. of London, UK
Alexander Varshavsky, AT&T Labs, USA
Workshops Co-Chairs
Pedro Marron, Univ. Duisburg-Essen, DE
Marius Portmann, Univ. of Queensland, AU
Steering Committee Chair
Marco Conti, IIT-CNR, IT
Friday, 1 July 2011
Our Article on Phones as Components of Future Appliances is Published in IEEE Pervasive Magazine
In this paper we reflect on the opportunities that arise from using consumer devices, such as phones and mp3 players, as components for future devices. This article also launches a new department on Innovations in Ubicomp Products. The article “Phones and MP3 Players as the Core Component in Future Appliances” [1] is also openly available at ComputingNow.
The rationale is:
- developing a custom embedded computer is expensive
- custom devices are not economical in small quantities
- phones are becoming cheap (in small quantities, a phone may be cheaper than buying a touch screen component for an embedded device)
- development on phones has become easy and many developers are available
- I/O capabilities can be added to these devices (e.g. Project HiJack)
[1] Albrecht Schmidt and Dominik Bial. 2011. Phones and MP3 Players as the Core Component in Future Appliances. IEEE Pervasive Computing 10, 2 (April 2011), 8-11. DOI=10.1109/MPRV.2011.31 http://dx.doi.org/10.1109/MPRV.2011.31 (also available in ComputingNow, download PDF)
Wednesday, 29 June 2011
Summer school in St Andrews, Teaching Context-Awareness
I had the privilege to teach a course on context-awareness [1] as part of the SICSA Summer School on Multimodal Systems for Digital Tourism. The summer school was directed by Aaron Quigley (University of St Andrews), Eva Hornecker (University of Strathclyde), Jon Oberlander (University of Edinburgh) and Stephen Brewster (University of Glasgow).
It was very exciting to discuss with the students ideas for novel digital devices to support tourists and come up with new concepts in this domain. Ideas ranged from interactive umbrellas (taking the concept described in [2] further) to digital souvenirs that ensure a lasting memory.
On Monday night Chris Speed gave an inspiring talk on ghosts, memories, and things, reflecting on history, the Internet of Things, and how we perceive the world around us in a very thought-provoking way. He inspired us to think about the stories and memories that surround us and that are inherently linked to all the things humans use. … it has been a long time since a story about ghosts made this much sense :-)
When going back we saw a great example of a security system that is based on physical constraints... you can open it from the inside but not from the outside:
Aaron asked me to talk on context-awareness. I structured the talk along the lines of a soon-to-appear chapter on www.interaction-design.org. To me, one of the – still remaining – fundamental challenges in HCI with context-aware systems is that the system as well as the human is adaptive. And as people often learn incredibly fast, the adaptation may be counterproductive; hence it is essential to take this into account. Have a look at my slides if you would like to learn more about context-awareness and HCI.
When we were there, we learned that St Andrews is the place to play golf – the Old Course is where you need to go. Looking more closely, it became clear that this is for others ;-) but there is an option for the rest of us. It is called The Ladies Putting Club St Andrews, “Himalayas” – just walk in and play (2 pounds per person, and no need to book a year ahead). And if your friends don’t play golf, you can still get away with the photos you take there, as it is only 5 meters from the Old Course.
[1] http://dl.dropbox.com/u/5633502/talk/context-aware-systems-004-print-small.pdf
[2] Sho Hashimoto, Takashi Matsumoto. The Internet Umbrella. http://www.pileus.net/
Wednesday, 22 June 2011
Somnometer – A Social Alarm Clock – Users Wanted!
We have continued our work on the social alarm clock for Android phones. The Somnometer App can be used as a regular alarm clock but offers functions to:
(1) rate your sleep
(2) monitor your sleep duration (manually based on wake-up time)
(3) have graphical representations of the sleep quality and duration
(4) optionally share some of this information with your friends on facebook
Are you interested in trying this alarm clock application? Please have a look at the app home page (http://somnometer.hcilab.org) or download it from the Android Market.
We are looking for volunteers to participate in a study with this alarm clock application. If you are interested in the new functions and if you are an active facebook user, please contact us. There will also be a chance to take part in a comparative study using a different sleep monitoring device and the alarm application. Our email address for the project is: somnometer@hcilab.org.
Monday, 20 June 2011
Self-expression, Belonging, and Respect – Is Taking Risks Part of it?

I sometimes feel in these discussions that I want to put things into perspective… We do a lot of things that are not reasonable in order to express ourselves and to present an image to our peer group (e.g. tattoos and piercings are common, and there are risks associated with them). We want to belong to a group, and hence we do things that are expected by our peers or even to impress them (e.g. doing a skateboard trick without protection or skiing where it is not allowed). If you think hard, you can probably remember many situations where you took major risks (when you were young)… Yesterday night I saw a documentary on TV about the Hippie movement of the 1960s/1970s. In comparison to the risks young people took in order to change the world (or just to be different and accepted in their peer group), the risks you take on the Internet seem very tame…
There is a further point we can learn from this: eventually society (and the law) will catch up, and some of the innovations will stay and change society. But some will not be accepted… People need to explore boundaries – otherwise progress is unlikely.
For many people who explored boundaries in the 1970s (ranging from drugs to violence – in ways we agree today are completely unacceptable), this has not hindered their careers. People generally see actions in context… Hence, having the “wrong” photo on facebook is probably not harming someone’s career (but the time they spend on facebook rather than revising for exams may).
Friday, 17 June 2011
Gestural Input on a Touch Screen Steering Wheel in the Media
Shortly after the conference, a journalist from Discovery News picked the topic up and did some interviews. This resulted in an article: "Touch-Screen Steering Wheel Keeps Eyes on Road" (Discovery News, 6.6.2011)
After this, it found its way around and appeared more widely than expected :-) Examples include:
- "Touch-screen steering wheel keeps drivers focused on the road" (Physorg, 6.6.2011)
- "German Researchers Working On Multi-touch Gestures Steering Wheel" (Softsailor, 15.6.2011)
- "New Touchscreen Steering Wheel May Reduce Driver Distractions" (Extreme Tech, 14. 6. 2011)
- "Researchers invent touchscreen steering wheel" (Sympatico.ca, 13.6.2011)
- "Touch Screen Steering Wheel Aims To Improve Driving Safety" (PSFK, 13.6.2011)
- "Researchers working on Touch Schreen Steering Wheel" (Pistonheads, 13.6.2011)
- http://www.n-tv.de/auto/Wischen-wechselt-Radiosender-article3572276.html
- http://www.auto.de/magazin/showArticle/article/55371/Touchscreen-Lenkrad-Wischen-wechselt-Radiosender
- http://www.focus.de/auto/news/touchscreen-lenkrad-wischen-wechselt-radiosender_aid_636714.html
[2] http://www.youtube.com/watch?v=R_32jOlQY7E Gestural Interaction on the Steering Wheel - Reducing the Visual Demand. chi2011madness Video.
Keynote at EICS 2011
From the research we did over the last 15 years I picked some lessons learned:
- Novelty may be about the values/ethics
- Implement it and try it out!
- 20% who like the UI/system are a large market
- Humans are smart and adaptive
- Design for creative users
[2] Pdnet project homepage: http://pd-net.org/
Tuesday, 17 May 2011
CHI 2011 in Vancouver, Keynote and Papers


This year we had a number of papers describing our research in CHI:
- Elba reported on the field study in Panama using mobile phones to enhance teaching and learning [1]
- Ali presented work on how to increase the connectedness between people by simple means of iconic communication in the context of a sports game [2]
- Tanja showed how touch and gestural input on a steering wheel can reduce the visual distraction for a driver [3], and
- Gilbert (from LMU Munich) presented work on interaction with cylindrical screens [4].
The most inspiring and at the same time most controversial paper for me was PossessedHand by Jun Rekimoto et al. [5]. They reported their results on using electrical stimulation to move the fingers of a hand.
[1] Elba del Carmen Valderrama Bahamondez, Christian Winkler, and Albrecht Schmidt. 2011. Utilizing multimedia capabilities of mobile phones to support teaching in schools in rural panama. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 935-944. DOI=10.1145/1978942.1979081 http://doi.acm.org/10.1145/1978942.1979081
[2] Alireza Sahami Shirazi, Michael Rohs, Robert Schleicher, Sven Kratz, Alexander Müller, and Albrecht Schmidt. 2011. Real-time nonverbal opinion sharing through mobile phones during sports events. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 307-310. DOI=10.1145/1978942.1978985 http://doi.acm.org/10.1145/1978942.1978985
[3] Tanja Döring, Dagmar Kern, Paul Marshall, Max Pfeiffer, Johannes Schöning, Volker Gruhn, and Albrecht Schmidt. 2011. Gestural interaction on the steering wheel: reducing the visual demand. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 483-492. DOI=10.1145/1978942.1979010 http://doi.acm.org/10.1145/1978942.1979010
[4] Gilbert Beyer, Florian Alt, Jörg Müller, Albrecht Schmidt, Karsten Isakovic, Stefan Klose, Manuel Schiewe, and Ivo Haulsen. 2011. Audience behavior around large interactive cylindrical screens. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 1021-1030. DOI=10.1145/1978942.1979095 http://doi.acm.org/10.1145/1978942.1979095
[5] Emi Tamaki, Takashi Miyaki, and Jun Rekimoto. 2011. PossessedHand: techniques for controlling human hands using electrical muscles stimuli. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 543-552. DOI=10.1145/1978942.1979018 http://doi.acm.org/10.1145/1978942.1979018
Sunday, 17 April 2011
Floor for activity recognition
Patrick Baudisch showed interesting photos at the Microsoft Software Summit in Paris of their DIY activity in creating a glass floor for tracking and activity recognition. With a fairly large glass pane in the floor they have created an interesting environment… I am sure there will be interesting things coming out of this installation.
Some years back, in 2002 (and looking at the photos and the amount of hair I still had, this seems long ago), we in Lancaster also looked into what to do with floors (and we were into DIY as well). We also considered arrangements with a floor, with tables and furniture on top. As you can see from Kristof, Hans, and me in the photo, it was a fun project.
The positive point of using load sensing is that you can track unobtrusively, and at potentially large scales, with little instrumentation. We even considered the possibility of putting a house on 4 load cells and doing activity recognition based on this. We never got around to building the house ;-) The problem with load sensing is that you can only track one moving object/subject at a time.
Looking at the signature of the measured load and doing some signal processing, we could detect events – unobtrusively and cheaply – but only single events.
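To illustrate the kind of signal processing involved (a hypothetical sketch, not the code we used back then): even a simple change detector over the raw load signal already finds step-on/step-off events.

```python
def detect_events(samples, threshold):
    """Return indices where the load signal jumps abruptly, i.e.
    candidate events such as a person stepping on or off the floor,
    or an object being placed.
    samples: load readings (e.g. in kg); threshold: minimum jump."""
    events = []
    for i in range(1, len(samples)):
        if abs(samples[i] - samples[i - 1]) >= threshold:
            events.append(i)
    return events

# A person (75 kg) steps onto the floor at index 3 and off at index 7.
signal = [0, 0, 0, 75, 75, 75, 75, 0, 0]
print(detect_events(signal, threshold=10))  # → [3, 7]
```

A real load-sensing pipeline would smooth the signal first and classify the event signatures, but the core idea – detecting discrete changes in a single aggregate measurement – is the reason only one moving subject can be tracked at a time.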
Interested in more details? Have a look at the publications on load sensing [1], on the interaction [2], and at a patent [3] describing the basic technology.
[1] Schmidt, A., Strohbach, M., Laerhoven, K. v., Friday, A., and Gellersen, H. 2002. Context Acquisition Based on Load Sensing. In Proceedings of the 4th international Conference on Ubiquitous Computing (Göteborg, Sweden, September 29 - October 01, 2002). Springer-LNCS, London, 333-350.
[2] Schmidt, A.; Strohbach, M.; van Laerhoven, K. & Hans-W., G. 2003. Ubiquitous Interaction - Using Surfaces in Everyday Environments as Pointing Devices, Universal Access Theoretical Perspectives, Practice, and Experience (UI4ALL 2003), Springer LNCS, 263-279.
[3] Schmidt, A., Strohbach, M., Van Laerhoven, K., Friday, A., Gellersen, H-W., Kubach, U.; Context acquisition based on load sensing. US Patent 7434459. US Patent Issued on October 14, SAP AG (DE), 2008
Some years back in 2002 (and looking at the photos and the amount of hair I still had this seems long ago) in Lancaster we also looked in what to do with floors (and we were into DIY as well). We also considered arrangements with a floor tables and furniture on top. As you can see from Kristof, Hans and me on the photo it was a fun project.
The positive point in using load sensing is that you can track unobtrusive and potentially large scales with little instrumentation. We even considered the possibility to put a house on 4 load cells and do activity recognition based on this. We never got around to building the house ;-) The problem with load sensing is that you can only track one moving object/subject at the time.
Looking at the signature of the load measured and doing some signal processing we could detect events – unobtrusive and cheap – but only for single events.
Interested in more details? Have a look at the publications on load sensing [1], on the interaction [2], and at a patent [3] describing the basic technology.
[1] Schmidt, A., Strohbach, M., Van Laerhoven, K., Friday, A., and Gellersen, H.-W. 2002. Context Acquisition Based on Load Sensing. In Proceedings of the 4th International Conference on Ubiquitous Computing (Göteborg, Sweden, September 29 - October 01, 2002). Springer LNCS, London, 333-350.
[2] Schmidt, A., Strohbach, M., Van Laerhoven, K., and Gellersen, H.-W. 2003. Ubiquitous Interaction - Using Surfaces in Everyday Environments as Pointing Devices. In Universal Access: Theoretical Perspectives, Practice, and Experience (UI4ALL 2003). Springer LNCS, 263-279.
[3] Schmidt, A., Strohbach, M., Van Laerhoven, K., Friday, A., Gellersen, H.-W., and Kubach, U. Context Acquisition Based on Load Sensing. US Patent 7434459, issued October 14, 2008. SAP AG (DE).
Labels:
activity recognition,
context,
DIY,
load sensing
Monday, 11 April 2011
WP7 Tutorial - part 5: Orientation and Acceleration - X,Y,Z
Detecting gestures, orientation, and movement can be realized with the accelerometer. The accelerometer is a sensor that measures acceleration in 3 dimensions (X, Y, and Z). If the device is not moved, the measured acceleration is the gravitational force in each direction. If the device is accelerated, the measured values are a combination of the acceleration and gravity.
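The point that gravity and motion mix in the readings can be illustrated with a tiny check. Here is a sketch in Python rather than C# (the function name and the tolerance are my own choices, purely for illustration): if the magnitude of the (X, Y, Z) vector is close to 1 g, the device is probably at rest and the readings describe gravity alone; otherwise the device is being accelerated.

```python
import math

def is_probably_at_rest(x, y, z, tolerance=0.05):
    """If the device is not moved, the measured vector is gravity alone,
    so its magnitude should be close to 1 g (values in gravitational units)."""
    magnitude = math.sqrt(x * x + y * y + z * z)
    return abs(magnitude - 1.0) < tolerance
```

For example, a device lying flat on a table reports roughly (0, 0, -1), magnitude 1 g; shaking it makes the magnitude deviate from 1.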
The accelerometer data can be accessed via the AccelerometerReadingEventArgs class. The class has values for the X, Y, and Z axes. The values are of type double and lie between -2 and 2, giving the acceleration "for each axis in gravitational units" - 1 is the gravitational force of the earth. See: http://msdn.microsoft.com/en-us/library/ff431744(v=vs.92).aspx and http://msdn.microsoft.com/en-us/library/ff431810(v=vs.92).aspx or page 80ff, C. Petzold, Programming Windows Phone 7.
A typical exercise for understanding the accelerometer is to create a bubble level (a tool to measure whether something is horizontal or vertical - e.g. for hanging pictures on the wall). You probably want to freshen up on arctan2 - at least I needed to ;-)
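To sketch the maths behind such a bubble level, here is a small Python illustration (Python rather than C#, and the function names are mine): it converts readings in gravitational units into pitch and roll angles using atan2, and checks whether the device is level.

```python
import math

def tilt_angles(x, y, z):
    """Pitch and roll in degrees from accelerometer readings in
    gravitational units, assuming the device is at rest."""
    pitch = math.degrees(math.atan2(y, math.sqrt(x * x + z * z)))
    roll = math.degrees(math.atan2(x, math.sqrt(y * y + z * z)))
    return pitch, roll

def is_level(x, y, z, tolerance_deg=1.0):
    """True if both tilt angles are within the tolerance."""
    pitch, roll = tilt_angles(x, y, z)
    return abs(pitch) < tolerance_deg and abs(roll) < tolerance_deg
```

When the device lies flat (X=0, Y=0, Z=-1) both angles are 0; tilting it along one axis moves the corresponding angle towards 90 degrees.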
See below the C# example reading out the accelerometer on a Windows Phone 7. You can also download the accelerometer project directory as a single ZIP file.
using System;
using System.Windows;
using Microsoft.Phone.Controls;
using Microsoft.Devices.Sensors;

// A simple example to read the accelerometer and display the values.
// In order to make it work you have to add the reference to
// Microsoft.Devices.Sensors to your project. To do this right-click
// in the Solution Explorer on References, choose Add Reference, and
// in the dialog select Microsoft.Devices.Sensors.
// Albrecht Schmidt, University of Stuttgart
// For a more comprehensive example see:
// http://msdn.microsoft.com/en-us/library/ff431810(v=vs.92).aspx
// http://msdn.microsoft.com/en-us/library/ff431744(v=vs.92).aspx
// and page 80ff, C. Petzold, Programming Windows Phone 7

namespace Accl_X_Y_Z
{
    public partial class MainPage : PhoneApplicationPage
    {
        Accelerometer accelerometer;

        public MainPage()
        {
            InitializeComponent();
            // create a new instance
            accelerometer = new Accelerometer();
            // register a callback function for when values change
            accelerometer.ReadingChanged += new EventHandler<AccelerometerReadingEventArgs>(accelerometer_ReadingChanged);
            // start the accelerometer
            accelerometer.Start();
        }

        void accelerometer_ReadingChanged(object sender, AccelerometerReadingEventArgs e)
        {
            // required as the textBlocks cannot be accessed from this thread
            Deployment.Current.Dispatcher.BeginInvoke(() => ChangeUI(e));
        }

        void ChangeUI(AccelerometerReadingEventArgs e)
        {
            // show the values on the screen
            textBlock1.Text = "X: " + e.X.ToString("0.000");
            textBlock2.Text = "Y: " + e.Y.ToString("0.000");
            textBlock3.Text = "Z: " + e.Z.ToString("0.000");
        }
    }
}
Labels:
acceleration,
accelerometer,
orientation,
programming,
sensor,
tutorial,
windows phone 7,
wp7
WP7 Tutorial - part 4: Storing Data on the Phone
If you want to save high scores or preferences you need persistent memory on the phone. On a traditional computer you would create a file and store your information in it; another option on a Windows PC would be to store such information in the registry. For security reasons there is no API to access the file system, and there is no global persistent memory shared across applications on a WP7.
In general there are two ways to store data: (1) in an application-specific storage (isolated storage) on the phone or (2) remotely on the internet (or, to use another buzzword, "in the cloud").
In this example the use of the phone's isolated storage API is demonstrated. It is shown how to store and retrieve name-value pairs on the phone (for people who have programmed Java ME this is conceptually similar to the record store).
For more details see: http://msdn.microsoft.com/en-us/library/cc221360(v=VS.95).aspx and page 126ff, C. Petzold, Programming Windows Phone 7. It is also possible to create an isolated storage for files; see: http://msdn.microsoft.com/en-us/library/system.io.isolatedstorage.isolatedstoragefile(v=VS.95).aspx
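Conceptually the save/load pattern is just a name-value store. A minimal Python sketch of the same idea (a toy stand-in of my own, not part of any WP7 API) makes the pattern clear:

```python
class AppSettings:
    """Toy name-value store mirroring the IsolatedStorageSettings pattern:
    overwrite the value if the key exists, create it otherwise, and
    return a sentinel when loading an unknown key."""

    def __init__(self):
        self._store = {}

    def save(self, name, value):
        # in C# this is a Contains() check plus indexer vs. Add();
        # a Python dict assignment covers both cases
        self._store[name] = value

    def load(self, name):
        # return the stored value, or a sentinel if the key is unknown
        return self._store.get(name, "_notSet_")
```

A high-score example: save("highscore", "42") creates the entry, a later save overwrites it, and load of a never-saved key yields the sentinel.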
See below the C# example using the local storage on a Windows Phone 7. You can also download the IsolatedStorageSettings project directory as a single ZIP file.
using System;
using System.Windows;
using Microsoft.Phone.Controls;
using System.IO.IsolatedStorage;

// Example of how to save to and load from the isolated application storage.
// This helps to create persistent storage within a single application;
// in this example it is shown how to do it for a string.
// Albrecht Schmidt, University of Stuttgart
// For more details see:
// http://msdn.microsoft.com/en-us/library/cc221360(v=VS.95).aspx
// page 126ff, C. Petzold, Programming Windows Phone 7
// storing files/directory structures see:
// http://msdn.microsoft.com/en-us/library/system.io.isolatedstorage.isolatedstoragefile(v=VS.95).aspx

namespace PhoneStorage
{
    public partial class MainPage : PhoneApplicationPage
    {
        // Constructor
        public MainPage()
        {
            InitializeComponent();
        }

        #region Save and Load Parameters from the Application Storage
        void saveToAppStorage(String ParameterName, String ParameterValue)
        {
            // use mySettings to access the app's storage
            IsolatedStorageSettings mySettings = IsolatedStorageSettings.ApplicationSettings;
            // check if the parameter is already stored
            if (mySettings.Contains(ParameterName))
            {
                // if the parameter exists write the new value
                mySettings[ParameterName] = ParameterValue;
            }
            else
            {
                // if the parameter does not exist create it
                mySettings.Add(ParameterName, ParameterValue);
            }
        }

        String loadFromAppStorage(String ParameterName)
        {
            String returnValue = "_notSet_";
            // use mySettings to access the app's storage
            IsolatedStorageSettings mySettings = IsolatedStorageSettings.ApplicationSettings;
            // check if the parameter exists
            if (mySettings.Contains(ParameterName))
            {
                // if the parameter exists read its value
                mySettings.TryGetValue<String>(ParameterName, out returnValue);
                // alternatively the following statement can be used:
                // returnValue = (String)mySettings[ParameterName];
            }
            return returnValue;
        }
        #endregion

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            saveToAppStorage("myIdentifier1", "Last used @ " + System.DateTime.Now.ToString("HH:mm:ss"));
            textBox1.Text = "saved...";
        }

        private void button2_Click(object sender, RoutedEventArgs e)
        {
            textBox1.Text = loadFromAppStorage("myIdentifier1");
        }
    }
}
Labels:
programming,
storage,
tutorial,
windows phone 7,
wp7
Sunday, 27 March 2011
Online German Language Corpus, UCREL Summer School
Since I shared an office at Lancaster University with Paul Rayson from UCREL (University Centre for Computer Corpus Research on Language), I find corpus linguistics an interesting topic. By the way, UCREL runs a Summer School in Corpus Linguistics from 13 to 15 July 2011 - would love to go there...
Friday, 25 March 2011
WP7 Tutorial - part 3: Using Location
The basic approach is to create an instance of GeoCoordinateWatcher and register two callback functions: one for when the status changes and one for when the location changes. The program demonstrates how these callbacks are set up and how, from within those functions, the user interface is updated with the received information. If the status changes, the program checks the current status and shows it in the status line (textBlock8.Text). If the position changes, the new position information (Position.Location.Longitude, Position.Location.Latitude) and additional information such as Speed, Altitude, Course, and Accuracy are shown.
As an exercise you can build an application that shows you how close you are to a given target. In two input fields you enter the longitude and latitude of the destination (e.g. a geocache location). You can then calculate the distance from the current position to the target location and visualize or sonify it.
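For the distance calculation in this exercise, the haversine formula is a common choice. Here is a sketch in Python rather than C# (the function name and the use of a mean earth radius of 6371 km are my own choices):

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points
    (haversine formula, mean earth radius 6371 km)."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))
```

As a sanity check, one degree of latitude is roughly 111 km; for short distances, as in geocaching, the formula is more than accurate enough.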
There is another example (GeoCoordinateWatcher) of how to use this API on the Microsoft MSDN website. C. Petzold's book also contains a good example; see page 91ff.
See below the C# example using geo location on a Windows Phone 7. You can also download the geolocation project directory as a single ZIP file.
using System;
using System.Collections.Generic;
using System.Windows;
using Microsoft.Phone.Controls;
using System.Device;
using System.Device.Location;

// the example shows the basic functionality of the location device
// you need to add a reference to System.Device:
// right-click on References in the Solution Explorer, click Add Reference,
// and then select System.Device
// Albrecht Schmidt, University of Stuttgart
// for a more comprehensive example see:
// http://msdn.microsoft.com/en-us/library/system.device.location.geocoordinatewatcher.aspx
// http://msdn.microsoft.com/en-us/library/ff431744(v=vs.92).aspx
// and page 91ff, C. Petzold, Programming Windows Phone 7

namespace Geo_Location
{
    public partial class MainPage : PhoneApplicationPage
    {
        GeoCoordinateWatcher watcher;

        // Constructor
        public MainPage()
        {
            InitializeComponent();
        }

        // the initialize and start button is pressed
        private void button1_Click(object sender, RoutedEventArgs e)
        {
            // initialize the geo watcher with default accuracy (battery saving);
            // use GeoPositionAccuracy.High for higher accuracy
            watcher = new GeoCoordinateWatcher(GeoPositionAccuracy.Default);
            // set movement threshold - as distance in meters - default is 0
            watcher.MovementThreshold = 10;
            // add a handler that is called when the position changes by more than MovementThreshold
            watcher.PositionChanged += new EventHandler<GeoPositionChangedEventArgs<GeoCoordinate>>(watcher_PositionChanged);
            // a handler for status changes
            watcher.StatusChanged += new EventHandler<GeoPositionStatusChangedEventArgs>(watcher_StatusChanged);
            // start reading location data
            watcher.Start();
        }

        void watcher_StatusChanged(object sender, GeoPositionStatusChangedEventArgs e)
        {
            // you cannot change the UI in this function -> you have to call the UI thread
            Deployment.Current.Dispatcher.BeginInvoke(() => ChangeStatusUI(e));
        }

        void ChangeStatusUI(GeoPositionStatusChangedEventArgs e)
        {
            String statusType = "";
            if (e.Status == GeoPositionStatus.Disabled)
            {
                statusType = "GeoPositionStatus.Disabled";
            }
            if (e.Status == GeoPositionStatus.Initializing)
            {
                statusType = "GeoPositionStatus.Initializing";
            }
            if (e.Status == GeoPositionStatus.NoData)
            {
                statusType = "GeoPositionStatus.NoData";
            }
            if (e.Status == GeoPositionStatus.Ready)
            {
                statusType = "GeoPositionStatus.Ready";
            }
            textBlock8.Text = statusType;
        }

        void watcher_PositionChanged(object sender, GeoPositionChangedEventArgs<GeoCoordinate> e)
        {
            // you cannot change the UI in this function -> you have to call the UI thread
            Deployment.Current.Dispatcher.BeginInvoke(() => ChangeUI(e));
        }

        void ChangeUI(GeoPositionChangedEventArgs<GeoCoordinate> e)
        {
            textBlock1.Text = "Longitude: " + e.Position.Location.Longitude;
            textBlock2.Text = "Latitude: " + e.Position.Location.Latitude;
            textBlock3.Text = "Speed: " + e.Position.Location.Speed;
            textBlock4.Text = "Altitude: " + e.Position.Location.Altitude;
            textBlock5.Text = "Course: " + e.Position.Location.Course;
            textBlock6.Text = "Vertical Accuracy: " + e.Position.Location.VerticalAccuracy;
            textBlock7.Text = "Horizontal Accuracy: " + e.Position.Location.HorizontalAccuracy;
            textBlock8.Text = "location updated at " + System.DateTime.Now.ToString("HH:mm:ss");
        }

        // the stop button clicked ... stop the watcher
        private void button2_Click(object sender, RoutedEventArgs e)
        {
            if (watcher != null) { watcher.Stop(); }
            textBlock8.Text = "location reading stopped";
        }
    }
}
Labels:
Geocaching,
location,
programming,
tutorial,
windows phone 7,
wp7
Percom 2011 in Seattle, keynote
He made a case that end-users (individuals) should be able to bring together information about themselves and make use of it. In principle I like this idea of putting the individual in control and allowing them to exploit this data. For me this is however not a solution for data protection, as a certain proportion of individuals will sell their data - and in a free country there is probably very little society can do about it.

PS: Percom 2012 will be in Lugano with Silvia and Marc chairing the conference. And I have the honor to serve as program chair. See the web page for more information (will be available soon) or the photo of the call for papers here.
Labels:
data mining,
keynote,
percom,
social networks