Friday, 5 August 2011

Complex circles, decision-making, expectations, plausible deniability

Google+ circles are well argued on a conceptual basis (e.g. the much talked about real world analogy), but it seems they do not work well for many of us. I thought I would share my limited observations in a blog post (if I had done a real study I would publish it at a top conference ;-)

To me, deciding what circles I need and where to put people in these circles is pretty hard – ok, I am in academia and this is not a typical environment (separation of work, hobby, friends...). Which of my co-workers are friends? Do I differentiate between students in a course and the ones who do a thesis with me? Who belongs to “family”, or do I need 5 or more categories to describe my family? It seems the number of circles grows as fast as the number of friends. It's probably just me who cannot discriminate between different parts of life.

The implication of the many circles is that I have to make many more decisions than on facebook. If I accept an invitation it is a yes/no/not-now decision on facebook (about 300-500 ms plus the time to click ;-) … much longer with circles. When I post it is again time for making decisions – whom to include and whom not to include.

The main issue with circles, for me, is the responsibility in sharing. In theory this is the great advantage – but in real life I think it is not (it is just a way of keeping the old way of communication alive for some more time – if I want to address specific people I can use email ;-). As others know that I have the choice to limit sharing to circles, the expectation is that I manage this well. With whom should I share my unhappiness about a too long faculty meeting – thinking in circles – probably no one (or only the people waiting for me). Who should know that I have read an interesting article about planting bamboo – again, in circles – probably only my wife, because she asked me about it.

In summary, this privilege (or responsibility) to be able to specify whom we share information with makes the posts much more predictable. I share with the HCI community the calls for papers, links to surveys for which we need participants, and the great papers we published; I share with the family the nice photo from our weekend hike; and I share with my students a link to a great article in the pervasive magazine they should read. Given my option to share with groups, sharing a photo of my daughter and me building a pneumatic lift with my students and colleagues would be inappropriate. However, I argue that sharing beyond circles – sharing things we would usually not share with this group – is what makes my facebook stream so much more exciting than the google+ stream. The comments of the people whom I would not have included in circle-based addressing are the ones which are often most interesting. From an information theoretical point of view the facebook stream has more entropy and carries massively more information as it is less predictable.
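To make the entropy point concrete, here is a toy calculation (the topic distributions and the class name are invented for illustration; nothing here was measured):

```csharp
using System;
using System.Linq;

// Toy illustration of the entropy argument (the probabilities are
// invented for this example, not measured data).
static class StreamEntropy
{
    // Shannon entropy in bits per post for a topic distribution.
    public static double BitsPerPost(double[] topicProbabilities)
    {
        return -topicProbabilities.Where(p => p > 0)
                                  .Sum(p => p * Math.Log(p, 2.0));
    }
}
```

A circle-addressed stream dominated by one expected topic, e.g. { 0.9, 0.05, 0.05 }, carries about 0.57 bits per post, while a mixed stream with four equally likely topics carries 2 bits per post – the less predictable stream simply carries more information.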

… and on facebook we (still) have an excuse (a sort of plausible deniability), as there is no real responsibility for the sender to limit the receivers – it is just a binary decision of whether it is OK to share or not.

Monday, 11 July 2011

cfp: IEEE Special Issue on Interaction Beyond the Keyboard

IEEE Computer will have a special issue on "Interaction beyond the Keyboard" ... and until 1 November 2011 you still have a chance to submit :-)

--- from the call (http://www.computer.org/portal/web/computingnow/cocfp4) ---
Final submissions due: 1 November 2011
Publication date: April 2012

IEEE Computer seeks submissions for an April 2012 special issue on interaction beyond the keyboard.

Interaction with computers has become an integral part of daily life for most people. When making a phone call, listening to music, taking a photo, getting money from an ATM, or driving a car, we operate computer systems with complex functionalities. As technologies progress, the proliferation of computing technologies increases, and simple user interfaces and ease of use are becoming key success factors for a wide range of products.

Although the keyboard and mouse are still the dominant user interfaces in home and office environments, with the massive increase in mobile device usage and the many new interaction technologies available, the way we interact with computers is becoming richer and more diverse. Touch-enabled surfaces, natural gestures, implicit interaction, and tangible user interfaces mark some of these trends.

The overall goal of interaction beyond the keyboard is to create natural and intuitive forms of human-computer interaction that make it easier for people to achieve their goals while using computers as tools.

For this special issue, we seek original research that describes groundbreaking new devices, methods, and approaches to human-computer interaction in a world of ubiquitous computer use. In particular, we're looking for exciting work that is concerned with the following topics:
  • interactive surfaces and tabletop computing;
  • mobile computing user interfaces and interaction while on the go;
  • tangible interaction and graspable user interfaces;
  • embedded user interfaces and embodied interaction;
  • natural interaction and gestures; and
  • user interfaces based on physiological sensors and actuators.
Articles should be understandable to a broad audience of computing science and engineering professionals. The writing should be practical and original, avoiding a focus on theory, mathematics, jargon, and abstract concepts. All manuscripts are subject to peer-review on both technical merit and relevance to Computer's readership. Accepted papers will be professionally edited for content and style.

-----


please see: http://www.computer.org/portal/web/computingnow/cocfp4

Tuesday, 5 July 2011

Percom 2012 - call for papers

Percom2012 - Call for papers as PDF or as text-file.

PerCom 2012
IEEE International Conference on Pervasive Computing and Communications
March 19 - 23, 2012, Lugano, Switzerland

CALL FOR PAPERS

IEEE PerCom, now in its 10th edition, has established itself as the premier annual scholarly venue in the areas of pervasive computing and communications. Pervasive computing and communications has evolved into an active area of research and development, due to the tremendous advances in a broad spectrum of technologies and topics including wireless networking, mobile and distributed computing, sensor systems, RFID technology, and the ubiquitous mobile phone.

PerCom 2012 will be held in Lugano, an international city and the crossroads and melting pot of European culture. PerCom 2012 will provide a leading edge, scholarly forum for researchers, engineers, and students alike to share their state-of-the art research and developmental work in the broad areas of pervasive computing and communications. The conference will feature a diverse mixture of interactive forums: core technical sessions of high quality cutting-edge research articles; targeted workshops on exciting topics; live demonstrations of pervasive computing in action; insightful keynote speeches; panel discussions from domain experts; and posters of budding ideas. Research contributions are solicited in all areas pertinent to pervasive computing and communications, including:

- Innovative pervasive computing applications
- Context modeling and reasoning
- Programming paradigms for pervasive systems
- Software evolution and maintenance in pervasive systems
- Middleware services and agent technologies
- Adaptive, autonomic and context-aware computing
- Mobile/Wireless computing systems and services in pervasive computing
- Energy-efficient and green pervasive computing
- Communication architectures for pervasive computing
- Ad hoc networks for pervasive communications
- Pervasive opportunistic communications and applications
- Enabling technologies for pervasive systems (e.g., wireless BAN, PAN)
- Positioning and tracking technologies
- Sensors and RFIDs in pervasive systems
- Multimodal sensing and context for pervasive applications
- Pervasive sensing, perception and semantic interpretation
- Smart devices and intelligent environments
- Trust, security and privacy issues in pervasive systems
- User interface, interaction, and persuasion
- Pervasive computing aspect of social network software
- Virtual immersive communications
- Wearable computers
- Standards and interfaces for pervasive computing environments
- Social and economic models for pervasive systems

Workshops and affiliated events:

Many workshops will be held in conjunction with the main conference. Workshop papers will be included and indexed in the IEEE digital libraries (Xplore), showing their affiliation with IEEE PerCom. As in the past, PerCom 2012 will also feature a PhD Forum, Demonstrations and a Work-in-Progress Session. Please see the website www.percom.org for details on current and past PerCom conferences.

Important Dates
Paper Registration: Sep 23, 2011
Paper Submission: Sep 26, 2011
Author Notification: Dec 20, 2011
Camera-ready Due: Jan 27, 2012

Submission Guidelines
Submitted papers must be unpublished and not considered elsewhere for publication. They must show significant relevance to pervasive computing and networking. Only electronic submissions in PDF format will be considered. Papers must be 9 pages or less, including references, figures and tables (at least 10pt font, 2-column format). The IEEE LaTeX and Microsoft Word templates, as well as formatting instructions, can be found at the conference web site. Submissions will undergo a rigorous review process handled by the Technical Program Committee. The best paper will receive the prestigious Mark Weiser Best Paper Award. Top selected papers will be considered for a special issue of the Elsevier journal of Pervasive and Mobile Computing (PMC)

For additional information, see www.percom.org for details on current and past PerCom conferences, or contact the PerCom 2012 organizing committee at percom2012@supsi.ch

Organizing Committee

General Co-Chairs
Silvia Giordano, SUPSI, CH
Marc Langheinrich, Univ. of Lugano, CH

Program Chair
Albrecht Schmidt, Univ. of Stuttgart, DE

Vice Program Co-Chairs
Jie Liu, Microsoft Research, USA
Georges Roussos, Univ. of London, UK
Alexander Varshavsky, AT&T Labs, USA

Workshops Co-Chairs
Pedro Marron, Univ. Duisburg-Essen, DE
Marius Portmann, Univ. of Queensland, AU

Steering Committee Chair
Marco Conti, IIT-CNR, IT

Friday, 1 July 2011

Our Article on Phones as Components of Future Appliances is published in IEEE Pervasive Magazine

In this paper we reflect on the opportunities that arise from using consumer devices, such as phones and mp3 players, as components for future devices. With this article, a new department on Innovations in Ubicomp Products has also been started. The article “Phones and MP3 Players as the Core Component in Future Appliances” [1] is also openly available at ComputingNow.

The rationale is:
  • developing a custom embedded computer is expensive
  • specific devices are not economical in small quantities
  • phones are becoming cheap (in small quantities a phone may be cheaper than buying a touch screen component for an embedded device)
  • development on phones has become easy and many developers are around
  • IO capabilities can be added to these devices (e.g. Project HiJack)
The main question is: why not use consumer devices as (potentially partly hidden) parts of the computing platform in new devices? There are examples but also some difficulties… read the article to get a more in-depth discussion.


[1] Albrecht Schmidt and Dominik Bial. 2011. Phones and MP3 Players as the Core Component in Future Appliances. IEEE Pervasive Computing 10, 2 (April 2011), 8-11. DOI=10.1109/MPRV.2011.31 http://dx.doi.org/10.1109/MPRV.2011.31 (also available in ComputingNow, download PDF)

Wednesday, 29 June 2011

Summer school in St Andrews, Teaching Context-Awareness

I had the privilege to teach a course on context-awareness [1] as part of the SICSA Summer School on Multimodal Systems for Digital Tourism. The summer school was directed by Aaron Quigley (University of St Andrews), Eva Hornecker (University of Strathclyde), Jon Oberlander (University of Edinburgh) and Stephen Brewster (University of Glasgow).


It was very exciting to discuss with the students ideas for novel digital devices to support tourists and come up with new concepts in this domain. Ideas ranged from interactive umbrellas (taking the concept described in [2] further) to digital souvenirs that ensure a lasting memory.

On Monday night Chris Speed gave an inspiring talk on ghosts, memories, and things, reflecting on history, the Internet of things, and how we perceive the world around us in a very thought-provoking way. He inspired us to think about the stories and memories that surround us and that are inherently linked to all things humans use. … it was the first ghost story in a long time that made a lot of sense :-)
When going back we saw a great example of a security system that is based on physical constraints... you can open it from the inside but not from the outside:


Aaron asked me to talk on context-awareness. I did the talk along the lines of a soon-to-appear chapter on www.interaction-design.org. To me, one of the – still remaining – fundamental challenges in HCI with context-aware systems is that the system as well as the human is adaptive. And as people often learn incredibly fast, the adaptation may be counterproductive; hence it is essential to take this into account. Have a look at my slides if you would like to learn more about context-awareness and HCI.


When we were there, we learned that St Andrews is the place to play golf – the Old Course is where you need to go. Looking more closely, it became clear that this is for others ;-) but there is an option for the rest of us. It is called The Ladies Putting Club St Andrews “Himalayas” – just walk in and play (2 pounds per person, and no need to book a year ahead). And if your friends don't play golf, you get away with the photos you take there, as it is only 5 meters from the Old Course.

[1] http://dl.dropbox.com/u/5633502/talk/context-aware-systems-004-print-small.pdf
[2] Sho Hashimoto, Takashi Matsumoto. The Internet Umbrella.  http://www.pileus.net/

Wednesday, 22 June 2011

Somnometer – A Social Alarm Clock – Users Wanted!


We have continued our work on the social alarm clock for Android phones. The Somnometer App can be used as a regular alarm clock but offers functions to:
(1) rate your sleep
(2) monitor your sleep duration (manually based on wake-up time)
(3) have graphical representations of the sleep quality and duration
(4) optionally share some of this information with your friends on facebook

Are you interested in trying this alarm clock application? Please have a look at the app home page (http://somnometer.hcilab.org) or download it from the android market.

We are looking for volunteers to participate in a study with this alarm clock application. If you are interested in the new functions and if you are an active facebook user, please contact us. There will also be a chance to take part in a comparative study using a different sleep monitoring device and the alarm application. Our email address for the project is: somnometer@hcilab.org.

Monday, 20 June 2011

Self-expression, Belonging, and Respect – Is Taking Risks Part of it?

Seeing someone walking up the leaning tower in Pisa with shoes that were clearly not designed for this situation, I wondered about the risks people take in life. We recently had a discussion (with other parents) on the risks kids take today in the digital world – putting up regrettable pictures on flickr, liking a politically incorrect site on facebook, or posting silly things on twitter.

I sometimes feel in these discussions that I want to put things into perspective… We do a lot of things that are not reasonable in order to express ourselves and to present an image to our peer group (e.g. tattoos and piercings are common and there are risks associated with them). We want to belong to a group and hence we do things that are expected by our peers or even to impress them (e.g. doing a skateboard trick without protection or skiing where it is not allowed). If you think hard, there are probably many things you remember where you took major risks (when you were young)… Yesterday night I saw on TV a documentary on the hippie movement in the 1960s/1970s. In comparison to the risks young people took in order to change the world (or just to be different and accepted in their peer group), the risks you take on the Internet seem very tame…

There is a further point we can learn from this: eventually society (and the law) will catch up, and some of the innovations will stay and change society. But some will not be accepted… People need to explore boundaries – otherwise progress is unlikely.

For many people who explored boundaries in the 1970s (ranging from drugs to violence – in ways we have agreed today are completely unacceptable) this has not hindered their careers. People generally see actions in context… Hence, having the “wrong” photo on facebook is probably not harming someone's career (but the time they spend on facebook rather than revising for exams probably is).

Friday, 17 June 2011

Gestural Input on a Touch Screen Steering Wheel in the Media

At CHI 2011 we presented initial work on how to use gestural input on a multi-touch steering wheel [1]; a 20-second video is also available [2]. The paper described a prototype – a steering wheel where the entire surface is a display and can recognize touch input. The study had two parts. In the first part we identified a natural gesture set for interaction, and in the second part we looked at how such interaction impacts the visual demand on the driver. The results in short: using gestural input on the steering wheel reduces the visual demand on the driver.

Shortly after the conference a journalist from Discovery News picked up the topic and did some interviews. This resulted in an article: "Touch-Screen Steering Wheel Keeps Eyes on Road" (Discovery News, 6.6.2011).

ACM Tech News mentioned the Discovery News article (ACM Tech News, June 8, 2011).
After this it found its way around and appeared more widely than expected :-). There was also a German article, "Touchscreen-Lenkrad: Wischen wechselt Radiosender" (sp-x, 14.6.2011).
[1] Tanja Döring, Dagmar Kern, Paul Marshall, Max Pfeiffer, Johannes Schöning, Volker Gruhn, and Albrecht Schmidt. 2011. Gestural interaction on the steering wheel: reducing the visual demand. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 483-492. DOI=10.1145/1978942.1979010 http://doi.acm.org/10.1145/1978942.1979010

[2] http://www.youtube.com/watch?v=R_32jOlQY7E Gestural Interaction on the Steering Wheel - Reducing the Visual Demand. chi2011madness Video.

Keynote at EICS 2011

I was invited to present a keynote at EICS 2011 in Pisa. In the talk "Engineering Interactive Ubiquitous Computing Systems" I motivated why user interface engineering approaches are well suited for creating user interfaces in the context of embedded and ubiquitous computing systems. Looking at desktop applications and mobile devices, I think the quality and ease of use is high – compared to 20 years back, or compared to embedded and ubiquitous computing systems. I think a lot of user interface research, and in particular engineering approaches for interactive systems, could have a great impact on real-world systems beyond the desktop or phone.


As one example of an engineering process for embedded user interfaces, I shared our experience with developing Gazemarks [1]. Gazemarks is a technology based on eye-gaze tracking that reduces the time required for attention switching. It eases tasks that require the user to move attention repeatedly between 2 or more displays, or between the real world and a set of digital displays. Application domains could be looking at the street and at the satnav while driving, or switching attention between a screen in an operating theatre and the patient.

When investigating the development from embedded user interfaces to interactive ubiquitous computing systems, further issues come up. As we see in the PD-Net project [2] with public displays, the concerns of the stakeholders play a much bigger role than in traditional systems, and finding an appropriate business model is very close to the user interface development process.

In the final part of the talk I shared a future vision of how technology may change the way we live. In the not so distant future we could imagine that the traditional boundaries of perception (mainly temporal and spatial) will fall [3]. This would create an entirely new experience, where "perception beyond the here and now" fundamentally changes the way we see and experience the world. The slides of the keynote are available as PDF.

From the research we did over the last 15 years I picked some lessons learned:
  • Novelty may be about the values/ethics
  • Implement it and try it out!
  • 20% who like the UI/system are a large market
  • Humans are smart and adaptive
  • Design for creative users
[1] Dagmar Kern, Paul Marshall, and Albrecht Schmidt. 2010. Gazemarks: gaze-based visual placeholders to ease attention switching. In Proceedings of the 28th international conference on Human factors in computing systems (CHI '10). ACM, New York, NY, USA, 2093-2102. DOI=10.1145/1753326.1753646 http://doi.acm.org/10.1145/1753326.1753646

[2] Pdnet project homepage: http://pd-net.org/

[3] Albrecht Schmidt, Marc Langheinrich, and Kritian Kersting. 2011. Perception beyond the Here and Now. Computer 44, 2 (February 2011), 86-88. DOI=10.1109/MC.2011.54 http://dx.doi.org/10.1109/MC.2011.54

Tuesday, 17 May 2011

CHI 2011 in Vancouver, Keynote and Papers

In the opening keynote Howard Rheingold proclaimed that we are in a time for learners, and he outlined the possibilities that arise from the interactive media that are available to us. In particular, he highlighted the fact that people share and link content, and to him this is at the heart of learning. Learning as a joint process, where contributions by students – in different forms of media – become a major resource, was one example.

I liked best his analogy on how little innovation there is in teaching: "If you take a warrior from 1000 years ago to a battlefield today – they will die – quickly. If you take a surgeon from 1000 years ago and put them in a modern hospital – they will be lost. If you take a professor from 1000 years ago and put them in a university today, they will know exactly what to do." I am not sure about the 1000 years, but with 100 years the story works just as well. In essence he argued that there is a lot of potential for new approaches to teaching and learning.

After initially agreeing, I gave it some more thought: perhaps the little change in learning and teaching shows that learning is very fundamental and technology is overrated in this domain? What is more effective than a teacher discussing an exciting topic face to face with a small group of students – perhaps even while on a walk? It reminds me of things I read about the Greek teachers and their practices several thousand years ago … and it makes me look forward to our summer school in the Italian Alps (http://www.ferienakademie.de/).

I found the SIGCHI Lifetime Achievement Award lectures very exciting and educational. Especially the talk by Larry Tesler provided deep insight into how innovation works in user interfaces - beyond the academic environment. He talked about the "invention" of cut and paste - very enjoyable!

This year we had a number of papers describing our research in CHI:
  •  Elba reported on the field study in Panama using mobile phones to enhance teaching and learning [1]
  • Ali presented work on how to increase the connectedness between people by simple means of iconic communication in the context of a sports game [2]
  • Tanja showed how touch and gestural input on a steering wheel can reduce the visual distraction for a driver [3], and
  • Gilbert (from LMU Munich) presented work on interaction with cylindrical screens [4].

The most inspiring, and at the same time the most controversial, paper for me was PossessedHand by Jun Rekimoto et al. [5]. They reported their results on using electrical stimulation in order to move the fingers of a hand.

Bill Buxton showed his collection of input and output devices throughout the conference (the Buxton Collection). Seeing the collection physically is really exciting, but for all who did not have the chance, there is a comprehensive online version with photos and details available at Microsoft Research: http://research.microsoft.com/en-us/um/people/bibuxton/buxtoncollection/

[1] Elba del Carmen Valderrama Bahamondez, Christian Winkler, and Albrecht Schmidt. 2011. Utilizing multimedia capabilities of mobile phones to support teaching in schools in rural panama. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 935-944. DOI=10.1145/1978942.1979081 http://doi.acm.org/10.1145/1978942.1979081

[2] Alireza Sahami Shirazi, Michael Rohs, Robert Schleicher, Sven Kratz, Alexander Müller, and Albrecht Schmidt. 2011. Real-time nonverbal opinion sharing through mobile phones during sports events. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 307-310. DOI=10.1145/1978942.1978985 http://doi.acm.org/10.1145/1978942.1978985

[3] Tanja Döring, Dagmar Kern, Paul Marshall, Max Pfeiffer, Johannes Schöning, Volker Gruhn, and Albrecht Schmidt. 2011. Gestural interaction on the steering wheel: reducing the visual demand. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 483-492. DOI=10.1145/1978942.1979010 http://doi.acm.org/10.1145/1978942.1979010

[4] Gilbert Beyer, Florian Alt, Jörg Müller, Albrecht Schmidt, Karsten Isakovic, Stefan Klose, Manuel Schiewe, and Ivo Haulsen. 2011. Audience behavior around large interactive cylindrical screens. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 1021-1030. DOI=10.1145/1978942.1979095 http://doi.acm.org/10.1145/1978942.1979095

[5] Emi Tamaki, Takashi Miyaki, and Jun Rekimoto. 2011. PossessedHand: techniques for controlling human hands using electrical muscles stimuli. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 543-552. DOI=10.1145/1978942.1979018 http://doi.acm.org/10.1145/1978942.1979018

Sunday, 17 April 2011

Floor for activity recognition

At the Microsoft Software Summit in Paris, Patrick Baudisch showed interesting photos of their DIY activity in creating a glass floor for tracking and activity recognition. With a fairly large glass pane in the floor they have created an interesting environment… I am sure there will be interesting things coming out of this installation.

Some years back, in 2002 (and looking at the photos and the amount of hair I still had, this seems long ago), in Lancaster we also looked into what to do with floors (and we were into DIY as well). We also considered arrangements with a sensing floor and tables and furniture on top. As you can see from Kristof, Hans and me in the photo, it was a fun project.

The positive point in using load sensing is that you can track unobtrusively, and at potentially large scale, with little instrumentation. We even considered the possibility of putting a house on 4 load cells and doing activity recognition based on this. We never got around to building the house ;-) The problem with load sensing is that you can only track one moving object/subject at a time.

Looking at the signature of the measured load and doing some signal processing, we could detect events – unobtrusively and cheaply – but only single events.
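The signal-processing step can be sketched in a few lines (a minimal reconstruction of the idea, not the original system's code; the class name, units, and threshold are illustrative): a discrete event is simply a step in the measured weight that exceeds a noise threshold.

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of load-signature event detection (illustrative,
// not the original system's code): report an event whenever the
// measured load changes by more than a noise threshold between
// consecutive samples.
static class LoadEvents
{
    public static List<int> Detect(double[] loadKg, double thresholdKg)
    {
        var events = new List<int>();
        for (int i = 1; i < loadKg.Length; i++)
        {
            // a large step up (something arrives) or down (something leaves)
            if (Math.Abs(loadKg[i] - loadKg[i - 1]) > thresholdKg)
                events.Add(i);
        }
        return events;
    }
}
```

For a signal such as { 50.0, 50.1, 65.2, 65.1 } kg with a 5 kg threshold, a single event is reported at index 2 – and, as noted above, overlapping movements of several subjects cannot be separated this way.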

Interested in more details? Have a look at the publications on load sensing [1], on the interaction [2], and at a patent [3] describing the basic technology.

[1] Schmidt, A., Strohbach, M., Laerhoven, K. v., Friday, A., and Gellersen, H. 2002. Context Acquisition Based on Load Sensing. In Proceedings of the 4th international Conference on Ubiquitous Computing (Göteborg, Sweden, September 29 - October 01, 2002). Springer-LNCS, London, 333-350.

[2] Schmidt, A.; Strohbach, M.; van Laerhoven, K. & Hans-W., G. 2003. Ubiquitous Interaction -  Using Surfaces in Everyday Environments as Pointing Devices, Universal Access Theoretical Perspectives, Practice, and Experience (UI4ALL 2003), Springer LNCS, 263-279.

[3] Schmidt, A., Strohbach, M., Van Laerhoven, K., Friday, A., Gellersen, H-W., Kubach, U.; Context acquisition based on load sensing. US Patent 7434459. US Patent Issued on October 14, SAP AG (DE), 2008

Monday, 11 April 2011

WP7 Tutorial - part 5: Orientation and Acceleration - X,Y,Z

Detecting gestures, orientation, and movement can be realized with the accelerometer. The accelerometer is a sensor that measures acceleration in 3 dimensions (X, Y, and Z). If the device is not moved, the measured accelerations are the gravity forces in each direction. If the device is accelerated, the measured results are a combination of that acceleration and gravity.

The accelerometer data can be accessed via the AccelerometerReadingEventArgs class. The class has values for the X, Y, and Z axes. The values are of type double, between -2 and 2, and relate to acceleration "for each axis in gravitational units" – 1 is the gravitational force of the earth. See: http://msdn.microsoft.com/en-us/library/ff431744(v=vs.92).aspx and http://msdn.microsoft.com/en-us/library/ff431810(v=vs.92).aspx or page 80ff, C. Petzold, Programming Windows Phone 7.

A typical exercise for understanding the accelerometer is to create a bubble level (a tool to measure whether something is horizontal or vertical – e.g. for hanging pictures on the wall). You probably want to freshen up on arctan2 – at least I needed to ;-)
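The arctan2 computation for a bubble level can be sketched as follows (a minimal sketch; the class name and sign conventions are mine, not part of the WP7 API – it assumes the device is held still, so the reading is gravity only):

```csharp
using System;

// Hypothetical helper: converts a raw accelerometer reading
// (in gravitational units) into tilt angles for a bubble level.
static class TiltMath
{
    // Tilt of the device's X axis against the horizontal plane, in degrees.
    public static double PitchDegrees(double x, double y, double z)
    {
        return Math.Atan2(x, Math.Sqrt(y * y + z * z)) * 180.0 / Math.PI;
    }

    // Tilt of the device's Y axis against the horizontal plane, in degrees.
    public static double RollDegrees(double x, double y, double z)
    {
        return Math.Atan2(y, Math.Sqrt(x * x + z * z)) * 180.0 / Math.PI;
    }
}
```

With the device lying flat on a table (X=0, Y=0, Z=-1 in the WP7 convention) both angles are 0; held upright in portrait orientation (Y=-1), RollDegrees returns -90. These values could be computed from e.X, e.Y, and e.Z in the event handler.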

See below for the C# example reading out the accelerometer on a Windows Phone 7. You can also download the accelerometer project directory as a single ZIP file.

using System;
using System.Windows;
using Microsoft.Phone.Controls;
using Microsoft.Devices.Sensors;

// A simple example to read the accelerometer and display the values
// In order to make it work you have to add the reference to
// Microsoft.Devices.Sensors to your project. To do this, right-click
// in the Solution Explorer on References, then choose Add Reference
// and in the dialog select Microsoft.Devices.Sensors
// Albrecht Schmidt, University of Stuttgart

// for a more comprehensive example see: 
// http://msdn.microsoft.com/en-us/library/ff431810(v=vs.92).aspx
// http://msdn.microsoft.com/en-us/library/ff431744(v=vs.92).aspx
// and page 80ff, C. Petzold, Programming Windows Phone 7

namespace Accl_X_Y_Z
{
    public partial class MainPage : PhoneApplicationPage
    {
        Accelerometer accelerometer;
    
        public MainPage()
        {
            InitializeComponent();
            // create a new instance
            accelerometer = new Accelerometer();
            // register a callback function for when values change
            accelerometer.ReadingChanged += new EventHandler<AccelerometerReadingEventArgs>(accelerometer_ReadingChanged);
            // start the accelerometer
            accelerometer.Start();
        }

        void accelerometer_ReadingChanged(object sender, AccelerometerReadingEventArgs e)
        {
            // required as from here the textBlocks cannot be accessed
            Deployment.Current.Dispatcher.BeginInvoke(() => ChangeUI(e));
        }
        
        void ChangeUI(AccelerometerReadingEventArgs e)
        {
            // show the values on the screen
            textBlock1.Text = "X: " + e.X.ToString("0.000");
            textBlock2.Text = "Y: " + e.Y.ToString("0.000");
            textBlock3.Text = "Z: " + e.Z.ToString("0.000");
        }
    }
}

WP7 Tutorial - part 4: Storing Data on the Phone

If you want to save high scores or preferences you need persistent memory on the phone. On a traditional computer you would create a file and store your information in it; another option on a Windows PC would be to store such information in the registry. For security reasons there is no API to access the file system and there is no global persistent memory across applications on WP7.

In general there are two ways to store data: (1) in an application-specific storage (isolated storage) on the phone or (2) remotely on the internet (or, to use another buzzword, "in the cloud").
In this example the use of the phone's isolated storage API is demonstrated. It is shown how to store and retrieve name-value pairs on the phone (for people who have programmed Java ME this is conceptually similar to the record store).

For more details see: http://msdn.microsoft.com/en-us/library/cc221360(v=VS.95).aspx and page 126ff, C. Petzold, Programming Windows Phone 7. It is also possible to create an isolated storage for files, see: http://msdn.microsoft.com/en-us/library/system.io.isolatedstorage.isolatedstoragefile(v=VS.95).aspx

See below the C# example using the local storage on a Windows Phone 7. You can also download the IsolatedStorageSettings project directory in a single ZIP-file.

using System;
using System.Windows;
using Microsoft.Phone.Controls;
using System.IO.IsolatedStorage;

// example of how to save to and load from the isolated application storage 
// this helps to create persistent storage within a single application
// in this example it is shown how to do it for a string 
// Albrecht Schmidt, University of Stuttgart  


// For more details see:
// http://msdn.microsoft.com/en-us/library/cc221360(v=VS.95).aspx
// page 126ff, C. Petzold, Programming Windows Phone 7

// storing files/directory structures see:
// http://msdn.microsoft.com/en-us/library/system.io.isolatedstorage.isolatedstoragefile(v=VS.95).aspx

namespace PhoneStorage
{
    public partial class MainPage : PhoneApplicationPage
    {
        // Constructor
        public MainPage()
        {
            InitializeComponent();
        }

        #region Save and Load Parameters from the Application Storage
        void saveToAppStorage(String ParameterName, String ParameterValue)
        {
            // use mySettings to access the Apps Storage
            IsolatedStorageSettings mySettings = IsolatedStorageSettings.ApplicationSettings;

            // check if the parameter is already stored
            if (mySettings.Contains(ParameterName))
            {
                // if parameter exists write the new value
                mySettings[ParameterName] = ParameterValue;
            }
            else
            {
                // if parameter does not exist create it
                mySettings.Add(ParameterName, ParameterValue);
            }
        }

        String loadFromAppStorage(String ParameterName)
        {
            String returnValue = "_notSet_";
            // use mySettings to access the Apps Storage
            IsolatedStorageSettings mySettings = IsolatedStorageSettings.ApplicationSettings;

            // check if the parameter exists
            if (mySettings.Contains(ParameterName))
            {
                // if the parameter exists read its value
                mySettings.TryGetValue<String>(ParameterName, out returnValue);
                // alternatively the following statement can be used:
                // returnValue = (String)mySettings[ParameterName];
            }

            return returnValue;
        }
        #endregion

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            saveToAppStorage("myIdentifier1", "Last used @ " + System.DateTime.Now.ToString("HH:mm:ss"));
            textBox1.Text = "saved...";
        }

        private void button2_Click(object sender, RoutedEventArgs e)
        {
            textBox1.Text = loadFromAppStorage("myIdentifier1");
        }

    }
}

Sunday, 27 March 2011

Online German Language Corpus, UCREL Summer School

At the University of Leipzig a German language corpus is available (Projekt Deutscher Wortschatz). The database can be queried from different programming languages and access is also possible via a web service. Requests can ask for co-occurrences of words, base forms, words that often occur to the right and to the left of a word, word frequency, synonyms and much more. If you develop text input systems this may be a very useful resource; see the web services overview page (with links to downloads), the list of web-service requests offered, or have a look at some PHP examples.

You can try the service interactively at http://wortschatz.uni-leipzig.de/abfrage/. See the pictures for an example query on the term Internet. They also feature a German-English dictionary.

Since I shared an office at Lancaster University with Paul Rayson from UCREL (University Centre for Computer Corpus Research on Language) I find corpus linguistics an interesting topic. By the way, UCREL runs a Summer School in Corpus Linguistics from 13 to 15 July 2011 - would love to go there...

Friday, 25 March 2011

WP7 Tutorial - part 3: Using Location

In this example the use of the location API is demonstrated. The API is a high level interface to geo location. How the location is determined (e.g. GPS, GSM cell information) is of no concern to the developer.

The basic approach is to create an instance of GeoCoordinateWatcher and register two callback functions: one for status changes and one for location changes. The program demonstrates how these callbacks are set up and how the user interface is updated from within them with the received information. If the status changes, the program checks what the current status is and shows it in the status line (textBlock8.Text). If the position changes, the new position (Position.Location.Longitude, Position.Location.Latitude) and additional information such as Speed, Altitude, Course, and Accuracy are shown.

As an exercise you can build an application that shows you how close you are to a given target. In two input fields you enter the longitude and latitude of the destination (e.g. a geocache location). Then you can calculate the distance from the current position to the target location and visualize or sonify it.
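The distance calculation for this exercise can be sketched with the haversine formula. The class and method names below are my own for illustration (not part of the WP7 API); note that GeoCoordinate also provides a GetDistanceTo method that returns the distance between two coordinates in meters.

```csharp
using System;

// Sketch for the exercise: great-circle distance between the current
// position and a target, using the haversine formula.
// Coordinates are in degrees; the result is in meters.
class GeoMath
{
    const double EarthRadiusMeters = 6371000.0; // mean earth radius

    static double ToRadians(double degrees)
    {
        return degrees * Math.PI / 180.0;
    }

    public static double DistanceMeters(double lat1, double lon1,
                                        double lat2, double lon2)
    {
        double dLat = ToRadians(lat2 - lat1);
        double dLon = ToRadians(lon2 - lon1);
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
                 + Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2))
                 * Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return 2 * EarthRadiusMeters * Math.Asin(Math.Sqrt(a));
    }
}
```

In the PositionChanged handler you would pass e.Position.Location.Latitude and Longitude together with the values from the two input fields.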

There is another example (Geo coordinate watcher) of how to use this API on the Microsoft MSDN website. C. Petzold's book also has a good example, see page 91ff.

See below the C# example using geo location on a Windows Phone 7. You can also download the geolocation project directory in a single ZIP-file.

using System;
using System.Collections.Generic;
using System.Windows;
using Microsoft.Phone.Controls;
using System.Device;
using System.Device.Location;

// the example shows the basic functionality of the location device
// you need to add in the solution explorer a reference to System.Device
// right click on References in the solution explorer, click Add Reference, and then
// System.Device
// Albrecht Schmidt, University of Stuttgart

// for a more comprehensive example see:
// http://msdn.microsoft.com/en-us/library/system.device.location.geocoordinatewatcher.aspx
// http://msdn.microsoft.com/en-us/library/ff431744(v=vs.92).aspx
// and page 91ff, C. Petzold, Programming Windows Phone 7

namespace Geo_Location
{
    public partial class MainPage : PhoneApplicationPage
    {
        GeoCoordinateWatcher watcher;

        // Constructor
        public MainPage()
        {
            InitializeComponent();
        }

        // the initialize and start button is pressed
        private void button1_Click(object sender, RoutedEventArgs e)
        {
            // initialize the geo watcher with default accuracy (battery saving)
            // use GeoPositionAccuracy.High for higher accuracy
            watcher = new GeoCoordinateWatcher(GeoPositionAccuracy.Default);
            // set the movement threshold - as distance in meters - default is 0
            watcher.MovementThreshold = 10;

            // add a handler that is called when the position changes by more than MovementThreshold
            watcher.PositionChanged += new EventHandler<GeoPositionChangedEventArgs<GeoCoordinate>>(watcher_PositionChanged);
            // a handler for status changes
            watcher.StatusChanged += new EventHandler<GeoPositionStatusChangedEventArgs>(watcher_StatusChanged);

            // start reading location data
            watcher.Start();
        }

        void watcher_StatusChanged(object sender, GeoPositionStatusChangedEventArgs e)
        {
            // you cannot change the UI in this function -> dispatch to the UI thread
            Deployment.Current.Dispatcher.BeginInvoke(() => ChangeStatusUI(e));
        }

        void ChangeStatusUI(GeoPositionStatusChangedEventArgs e)
        {
            String statusType = "";
            if (e.Status == GeoPositionStatus.Disabled)
            {
                statusType = "GeoPositionStatus.Disabled";
            }
            else if (e.Status == GeoPositionStatus.Initializing)
            {
                statusType = "GeoPositionStatus.Initializing";
            }
            else if (e.Status == GeoPositionStatus.NoData)
            {
                statusType = "GeoPositionStatus.NoData";
            }
            else if (e.Status == GeoPositionStatus.Ready)
            {
                statusType = "GeoPositionStatus.Ready";
            }
            textBlock8.Text = statusType;
        }

        void watcher_PositionChanged(object sender, GeoPositionChangedEventArgs<GeoCoordinate> e)
        {
            // you cannot change the UI in this function -> dispatch to the UI thread
            Deployment.Current.Dispatcher.BeginInvoke(() => ChangeUI(e));
        }

        void ChangeUI(GeoPositionChangedEventArgs<GeoCoordinate> e)
        {
            textBlock1.Text = "Longitude: " + e.Position.Location.Longitude;
            textBlock2.Text = "Latitude: " + e.Position.Location.Latitude;
            textBlock3.Text = "Speed: " + e.Position.Location.Speed;
            textBlock4.Text = "Altitude: " + e.Position.Location.Altitude;
            textBlock5.Text = "Course: " + e.Position.Location.Course;
            textBlock6.Text = "Vertical Accuracy: " + e.Position.Location.VerticalAccuracy;
            textBlock7.Text = "Horizontal Accuracy: " + e.Position.Location.HorizontalAccuracy;
            textBlock8.Text = "location updated at " + System.DateTime.Now.ToString("HH:mm:ss");
        }

        // the stop button was clicked ... stop the watcher
        private void button2_Click(object sender, RoutedEventArgs e)
        {
            if (watcher != null) { watcher.Stop(); }
            textBlock8.Text = "location reading stopped";
        }
    }
}

Percom 2011 in Seattle, keynote

This year's Percom conference was held in Seattle and offered an exciting and diverse program - have a look at the program to see for yourself. The two keynotes both looked at the implications of pervasive computing and communication - especially when thinking about what data is collected and how that data may be used.

Alex Pentland from MIT talked about their work on reality mining. The work looks at how one can capture interactions between people and between people and their environment, and how such information can be exploited. One example he gave looked at the effect of face-to-face communication on the performance of workers. The basic insights of this work are thrilling, and thinking it through it becomes obvious that we are at the start of a new era of mankind. However, I did not find his argument convincing that we can contain and control such information, and I think it may be dangerous to tell decision makers in politics that we can provide solutions. I see no way (that does not restrict people's freedom massively or reduce productivity massively) to control the information that will become available through pervasive computing… and all the solutions I have heard either will plainly not work or would require a global agreement on data protection laws…

The keynote on the second day was by Derek McAuley from Nottingham University. One of his topics was product history and how the availability of a product's history has the potential to increase the value of products. I think this is a very powerful concept and we will see it commercially exploited in the near future.
Furthermore Derek discussed interesting issues that come up with crowd sourcing and participatory sensing. One central point is where the data is held and who controls the collected data. Especially in the context of cloud services this becomes transparent and important at the same time. With regard to the implementation it does not matter; from a legal perspective, however, it may make a serious difference whether your cloud service runs in Germany, the US, or on a ship somewhere in the Atlantic. An example he gave is navigation systems in cars which have a back channel. The cars send back information about their speed and whereabouts, and this information is used to predict the state of the roads, which in turn improves the navigation. He raised the question of what happens if this information is held somewhere where legislation has no control. I think this is going to happen and there is no real approach against it…
He made a case that end-users (individuals) should be able to bring together information about themselves and make use of it. In principle I like this idea of putting individuals in control and allowing them to exploit their data. For me, however, this is not a solution for data protection, as a certain share of individuals will sell their data - and in a free country there is probably very little society can do about that.

In summary - we are heading towards an exciting future!

PS: Percom 2012 will be in Lugano with Silvia and Marc chairing the conference. And I have the honor to serve as program chair. See the web page for more information (will be available soon) or the photo of the call for papers here.