Category: Uncategorized

Low-cost multi-touch surfaces using a Wiimote and IR light pens

Via Hack a day:

Johnny Lee’s back again with his Wiimote interactive whiteboard. Commercial versions of these things are expensive and heavy. His technique doesn’t even need a projector, just a computer, a Wiimote and a simple IR emitting pen. The pen is just a stylus with an infrared LED in the tip.

Johnny Lee is back again indeed: I posted about his method to track your fingers using a Wiimote earlier. This time he uses the Wiimote’s infrared camera to track light pens (pens that emit infrared light at the tip) on a surface to create an interactive whiteboard. It’s really nice that he can use any surface. You could use a projector in combination with an ordinary projection screen, a wall or a desk. If you don’t have a projector, you could turn any LCD display into a tablet surface.

Since the Wiimote can track up to four different points, these surfaces are also multi-touch. This means you can have multi-touch interaction on any projected image. It would be interesting to combine this with a steerable projector system.
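The Wiimote reports each IR blob as a coordinate on its camera sensor, so turning pen positions into screen positions is essentially a coordinate-mapping problem. As a rough illustration (my own simplified sketch, not Lee's actual code, which computes a full four-corner perspective calibration), a camera facing the surface head-on could be calibrated with just two corner points:

```python
# Simplified sketch: map Wiimote IR camera coordinates to screen coordinates.
# Assumes the camera views the surface head-on, so a linear (scale + offset)
# mapping from two calibration corners suffices; Johnny Lee's real software
# uses a full 4-point perspective (homography) calibration instead.

CAM_W, CAM_H = 1024, 768  # resolution of the Wiimote's IR camera

def make_mapper(cam_top_left, cam_bottom_right, screen_w, screen_h):
    """Build a camera->screen mapping from two calibration points."""
    (x0, y0), (x1, y1) = cam_top_left, cam_bottom_right
    sx = screen_w / (x1 - x0)
    sy = screen_h / (y1 - y0)
    def to_screen(cam_x, cam_y):
        return ((cam_x - x0) * sx, (cam_y - y0) * sy)
    return to_screen

# Hypothetical calibration: pen seen at (100, 50) when touching the top-left
# screen corner and at (900, 700) at the bottom-right, on a 1280x800 screen.
to_screen = make_mapper((100, 50), (900, 700), 1280, 800)
print(to_screen(500, 375))  # a point mid-surface → (640.0, 400.0)
```

Since the camera tracks up to four blobs at once, running each blob through the same mapping is what makes the surface multi-touch.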

[youtube:http://www.youtube.com/watch?v=5s5EvhHy7eQ]

The source code is available. I will definitely keep an eye on his Wii projects page.

Evaluating User Interface Systems Research

Alex pointed me to Evaluating User Interface Systems Research, an article by Dan R. Olsen Jr. that was published at UIST 2007 as part of a panel discussion.

Abstract:

The development of user interface systems has languished with the stability of desktop computing. Future systems, however, that are off-the-desktop, nomadic or physical in nature will involve new devices and new software systems for creating interactive applications. Simple usability testing is not adequate for evaluating complex systems. A set of criteria for evaluating new UI systems work is presented.

What I found interesting about this paper is that Olsen tries to address the problem of evaluating UI architectures and toolkits. We assume almost everything in HCI has to be validated by usability tests, while it doesn’t make sense to do so for toolkits and architectures. He proposes a set of alternative evaluation techniques. Olsen knows what he is talking about, as he created the impressive XWeb system.

The paper addresses the question “How should we evaluate new user interface systems so that true progress is being made?”. The author motivates this question by stating that UI systems research (e.g. toolkit or windowing system architecture and design) is still necessary if we want to move beyond the desktop. Lots of good research into input techniques needs better systems models. Multi-user, multi-touch systems are for example often forced into the standard mouse point model, but these systems produce inputs the size of a hand or finger and are used by multiple users at once. Multiple input points and multiple users are discarded when everything is compressed into the mouse/keyboard input model (although multiple users can usually be handled by using multiple mouse cursors). Systems based on one screen, one keyboard and one mouse are the new equivalent of command-line interfaces.

Olsen discusses a few benefits of a good UI systems architecture:

  • reduce development viscosity
  • least resistance to good solutions
  • lower skill barriers
  • power in common infrastructure
  • enabling scale

He then goes on to discuss the usability trap. According to Olsen, usability testing rests on three key assumptions, and toolkits and UI architectures rarely meet them. The first assumption is that users have minimal training (“walk up and use”). It is clear that any toolkit requires expertise to use. Secondly, to compare systems (or techniques) we assume that there is a task that is reasonably similar between the two systems (“standardized task”). This too is violated by toolkits and UI architectures: any problem that requires a system architecture or a toolkit is by nature complex and will have many possible paths to a solution, so meaningful comparisons between two tools for a realistic problem are confounded in many ways. Finally, we assume that it must be possible to complete any test in 1-2 hours (“scale of the problem”). Again, this is impossible with toolkits and UI architectures, since building a significant application using two different tools would be very costly.

The usability trap is the idea that good HCI research by definition requires usability testing. Olsen clearly shows where usability testing is not suitable and proposes an alternative method to evaluate these systems. He also discusses that searching for “fatal flaws” in a system is devastating for systems research. It is virtually impossible for a small team of researchers to recreate all of the capabilities of existing systems. The omission of an important feature is guaranteed, and the existence of a fatal flaw is a given.

First, Olsen states that we should clearly specify our research in the context of situations, tasks and users (“STU”). He then discusses a few criteria that are useful to evaluate a system innovation, and shows how to demonstrate that the system complies to these criteria. The ones he discusses are:

  • Importance
  • Problem not previously solved
  • Generality
  • Reduce solution viscosity
    • Flexibility
    • Expressive leverage
    • Expressive match
  • Empowering new design participants
  • Power in combination
    • Inductive combination
    • Simplifying interconnection
    • Ease of combination
  • Can it scale up?

While I won’t go through all of these criteria, I’ll give a few examples. For instance, importance can be demonstrated through the importance of the user population (“U”), the tasks (“T”) and the situations (“S”): how often do the target users find themselves in these situations, and do they need to perform these tasks in those situations?

Expressive match is an estimate of how close the means for expressing design choices are to the problem being solved. It’s a way to reduce the solution viscosity (to reduce the effort required to iterate on many possible solutions). For example, one can express a color in hexadecimal or one can pop up a color picker that displays the color space in various ways and shows the color currently selected. The color picker is a much closer match to the design problem.

Simplifying interconnection comes down to reducing the cost of introducing a new component from N to 1. Suppose we have N components working together. If every component must implement an interconnection with every other component, then component N+1 must include N interconnections with the other pieces. A good interconnection model reduces this cost to 1: every new component only needs to implement a standard interface, after which it is integrated with all other components. Olsen gives the example of pipes in UNIX.
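The N-to-1 argument can be made concrete with a toy sketch (my own illustration, not Olsen's): with pairwise adapters, component N+1 would need N new converters, while with a shared interface it needs exactly one implementation.

```python
# Toy illustration of Olsen's interconnection argument (my example, not his).
# Components that each implement one standard interface compose freely,
# so adding a new component costs 1 implementation instead of N adapters.

class Component:
    """The standard interface: consume a value, produce a value."""
    def process(self, data):
        raise NotImplementedError

class Doubler(Component):
    def process(self, data):
        return data * 2

class Incrementer(Component):
    def process(self, data):
        return data + 1

def pipeline(components, data):
    """UNIX-pipe style: each component feeds the next via the shared interface."""
    for c in components:
        data = c.process(data)
    return data

# Adding a new component means writing one class, not N adapters.
class Squarer(Component):
    def process(self, data):
        return data * data

print(pipeline([Doubler(), Incrementer(), Squarer()], 3))  # → 49
```

This is exactly the structure of UNIX pipes: every program only has to speak "stream of bytes", and any program then composes with every other.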

Ease of combination illustrates the importance of keeping interconnections simple and straightforward. As an example, Olsen refers to the simple HTTP protocol and REST architecture versus the overly complex SOAP protocol. This is no surprise, since Olsen based XWeb on the WWW architecture.

It might be interesting to introduce this paper for the course Evaluation of user interfaces to give another perspective on evaluation methods.

Disqus commenting system

Disqus

I am now using Disqus to handle the comments on my blog. Disqus is a new global blog commenting system. It has a lot of nice features, such as avatars, threaded conversations and global notifications. It allows you to track when a comment you posted got replied to and will in the future even support SMS notifications (hopefully this will work in Belgium as well). The Disqus forum page for this blog can be found at http://intraction.disqus.com/.

They have a simple WordPress plugin that you can install. I ticked the option “Replace all entries with no comments (including future posts)” to keep my existing comments, and use Disqus for posts without comments. When they have an import system, I will probably import all existing comments into the system as well.

Robert Scoble did an interview with Daniel Ha, the CEO of Disqus.

Making things talk

A few weeks ago I came across a blog post by Cati Vaucelle about Making Things Talk, the new book by Tom Igoe. The book deals with building smart, communicating things. It is built around specific projects and uses practical examples to explain different technologies. Tom works at NYU ITP (where Adam Greenfield also works).

Through a series of simple projects, this book teaches you how to get your creations to communicate with one another by forming networks of smart devices that carry on conversations with you and your environment. Whether you need to plug some sensors in your home to the Internet or create a device that can interact wirelessly with other creations, Making Things Talk explains exactly what you need.

The book seemed really useful to me to learn how to build smart things and prototype a ubicomp environment. Unfortunately I was never really exposed to electronics, so this might be a good way to catch up. I pointed Kris at the book, who ordered a copy afterwards. I had a quick look at it, and I must say it is well-written and fun to read. You need some hardware to really dive in though.

Making Things Talk

The author uses Processing and Arduino as the basic building blocks. I was pleasantly surprised that the programming environment works perfectly under Mac OS X and GNU/Linux (while it also supports Windows). I would also like to experiment with it at home, for instance to build a remote-controlled mood light. Apparently a Wii Nunchuk is also pretty popular for connecting to an Arduino, as it sports a 3-axis accelerometer, a joystick and two buttons for under $20 and uses the I2C protocol.
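The Nunchuk speaks plain I2C and reports its state as a 6-byte packet, so most of the work is decoding bits. The sketch below (in Python rather than Arduino C, purely to show the commonly documented bit layout; it assumes the 6 bytes have already been read and decrypted after initialization) pulls out the joystick, the 10-bit accelerometer axes and the two buttons:

```python
def decode_nunchuk(packet):
    """Decode a 6-byte Wii Nunchuk report (already read over I2C and decrypted).

    Byte layout as commonly documented:
      0: joystick X, 1: joystick Y,
      2-4: upper 8 bits of the 10-bit X/Y/Z accelerometer values,
      5: button bits plus the 2 low bits of each accelerometer axis.
    """
    b = packet
    return {
        "joy_x": b[0],
        "joy_y": b[1],
        "accel_x": (b[2] << 2) | ((b[5] >> 2) & 0x03),
        "accel_y": (b[3] << 2) | ((b[5] >> 4) & 0x03),
        "accel_z": (b[4] << 2) | ((b[5] >> 6) & 0x03),
        "button_z": not (b[5] & 0x01),  # button bits are inverted: 0 = pressed
        "button_c": not (b[5] & 0x02),
    }

# Hypothetical packet: joystick centered, mid-scale accelerometer, Z pressed.
state = decode_nunchuk([128, 128, 0x80, 0x80, 0x80, 0b00000010])
print(state["accel_x"], state["button_z"], state["button_c"])  # → 512 True False
```

On an actual Arduino the same decoding would sit behind `Wire` reads from the Nunchuk's I2C address; the bit-twiddling is identical.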

Adam Greenfield’s talk in Leuven

On Tuesday I went to the talk of Adam Greenfield in Leuven, organized by the Microsoft Research Chair on Intelligent Environments. The main topic of his talk was the social and ethical implications of ubiquitous computing. Adam started his talk by saying that there are a lot of ubicomps. He uses the term everyware to cover Weiserian ubicomp, pervasive computing, tangible media and ambient intelligence. Everyware is free from the baggage of Xerox PARC, free from politics and easy to understand. He defines it as distributed, networked information processing resources that are embedded in the environment. Adam sees everyware as inherently multi-disciplinary. He works at the NYU Interactive Telecommunications Program which hosts both people with an artistic and a technical background.

Everyware

Similar to Bell and Dourish, Greenfield claims that the first stage of ubicomp is already here. Technologies such as RFID and NFC together with devices such as the iPod and iPhone are already ubiquitous computing devices. However, the way we interact with them has not yet changed significantly. He says we already have robust information processing in our environment today (and that this can be seen in standards such as IPv6 and in devices such as Proliphix Network Thermostats). An interesting point he made was that when he started giving these talks 18 months ago, most of the examples he used were research prototypes, but now most examples are commercial products. Furthermore, adoption of these new technologies and products is unproblematic. In Hong Kong, 95% of the people between 16 and 65 used the new Octopus RFID metro pass system.

Concerning the consequences of everyware, he referred to the Panopticon, an 18th century prison that was optimized for surveillance. The guards could see the prisoners’ cells at any time, while the prisoners could never see the guards. The prisoners’ default state was to be monitored, so they acted accordingly. The same might happen with everyware that is watching us and sending out information: we may get used to it and just watch our steps more closely. Here is an example of such a prison (image courtesy of Wikipedia):

Presidio

Greenfield talked about the design of everyware, and referred to “design dissolving in behavior” by Naoto Fukasawa. It comes down to closely looking at people’s everyday behavior and trying to improve it with a solution that is as simple as possible. Design has to achieve an object without thought. People shouldn’t have to think about an object when using it. This also came up during the DIPSO workshop, and is a feature that was lacking from the i_AM table (How do you use it and what can you do with it?). As an example, Greenfield referred to the Octopus transit system again which allows you to quickly pass by the metro gate with an RFID-tagged metro pass in your pocket or bag since the reader’s range is large enough to read it as you walk by.

He continued with future issues and mentioned inadvertent, unknowing and unwilling use of everyware. The first issue can occur when you mistakenly publish your location to the whole world instead of to your closest friends. The cost of inadvertent use rises with everyware. Unknowing use might occur when a user walks over a sensor on the ground that recognizes when someone is walking on it, but the user does not know that he can later be identified since we all have a unique walking pattern. Finally, unwilling use can occur when people don’t want to use everyware technology but are forced to do so, e.g. you may need to use an RFID-enabled metro pass to get on the metro in Hong Kong. He also briefly discussed security (when all objects are connected one object might trigger behavior in another object, e.g. a failure could make your automatic garage door go up and down), and the digital burden of having to deal with your digital traces (Should I postulate this query? Can it be traced?).

Finally, Greenfield said it’s time to take everyware seriously and proposed 5 principles to design everyware:

  1. Be harmless
  2. Be self-disclosing
  3. Be conservative of face
  4. Be conservative of time
  5. Be deniable

Harmlessness refers to safety: everyware should always try to ensure users’ safety (physical, psychic, financial). It is graceful degradation taken further. The second principle refers to smart objects announcing their functionality. Greenfield proposes using icons to indicate data collection, support for gestural interfaces, or self-describing objects:

[Icons: information collected, gestural interface, self-describing object]

The third principle means that everyware should not unnecessarily embarrass, humiliate or shame its users. Society has a necessary membrane of protective hypocrisy, according to Greenfield. Examples include the strict categorisations used on social networking websites such as Flickr (e.g. friends or family), or what happened to Robert Scoble on Facebook a while ago.

Principle 4 refers to everyware not introducing undue complication into ordinary operations (this is what Weiser actually referred to as invisible computing, in my opinion). Everyware should not take over, and should assume that an adult, competent user knows what he or she is doing. Finally, users should always be given the ability to opt out (principle 5), with no penalty other than the inability to make use of the functionality that the ubicomp system offers. I believe that the last two principles cannot be realized by systems that make all the decisions for the user. An approach such as mixed-initiative interaction might be more appropriate.

Adam also talked about the cultural differences in safety and social status in Europe, the US and Asia. For one, in Asia it is common to have talking doors and elevators (which Lode also noticed on his trip to Japan), while this would drive most European people nuts. Everyware is consequently not universal, and should take into account the cultural conventions of the country or region where it is deployed.

All in all, I found the talk very interesting. Although most principles seem obvious, I have seen a fair share of ubicomp systems that violate them. I especially liked his proposal for everyware icons. People are coming up with unique names for Wii gestures which might also help in announcing how to interact with a system.

I really like the Intelligent Environments initiative, as it gave me the opportunity to see talks by Donald Norman, Boris De Ruyter (who replaced Emile Aarts) and now Adam Greenfield. The next speaker will be Kevin Warwick, so that promises to be interesting as well (have a look at his homepage or the Wikipedia article about him).