Tag: ubicomp

Object recognition with video phones

Andrea Gaggioli blogged about the Pocket Supercomputer by Accenture. The original article was published by NewScientistTech:

Live video footage is fed from the handset to a central server, which rapidly matches on-screen objects to images previously entered into a database. The server then finds relevant information and sends it back to the user (…) The central server uses an algorithm called the Scale-Invariant Feature Transform to match objects. The algorithm uses hundreds or thousands of reference points, corresponding to physical features such as edges, corners or lettering, to find a match. The process works no matter how the object is oriented, but objects must first be carefully imaged and entered into the central database.

[youtube:http://www.youtube.com/watch?v=PkqUjQj8H3M]

This is certainly a step forward compared to RFID and 2D barcodes such as Semacodes or QR codes. It reminded me of Atom tags, which could recognize existing logos and also used server-side shape analysis and pattern recognition.

[youtube:http://www.youtube.com/watch?v=B_7Yy-zQiRo]

Unlike these two techniques, the existing 2D barcodes are not human-readable.

VR: after the hype

Lode wrote in his last post (among other things) about the fact that the hype around Virtual Reality is over. In my opinion, this doesn’t have to be a bad thing. Maybe a fresh (and more realistic?) view on virtual reality and its possible uses can help.

As a comparison, look at the original promise of artificial intelligence (also called strong AI) versus the current, more realistic view (weak AI). Just as weak AI revived AI’s fortunes, Yvonne Rogers believes that ubicomp research which enables people to become smart and proactive, instead of focusing on a smart environment as in Weiser’s original vision, can help bring success to the field.

Speaking of ubiquitous computing, I think that research in ubiquitous computing and more natural forms of interaction can benefit in part from previous work in Virtual Reality. Virtual Reality provided a way to interact with a three-dimensional world (albeit a virtual one) instead of using the traditional keyboard and mouse, while one of the goals of ubiquitous computing is to interact in a natural way with the real world (which is of course three-dimensional).

Lode also referred to the Reality-Virtuality (RV) Continuum, which I hadn’t heard of before. It will certainly be interesting to have a look at. I think it all depends on how you define things. Mark Weiser, for example, referred to ubiquitous computing as the opposite of Virtual Reality, namely embodied virtuality.

Ubicomp 2007: day two and three

I finally found some time to go through my notes from Ubicomp 2007. Since I already blogged about the first day, I’m going to start this overview on Tuesday. This is not a complete overview, but just a list of talks that I found interesting.

The first session had a nice talk titled “My Roomba is Rambo”, which studied why people get emotional about their appliances, and why we should care. This is similar to what Philips did with the iCat. Apparently, people seemed to forgive their appliances when they made mistakes, provided that they were emotionally attached to them (e.g. helping a Roomba that got stuck).

The next session, on location, featured an interesting talk by David Dearman on a method to predict location errors. They evaluated their system by letting people locate posters as fast as possible, while varying the location error and using different algorithms to estimate the error, including their own. There were a lot of talks on security, including one in this session on security by spatial reference by Rene Mayrhofer. He made an interesting point: the methods of security and authentication we use today (e.g. passwords) are impractical for ubicomp environments.

Shwetak Patel presented his work on Tuesday as well, and received both the best paper and best talk awards. His idea was very innovative: checking for noise on the power lines in a house to detect activity (e.g. opening the microwave would turn on a light, which could be detected). The system is quite accurate, although portable devices could be more difficult to support since a training period is required. The same session had another security talk, on shaking two devices together to generate a unique key for authentication. This illustrates that there were definitely a lot of creative ideas at Ubicomp.

Tuesday evening we had the conference dinner up in the mountains, which was quite nice (with a traditional Austrian band that played all kinds of music, including Tom Jones), although it was very cold up there.

Wednesday started with a talk by Tim Kindberg (of Cooltown fame) titled “Merolyn the Phone: a study of Bluetooth naming practices”. He started off with a slide that showed a list of names of Bluetooth phones detected in the conference room. Apparently, the people who featured in his study were more creative than we were (I was guilty as well, with the not very original name “Jo’s K750i”). The story behind the name Merolyn the Phone was pretty funny as well.

Next was a talk by Yvonne Rogers, whose very interesting article I read last year (after it was mentioned on Fabien’s blog). The talk was basically about how ubicomp technology cannot be evaluated in a lab setting, and needs real-world testing.

Another interesting talk in this session discussed the Whereabouts clock, which reminded me vaguely of the AmbientClock. In the session on privacy Karen P. Tang (if I’m not mistaken) presented privacy controls in IMBuddy, a contextual instant messenger. They allowed people to disclose information at different levels of granularity and get notified when someone queried their presence.

In my opinion, the best presentation was given by Scott Davidoff, who presented speed dating as a method to quickly evaluate different design decisions. His slides are online at Slideshare.net.

The final talk by David Molyneaux showcased an impressive steerable projector system. The innovative part (according to my understanding) was that objects stored and controlled their data (e.g. sensor readings) and metadata (e.g. 3D model) themselves, and decided when to send this to the projector. For example, when two objects with the same appearance are in a room, and one is moving and the other one isn’t (detected with accelerometers), they notify the projector which can then distinguish between them. When an object’s geometry is changed (e.g. when a book is opened), it detects this through sensors and accordingly sends its updated 3D model to the projector.

All in all, the conference was very interesting as was the workshop.

Ubicomp 2007 first impressions

I’m at Ubicomp 2007 in Innsbruck at the moment. On Sunday, I presented our paper “Making Bits and Atoms Talk Today” at the DIPSO 2007 workshop. The workshop was great, with a lot of interesting discussions.

Today we had a session on Health, one on Networking, the late-breaking results, videos and demos, and of course the 1-Minute-Madness. The latter featured some funny moments when presenters whose presentations failed still tried to get noticed and stand out from the rest of the participants. Unfortunately I did not take pictures, but I’m sure others did. Judging by the content of the talks, both at the workshop and at the main conference, persuasive games seem to be becoming a popular research topic.

A really impressive and useful system I saw today was Haggle. It tries to abstract away the lower-level network protocols, allowing you, for example, to send an email to someone sitting next to you without requiring an internet connection (falling back on Bluetooth or ad-hoc P2P networking).

During the poster and demo session, there was one cool demo stand that almost constantly had about ten people standing around it: VoodooSketch. The authors presented a drawing program on an interactive table that lets you draw your own user interface widgets, combined with tangible controls (buttons, knobs, etc.). You can attach these to a function by writing a label next to them. For example, writing the label opacity next to a line you drew turns it into a slider that controls the opacity of the drawing.

The city of Innsbruck is very beautiful and offers some of the most amazing views. The room the DIPSO workshop was held in had a large window looking out on the mountains, which made it hard to stay concentrated.

DIPSO 2007 paper accepted

The paper we submitted to DIPSO 2007 (a workshop at this year’s Ubicomp conference) has been accepted.

Title: Making Bits and Atoms Talk Today – A Practical Architecture for Smart Object Interaction

Authors: Jo Vermeulen, Ruben Thys, Kris Luyten and Karin Coninx

Overview figure for "Making Bits and Atoms Talk Today" paper at DIPSO 2007

Abstract:
Bringing together the physical and digital worlds has been the subject of research for some time now. In particular, a number of successful prototypes that link physical objects with digital information (often called smart object systems) have already been presented. However, a generally accepted architecture to design such systems has not yet emerged. This paper presents a reusable and practical framework for developing smart object applications today. At the basis of our approach lies the use of Semantic Web technology to drive interaction between the physical and digital worlds. We used this framework to develop SemaNews, a novel application that combines the advantages of digital news feeds with those of physical newspapers. We prove that our architecture is reusable by building a second prototype in a different application domain: STalkingObjects implements the basic components of a store of the future.

Venue and date: Innsbruck, Austria, September 16, 2007