Tag: research

Living Tomorrow and public transport adventures

Yesterday I went to the FITCE event on the Internet of Things that I blogged about earlier, together with my colleague Geert Vanderhulst. At first, I wanted to go by car, but then I realized that would mean driving through rush hour on the Brussels ring road. Eventually, we decided to take the train to Leuven, another one from Leuven to Brussels North station, and from there a train to Vilvoorde. Unfortunately, there were some difficulties with the last leg of this chain.

Apparently there was a train that arrived at Brussels North station at the exact same time and at the same platform as the train we were supposed to take. This train also went to Antwerp Central Station, but had Amsterdam as its final destination. Once aboard, we realized too late that it didn’t stop in Vilvoorde. So we got off in Mechelen (the first stop) and took another train to Vilvoorde. Normally this train would have arrived in Vilvoorde in time for us to catch the bus to Living Tomorrow, but that evening it ran as an L train, meaning it stopped at every station along the way. When we finally arrived in Vilvoorde, the last bus to Living Tomorrow before 19:00 had already left; the next one was at 20:15. After asking a bus driver, we found another bus that stopped close to the venue (bus 47). After taking it, we finally arrived somewhere in the neighborhood of the Indringingsweg, but didn’t know where to go. Of course, it then started raining. Luckily Geert had his satellite navigation system with him to show us the way. When we finally entered the room, we had to pass right by the speakers and all the lights went on, so we couldn’t make an unremarkable entrance.

So what about the talks? Although it’s always interesting to see how people appreciate ubicomp technologies when they get integrated into their daily lives, I didn’t learn anything really new. A lot of the technologies and prototypes that were mentioned were familiar to me. One of the things I hadn’t heard about yet was washable RFID tags.

After the talks we got a tour through the house of the future. Again, a couple of the technologies they showed had already been integrated into real-life products or were already well investigated in research. There was a prototype by Volvo featuring parking sensors, blind spot cameras, lane tracking and a system to avoid collisions in traffic jams. The more advanced of these technologies were mentioned in Donald Norman’s talk in Leuven last year. There was also a store of the future and a kitchen of the future. The presentations and film fragments of the talks are going to be put online soon. If I don’t forget, I’ll update this post with a link to the material.

But even after the event our public transport nightmare wasn’t over. I entered some information incorrectly in the travel planner of De Lijn, so the bus we wanted to take back to the station didn’t run until after our train had left. Luckily Geert Houben (another colleague) came by car and dropped us off at Vilvoorde station in time. From there we went back to Brussels North, where we took the train to Genk. But not before having an unhealthy but satisfying snack.

Geert

Fast food on the train

Using a Wiimote to realize the Minority Report user interface

Via Gizmodo:

This Wiimote hack is one of the more astounding mods we’ve seen to Nintendo’s pride and joy, but even more remarkably, it’s really only taking advantage of the Wiimote’s IR and Bluetooth capabilities to create what may be the multitouch mecca — multitouch without the touch. So would you wear little reflective rings on your fingers to have tactile control of your television screen? We would. In a heartbeat. And then we’d call Captain Planet to kick some ass when we’re finished watching 30 Rock.

Very cool stuff. Since almost everyone at our institute has a Wii nowadays (including me), this should not be too hard to create ourselves.

[youtube:http://www.youtube.com/watch?v=0awjPUkBXOU]
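To get a feel for what such a hack involves: the Wiimote’s IR camera reports blob positions on a 1024×768 sensor grid, and the multitouch effect comes from tracking two of those blobs (the reflective fingers) over time. The sketch below shows the gesture math only — the function names are my own, and actually reading the blobs would of course require a Bluetooth/Wiimote library.

```python
import math

# The Wiimote's IR camera tracks up to four blobs on a 1024x768 grid.
SENSOR_W, SENSOR_H = 1024, 768

def to_screen(point, screen_w, screen_h):
    """Map a raw IR blob position to screen coordinates.
    The camera sees a mirror image, so the x axis is flipped."""
    x, y = point
    return ((1 - x / SENSOR_W) * screen_w, (y / SENSOR_H) * screen_h)

def pinch_update(prev_pair, curr_pair):
    """Given the previous and current positions of two tracked IR
    blobs, return the zoom factor and rotation angle (in radians)
    to apply to the on-screen content."""
    (ax0, ay0), (bx0, by0) = prev_pair
    (ax1, ay1), (bx1, by1) = curr_pair
    d0 = math.hypot(bx0 - ax0, by0 - ay0)
    d1 = math.hypot(bx1 - ax1, by1 - ay1)
    zoom = d1 / d0 if d0 else 1.0
    angle = (math.atan2(by1 - ay1, bx1 - ax1)
             - math.atan2(by0 - ay0, bx0 - ax0))
    return zoom, angle
```

Moving the two fingers apart doubles the distance between the blobs and yields a zoom factor of 2; rotating the pair yields the rotation angle. Everything else in the demo is this loop applied frame by frame.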

The author of the video is Johnny Lee, who works at Carnegie Mellon. I just had a quick look through his impressive list of publications (UIST, SIGGRAPH, DIS, CHI, etc.) and found an interesting paper on how one can predict the task a user is currently performing by analyzing their EEG signals. It’s on my reading list about general sensing techniques (I hope I find some time soon to start reading papers again).

Interesting talks coming up

I’m attending two interesting talks this month (together with some colleagues).

Adam Greenfield is coming to Leuven on November 27th. He wrote the book Everyware: The Dawning Age of Ubiquitous Computing and gave a keynote at Pervasive this year.

Adam Greenfield

Next Monday, I’m going to Living Tomorrow in Vilvoorde for a session on the Internet of Things. I still have to figure out some issues with the registration though.

Living Tomorrow

VR: after the hype

Lode wrote in his last post (amongst other things) that the hype around Virtual Reality is over. In my opinion, this doesn’t have to be a bad thing. Maybe a fresh (and more realistic?) view on virtual reality and its possible uses can help.

As a comparison, look at the original promise of artificial intelligence (also called strong AI) versus the current, more realistic view (weak AI). Just as weak AI revived the field’s fortunes, Yvonne Rogers believes that ubicomp research which enables people to become smart and proactive, instead of focusing on a smart environment as in Weiser’s original vision, can help bring success to the field.

Speaking of ubiquitous computing, I think that research in ubiquitous computing and more natural forms of interaction can benefit in some part from the previous work in Virtual Reality. Virtual Reality provided a way to interact with a three-dimensional world instead of using the traditional keyboard and mouse (albeit a virtual world), while one of the goals of ubiquitous computing is to interact in a natural way with the real world (which is of course three-dimensional).

Lode also referred to the Reality-Virtuality (RV) Continuum, which I hadn’t heard of yet. It will certainly be interesting to have a look at. I think it all depends on how you define things. Mark Weiser for example referred to ubiquitous computing as the opposite of Virtual Reality, namely embodied virtuality.

Ubicomp 2007: day two and three

I finally found some time to go through my notes from Ubicomp 2007. Since I already blogged about the first day, I’m going to start this overview on Tuesday. This is not a complete overview, but just a list of talks that I found interesting.

The first session had a nice talk titled “My Roomba is Rambo”, which studied why people get emotional about their appliances, and why we should care. This is similar to what Philips did with the iCat. Apparently, people seemed to forgive their appliances for their mistakes, given that they were emotionally attached to them (e.g. helping a Roomba that got stuck).

The next session, on location, featured an interesting talk by David Dearman on a method to predict location errors. They evaluated their system by letting people locate posters as fast as possible, while varying the location error and using different algorithms to estimate the error, including their own. There were a lot of talks on security, including one in this session on security by spatial reference by Rene Mayrhofer. He made an interesting point: the methods of security and authentication we use today (e.g. passwords) are impractical for ubicomp environments.

Shwetak Patel presented his work on Tuesday as well, and received the best paper and best talk awards. His idea was very innovative: checking for noise on the power lines in a house to detect activity (e.g. opening the microwave turns on a light, which can be detected). The system is quite accurate, although portable devices could be more difficult to support since a training period is required. In the same session there was another security talk, on shaking two devices together to generate a unique key for authentication. This illustrates that there were definitely a lot of creative ideas at Ubicomp.

Tuesday evening we had the conference dinner up in the mountains, which was quite nice (with a traditional Austrian band that played all kinds of music, including Tom Jones), but it was very cold up there.

Wednesday started with a talk by Tim Kindberg (of Cooltown fame) titled “Merolyn the Phone: a study of Bluetooth naming practices”. He started off with a slide showing a list of names of Bluetooth phones detected in the conference room. Apparently, the people featured in his study were more creative than we were (I was guilty as well, with the not very original name “Jo’s K750i”). The story behind the name Merolyn the Phone was pretty funny as well.

Next was a talk by Yvonne Rogers, whose very interesting article I read last year (after it was mentioned on Fabien’s blog). The talk was basically about how ubicomp technology cannot be evaluated in a lab setting and needs real-world testing.

Another interesting talk in this session discussed the Whereabouts Clock, which reminded me vaguely of the AmbientClock. In the session on privacy, Karen P. Tang (if I’m not mistaken) presented privacy controls in IMBuddy, a contextual instant messenger. They allowed people to disclose information at different levels of granularity and to get notified when someone queried their presence.

In my opinion, the best presentation was given by Scott Davidoff, who presented speed dating as a method to quickly evaluate different design decisions. His slides are online at Slideshare.net.

The final talk by David Molyneaux showcased an impressive steerable projector system. The innovative part (according to my understanding) was that objects stored and controlled their data (e.g. sensor readings) and metadata (e.g. 3D model) themselves, and decided when to send this to the projector. For example, when two objects with the same appearance are in a room, and one is moving and the other one isn’t (detected with accelerometers), they notify the projector which can then distinguish between them. When an object’s geometry is changed (e.g. when a book is opened), it detects this through sensors and accordingly sends its updated 3D model to the projector.

All in all, the conference was very interesting as was the workshop.