STRP festival redux

This is a quick update on my previous post about the STRP festival. Apparently, the technology expo continued at night, so I was able to see some of the demonstrations anyway. I wanted to share a few of them just to give an impression.

A set of large screens mounted in the expo displayed a video of a woman’s face together with a power meter. When the lady smiles (no pun intended), the power bar is green and filled to the top. When she stops smiling, it drops to the bottom and turns red. It took me a while to get the meaning of this power meter, since the woman only stops smiling once in a while. There is a video available that gives some more details.

Face recognition (smile detection) @ STRP

I also had the chance to experiment with the i_AM table, which was not very impressive in my opinion. Although it was definitely simpler than the reactable, as a consequence it did not offer much functionality. Each object placed on the table was linked to a sample (e.g. a guitar loop), with a volume depending on its position on the table. When an object was pushed up or down on the table, its volume changed from loud to quiet. When it was moved to the left or to the right, it became linked to another sample. When you turned an object, the portion of the sample that was repeated could be altered. I did not find these mappings logical, but then again, the problem with these kinds of systems is that the user needs a way to discover an object’s affordances. This can be done by using objects that represent their affordances explicitly, or by displaying something helpful next to the objects. The i_AM table did not offer any way to find out about an object’s functionality.

i_AM demo @ STRP

I only took a few pictures, and most of them were blurry due to the low-quality camera on my cellphone, but there are quite a few pictures of the festival available on Flickr, including the demo of Johannes (it seems his demo was also covered by DJ BROADCAST and Eindhovens Dagblad). Another cool concept was Vinyl workout, where a record was projected on the floor and could be played by running around its surface in the direction you wanted it to go. Motor karaoke was a demo I didn’t visit, but it would have been fun to try. It is a bike race where the motorcycle is powered by the player’s voice: the louder the player screams, the faster the bike goes.

Oh, and the concerts were good as well.

STRP festival

Time passed quickly this week, it’s Friday evening already. Tonight I am going to the STRP festival in Eindhoven.

STRP festival logo

STRP (pronounced “strijp” in Dutch) is a four-day festival that can be seen as a combination of a technology exposition and a music festival. Yesterday The Chemical Brothers opened the festival. Today, Radio Soulwax, Roisin Murphy, Mr. Oizo and Goose, among others, are performing.

I actually found out about this festival through Last.fm, which emails me when there is an event nearby featuring artists I like. I had a look at the day program and found out that Johannes Taelman (one of our new colleagues) will be presenting a system he built. Another interesting project at the festival is i_AM, which is similar to the reactable and Audiopad projects. According to the authors of i_AM, the main difference with other projects that allow music composition through tangible interaction is that their system is more usable for novices. Unfortunately, I won’t arrive in Eindhoven until around midnight, so I won’t be able to check out the technology demonstrations.

The concerts should be good, I’m looking forward to seeing the Dewaele Brothers.

Beyond the desktop metaphor: Lifestreams and Haystack

I spent part of my lazy Sunday on reading a few articles in Beyond the Desktop Metaphor: Designing Integrated Digital Work Environments, a book that Kris dropped on my desk a few weeks ago. It gives an overview of the state-of-the-art in integrated digital work environments and is edited by Victor Kaptelinin and Mary Czerwinski.

Beyond the Desktop Metaphor

I went through the chapters on Lifestreams by Eric Freeman and David Gelernter and Haystack by David R. Karger.

Lifestreams was an alternative to the desktop metaphor, developed from 1994 onwards, that aimed to be a better way to organize your personal electronic information. One of the primary motivations for this work is the limitations of a static (hierarchical) filesystem. The problem with organizing our documents in the filesystem hierarchy is that information generally falls into fuzzy categories and that it is impossible for users to generate categories which remain unambiguous over time. Furthermore, users are forced to name their files, which often results in meaningless file names such as “draft1.doc” and “draft2.doc”. Names are an ineffective way of categorizing information, since their value decays over time. Traditionally, people do not name their documents, as Thomas Malone pointed out in his paper How do people organize their desks? Implications for the design of office information systems. He noticed that people often just create nameless stacks of related documents on their desk. Freeman and Gelernter discuss a few other problems with the desktop metaphor, such as its lack of support for archiving, reminding and summarizing. The desktop metaphor does not make it easy to archive information: to put information somewhere we can later retrieve it, but also remove it from our periphery. Users often place information on their desktop to remind them of tasks to do, or leave an email in their inbox to remind them that they still need to reply to it. As the desktop has no semantic notion of reminding, users are just working around the system. Finally, summaries are needed in order to cope with all our electronic information. The authors state that summaries are often application-centric (e.g. an overview of your photo albums, a summary of your music, etc.), instead of system-wide.

I found it interesting that the authors do not see their architecture as another metaphor, but as a unified idea or system. They refer to Nelson’s concept of virtuality as opposed to metaphorics. Nelson (who also coined the term hypertext) argues that adherence to a metaphor prevents the emergence of things that are genuinely new. Trying to adhere to a metaphor may lead to strange results when new functions are added, for example having to drag a CD icon to the trash to eject it on Mac OS X.

A lifestream is a time-ordered stream of documents that functions as a diary of a user’s electronic life. Every document he or she creates is stored in the lifestream. Moving forward from the tail to the present, the stream contains more recent documents. Moving beyond the present into the future, the stream contains documents that the user will need (e.g. reminders, calendar items, etc.). The system has a few primitive operations that together support transparent storage, organization through directories on demand, archiving, reminding and summaries: new, copy, find and summarize. New and copy are used to create or copy documents in the lifestream or between lifestreams. Documents do not have to be named. The find operation allows users to search their documents. It creates a substream with the results of the query. These substreams are not static, but are updated on the fly whenever new documents that are relevant to their query appear. Users can allow substreams to persist, in order to quickly find information they need regularly (e.g. “emails from Joe”). Finally, summarize compresses a substream into an overview document. The method of summarizing varies according to the content of the substream (e.g. a music playlist, a prioritized to-do list, etc.). The figure below shows the Lifestreams user interface:

Lifestreams
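To make these primitives concrete, here is a small Python sketch of the core ideas. This is my own illustration, not the authors’ implementation, and all names are made up: documents are created without names, and find produces a persistent substream that stays up to date, with a trivial summarize.

```python
from datetime import datetime

class Lifestream:
    """A minimal, illustrative lifestream: a time-ordered list of documents."""

    def __init__(self):
        self.documents = []  # kept sorted by timestamp

    def new(self, content, timestamp=None, **attributes):
        """Add a document; note that no file name is required."""
        doc = {"content": content,
               "time": timestamp or datetime.now(),
               **attributes}
        self.documents.append(doc)
        self.documents.sort(key=lambda d: d["time"])  # stable sort keeps order
        return doc

    def find(self, predicate):
        """Create a persistent substream: a live view of matching documents."""
        return Substream(self, predicate)

class Substream:
    def __init__(self, stream, predicate):
        self.stream = stream
        self.predicate = predicate

    def documents(self):
        # Re-evaluated on demand, so newly added matching documents
        # appear in the substream automatically.
        return [d for d in self.stream.documents if self.predicate(d)]

    def summarize(self):
        """A trivial summary: one line per document, oldest first."""
        return "\n".join(str(d["content"]) for d in self.documents())
```

A persistent substream such as “emails from Joe” would then be `stream.find(lambda d: d.get("sender") == "Joe")`: adding a new email from Joe later makes it show up in the substream without re-running the query by hand.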

It’s interesting to see that many of the ideas first explored in Lifestreams are currently supported by several applications. Archiving was one of Gmail’s defining characteristics (“never lose a message again!”) when it was first released. Apple’s iApps such as iTunes offer summarization, dynamic substreams (“smart playlists”) and time-based visualizations. Desktop search tools such as Google Desktop, Apple Spotlight and Beagle offer a way to quickly find items on your computer. Some of them also offer saved searches (which is again similar to “dynamic substreams”). The authors also discuss this evolution. However, they feel that desktop search, while definitely a step in the right direction, is not sufficient. It only works if you know what to look for. People really need good browse engines instead of search engines. This statement is also made in the next chapter on Haystack, where it is called orienteering.

Haystack can be seen as a generalization of Lifestreams. Haystack is a way to visualize and organize a user’s information, but does not restrict the visualization and categorization to be time-based. The authors try to find a solution for the fact that current applications force users to manage information in the way that the application designer envisioned it. This might not be the most natural way for the users, so Haystack gives the users more control over what kinds of information they store and how to visualize and manage it. In traditional email applications, for example, we can only categorize by predefined labels (e.g. sender, subject, etc.), but not by our own features such as “needs to be handled by such-and-such a date”. The information may even be in the application, but no appropriate interface is offered to use it. Furthermore, every application manages its own data independently, while we might want to relate data from different applications (e.g. emails, articles, blog posts, pictures, songs, people, etc.). A user might also want to add a new data type. Consider the location field in a calendar event: this is just a string, while the user might want a richer presentation (Google Calendar can do this by linking to Google Maps, by the way). Existing applications are very bad at extending existing types, since they offer no way of displaying the new type, no operations for acting on it and no way of connecting it to other information objects in the application.

Haystack has a generic user interface architecture that supports impressive personalization. Users can for instance create a new “Send to Joe” operation by filling in part of the “Send to” operation, and saving it. Objects can be dragged upon each other to connect them: dragging an object onto a collection adds it to the collection, while dragging an object onto a dialog box argument binds that argument to the dragged item.

Haystack

Custom workspaces can be constructed by drag and drop. The figure below shows a workspace specialized for writing a particular research paper, presenting amongst others relevant references, coauthors and outstanding to-do’s.

Haystack workspace

The system uses Semantic Web technology (more specifically RDF and URIs) to represent information objects, their attributes and relationships to other information objects. However, they do not enforce schemata (such as RDFS or OWL), in order to allow users to organize information the way they want. It is after all difficult to create an ontology that serves everyone’s needs. Consider for example the composer attribute of a symphony concept. A reasonable constraint is to restrict composers to be people. But this would prevent a user who is interested in computer music from entering a particular computer program as the composer. The authors state that schemata may be of great advisory value, but they argue against enforcing them. Apparently this is also known as a semi-structured data model.
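The semi-structured idea can be illustrated with a toy Python sketch. This is my own illustration, not Haystack’s actual data model: triples are accepted unconditionally, and a schema is consulted only to produce advisory warnings, never to reject data.

```python
class TripleStore:
    """A toy semi-structured store: plain (subject, attribute, value)
    triples, with no schema enforced on write."""

    def __init__(self):
        self.triples = []

    def add(self, subject, attribute, value):
        self.triples.append((subject, attribute, value))

    def values(self, subject, attribute):
        return [v for (s, a, v) in self.triples
                if s == subject and a == attribute]

def advisory_warnings(store, schema):
    """Check triples against a schema of the form {attribute: expected_type},
    where an entity's type is itself stored as an ordinary "type" triple.
    Violations produce warnings, but the data is never rejected."""
    warnings = []
    for (s, a, v) in store.triples:
        expected = schema.get(a)
        if expected and expected not in store.values(v, "type"):
            warnings.append(f"{a} of {s} is {v}, which is not a {expected}")
    return warnings
```

With a schema saying composers should be people, entering a computer program as the composer of a piece merely yields a warning, while the triple itself is stored and remains queryable.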

I think this is the most impressive Semantic Web application I have seen, although I am also looking forward to testing Twine and Powerset. I have barely touched upon everything Haystack can do in this blog post, so if you are not yet convinced, have a look at a paper that is pretty similar to the book chapter. The level of customization supported by Haystack reminded me of the Meta-UI concept (which I see as a user interface to manipulate an interactive system or its user interface) as discussed by Coutaz at Tamodia’06.

Although Lifestreams and Haystack would certainly improve the way we manage our data, I feel they both ignore an important type of information: information in the physical world. After all, a substantial amount of the information we process is non-digital. Last year, I had a project proposal for the course Actuele Trends in HCI (translated: “Current trends in HCI“) on improving the way we work with digital and physical information. Given that the students had little time for this project, the result was pretty nice.

Pluggable typedecoders for Uiml.net

I spent some time over the last few weeks adding support for type decoding plugins in Uiml.net. This is mainly useful when you want to interact with applications or web services that have their own types, which need to be converted to something the widget set understands. Suppose, for example, that a web service returns a set of Persons that need to be represented in a list view. The renderer does not know how to transform a Person into an item of a list view, so you need to define a custom component that sits between the renderer and the web service and provides this conversion. However, since you don’t know which widget set is used, you have to do this for every possible widget set (e.g. System.Windows.Forms, Gtk#, System.Windows.Forms on the Compact Framework, etc.). Furthermore, it would be better to let the renderer manage this code.

So I created a type decoder plugin system and, while I was at it, also cleaned up the code. This resulted in only one general TypeDecoder instance being created in the renderer, whereas we previously had one instance per backend. Now we have a container class in each backend to host widget set-specific type decoders. This container class gets registered with the TypeDecoder, and is in fact also a plugin.

Instead of going into the implementation details, let’s have a look at an excerpt from the System.Windows.Forms container class (SWFTypeDecoders.cs):

using System;
using Uiml.Rendering;
using Uiml.Rendering.TypeDecoding;

public class SWFTypeDecoders
{
    // Converts a string such as "10,20" into a System.Drawing.Point.
    [TypeDecoderMethod]
    public static System.Drawing.Point DecodePoint(string val)
    {
        string[] coords = val.Split(new Char[] {','});
        return new System.Drawing.Point(Int32.Parse(coords[0]), Int32.Parse(coords[1]));
    }
}

The only thing we have to do to define a type decoder method is add the [TypeDecoderMethod] attribute and take the type we want to convert from as the method’s parameter. The return type is the type we convert to. In the above listing, the DecodePoint method converts a string to a System.Drawing.Point. The [TypeDecoderMethod] attribute declares that the corresponding method is a type decoder; this way, other auxiliary methods will not be registered and won’t pollute the type decoder registry.

To test the implementation, I created a simple class that connects to del.icio.us and retrieves all my tags. I use this class to show them in a Gtk# UIML GUI. To be able to convert between the XML document that is returned by del.icio.us and the user interface, I wrote a custom type decoder and connected it to the renderer. I also have a short screencast showing how it works.

Pluggable typedecoders example

I have extended the Uiml.net type decoder to combine existing type decoding methods when a direct conversion is not supported. In this example I created a type decoder to convert from a System.Xml.XmlDocument to a Uiml.Constant. But the Gtk.TreeView widget requires a Gtk.TreeModel. The renderer will therefore look for a conversion from a Uiml.Constant to a Gtk.TreeModel and apply the type decoders in sequence. We could have converted directly to this data type, but that is not as flexible, since it is widget set-specific. Although the interface will remain the same, I will probably change the underlying implementation to a graph with the types as vertices and the type decoders as edges, to better support these indirect conversions.
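To sketch what such a graph-based implementation could look like, here is an illustrative Python analogue (the real Uiml.net code is C#, and these names are mine, not the library’s): decoders are edges between type names, and a breadth-first search finds the shortest decoder chain, which is then applied in sequence.

```python
from collections import deque

class TypeDecoder:
    """Illustrative registry of conversions, viewed as a graph with types
    as vertices and decoder functions as edges."""

    def __init__(self):
        self.decoders = {}  # (from_type, to_type) -> conversion function

    def register(self, from_type, to_type, func):
        self.decoders[(from_type, to_type)] = func

    def decode(self, value, from_type, to_type):
        # Breadth-first search over types to find the shortest decoder chain.
        queue = deque([(from_type, [])])
        seen = {from_type}
        while queue:
            current, chain = queue.popleft()
            if current == to_type:
                for step in chain:  # apply the decoders in sequence
                    value = step(value)
                return value
            for (src, dst), func in self.decoders.items():
                if src == current and dst not in seen:
                    seen.add(dst)
                    queue.append((dst, chain + [func]))
        raise LookupError(f"no decoder chain from {from_type} to {to_type}")
```

For example, registering a string-to-integer decoder and an integer-to-hex decoder lets `decode` convert a string to hex in two chained steps, even though no direct decoder exists for that pair.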

Living Tomorrow and public transport adventures

Yesterday I went to the FITCE event on the Internet of Things I blogged about earlier, together with my colleague Geert Vanderhulst. At first, I wanted to go by car, but then I realized that meant going through the rush hour on the Brussels ring road. Eventually, we decided to take the train to Leuven, take another one from Leuven to Brussels North station, and from there take the train to Vilvoorde. Unfortunately, there were some difficulties with the last leg of this chain.

Apparently there was a train that arrived at Brussels North station at the exact same time and at the same platform as the train we were supposed to take. This train also went to Antwerp Central Station, but had Amsterdam as its final destination. Once we were on it, we realized too late that it didn’t stop in Vilvoorde. So we got out in Mechelen (the first stop) and took another train to Vilvoorde. Normally this train would have arrived in Vilvoorde in time for us to take the bus to Living Tomorrow, but that evening it changed to an L train, meaning that it stops at every station on its way. When we finally arrived in Vilvoorde, the last bus to Living Tomorrow before 19:00 had already left. The next one was at 20:15. After asking a bus driver, we found another bus that stopped close to the venue (bus 47). After taking this bus, we finally arrived somewhere in the neighborhood of the Indringingsweg, but didn’t know where to go. Of course, it then started raining. Luckily Geert had his satellite navigation system with him to show us the way. When we finally arrived in the room, we had to pass by the speakers and all the lights went on, so we couldn’t make an inconspicuous entrance.

So what about the talks? Although it’s always interesting to see how people appreciate ubicomp technologies when they get integrated into their daily lives, I didn’t learn anything really new. A lot of the technologies and prototypes that were mentioned were familiar to me. One of the things I hadn’t heard about yet was washable RFID tags.

After the talks we got a tour through the house of the future. Again, a couple of the technologies they showed had already been integrated into real-life products or were already well investigated in research. There was a prototype by Volvo with parking sensors, blind-spot cameras, lane tracking and a system to avoid collisions in traffic jams. The more advanced of these technologies were mentioned in Donald Norman’s talk last year in Leuven. There was also a store of the future and a kitchen of the future. The presentations and film fragments of the talks will be put online soon. If I don’t forget, I’ll update this post with a link to the material.

But even after the event our public transport nightmare wasn’t over. I entered some information incorrectly on the travel planner of De Lijn, so the bus we wanted to take back to the station didn’t run until after our train had left. Luckily Geert Houben (another colleague) came by car and dropped us off at Vilvoorde station in time. So we went back from Vilvoorde to Brussels North, where we took the train to Genk. But not before having an unhealthy but satisfying snack.

Geert

Fast food on the train