Tag: ubicomp

Missed a talk by Nicolas Nova in Brussels

I found out a bit too late that Nicolas Nova would be giving a talk at iMAL in Brussels yesterday. Luckily, he always puts his slides online.

Nicolas Nova

The talk also explained his (seemingly random) blog title, “Pasta&Vinegar”: the hybridization of digital and physical environments is explored both by academic researchers (pasta) and by artists and designers (vinegar). At iMAL he talked about why vinegar is important for pasta.

His slides contain lots of interesting and creative ideas, such as blogjects, augmenting animals (e.g. a dog with sensors that controls a WoW character) and a tooth implant that vibrates when you have an incoming call.

If you want to invent something that will be used 10 years from now, who can you observe? Nicolas states that looking at new media, art and design can give us clues. He also explains that art and design are better at conveying people’s desires for the future, and shows a typical diagram from an IT company that is unappealing and too focused on the technology in the background. He finally refers to the use of technology in art; SIGGRAPH’s Emerging Technologies and Art Gallery are good examples of this, and of combining pasta and vinegar.

Making things talk

A few weeks ago I came across a blog post by Cati Vaucelle about Making Things Talk, the new book by Tom Igoe. The book deals with building smart, communicating things. It is built up out of specific projects and uses practical examples to explain different technologies. Tom works at NYU’s ITP (where Adam Greenfield also works).

Through a series of simple projects, this book teaches you how to get your creations to communicate with one another by forming networks of smart devices that carry on conversations with you and your environment. Whether you need to plug some sensors in your home to the Internet or create a device that can interact wirelessly with other creations, Making Things Talk explains exactly what you need.

The book seemed really useful to me for learning how to build smart things and prototype a ubicomp environment. Unfortunately I was never really exposed to electronics, so this might be a good way to catch up. I pointed Kris at the book, and he ordered a copy afterwards. I had a quick look at it, and I must say it is well-written and fun to read. You need some hardware to really dive in though.

Making Things Talk

The author uses Processing and Arduino as the basic building blocks. I was pleasantly surprised that the programming environment works perfectly under Mac OS X and GNU/Linux (while it also supports Windows). I would also like to experiment with it at home, for instance to build a remote-controlled mood light. Apparently a Wii Nunchuk is also pretty popular for connecting to an Arduino, as it sports a 3-axis accelerometer, a joystick and two buttons for under $20 and speaks the I2C protocol.
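The Nunchuk’s I2C report is simple enough to decode by hand. Below is a Python sketch of the commonly documented 6-byte layout (joystick in bytes 0–1, accelerometer high bits in bytes 2–4, buttons and accelerometer low bits in byte 5). I am going from community documentation of the protocol here, so treat the details as a sketch rather than a reference:

```python
def decrypt(b):
    # Original (encrypted) Nunchuks obscure each byte on the wire;
    # the commonly documented decode step is (b ^ 0x17) + 0x17.
    # Nunchuks initialised in unencrypted mode can skip this.
    return ((b ^ 0x17) + 0x17) & 0xFF

def decode_packet(raw):
    """Decode a 6-byte Nunchuk report into joystick, accelerometer
    and button state."""
    d = [decrypt(b) for b in raw]
    # Accelerometer readings are 10 bits wide: the high 8 bits sit
    # in bytes 2-4, the low 2 bits are packed into byte 5.
    return {
        "joy_x": d[0],
        "joy_y": d[1],
        "accel_x": (d[2] << 2) | ((d[5] >> 2) & 0x03),
        "accel_y": (d[3] << 2) | ((d[5] >> 4) & 0x03),
        "accel_z": (d[4] << 2) | ((d[5] >> 6) & 0x03),
        # Button bits are active-low: 0 means pressed.
        "button_c": not (d[5] & 0x02),
        "button_z": not (d[5] & 0x01),
    }
```

On an actual Arduino you would read the six bytes over I2C (e.g. with the Wire library) and apply the same bit-twiddling; the decoding logic is the interesting part and is identical in any language.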

Adam Greenfield’s talk in Leuven

On Tuesday I went to the talk of Adam Greenfield in Leuven, organized by the Microsoft Research Chair on Intelligent Environments. The main topic of his talk was the social and ethical implications of ubiquitous computing. Adam started his talk by saying that there are a lot of ubicomps. He uses the term everyware to cover Weiserian ubicomp, pervasive computing, tangible media and ambient intelligence. Everyware is free from the baggage of Xerox PARC, free from politics and easy to understand. He defines it as distributed, networked information processing resources that are embedded in the environment. Adam sees everyware as inherently multi-disciplinary. He works at the NYU Interactive Telecommunications Program which hosts both people with an artistic and a technical background.

Everyware

Similar to Bell and Dourish, Greenfield claims that the first stage of ubicomp is already here. Technologies such as RFID and NFC, together with devices such as the iPod and iPhone, already amount to ubiquitous computing. However, the way we interact with them has not yet changed significantly. He says we already have robust information processing in our environment today (and that this can be seen in standards such as IPv6 and in devices such as Proliphix Network Thermostats). An interesting point he made was that when he started giving these talks 18 months ago, most of the examples he used were research prototypes, but now most examples are commercial products. Furthermore, adoption of these new technologies and products is unproblematic. In Hong Kong, 95% of the people between 16 and 65 use the Octopus RFID metro pass system.

Concerning the consequences of everyware, he referred to the Panopticon, an 18th century prison that was optimized for surveillance. The guards could see the prisoners’ cells at any time, while the prisoners could never see the guards. The prisoners’ default state was to be monitored, so they acted accordingly. The same might happen with everyware that is watching us and sending information out: we may get used to it and just watch our steps more closely. Here is an example of such a prison (image courtesy of Wikipedia):

Presidio

Greenfield talked about the design of everyware and referred to Naoto Fukasawa’s idea of “design dissolving in behavior”. It comes down to closely observing people’s everyday behavior and trying to improve it with a solution that is as simple as possible. Design has to achieve an object “without thought”: people shouldn’t have to think about an object when using it. This also came up during the DIPSO workshop, and is a feature that was lacking from the i_AM table (How do you use it and what can you do with it?). As an example, Greenfield referred to the Octopus transit system again, which allows you to quickly pass the metro gate with an RFID-tagged metro pass in your pocket or bag, since the reader’s range is large enough to read it as you walk by.

He continued with future issues and mentioned inadvertent, unknowing and unwilling use of everyware. The first issue can occur when you mistakenly publish your location to the whole world instead of to your closest friends; the cost of inadvertent use rises with everyware. Unknowing use might occur when a user walks over a sensor on the ground that recognizes when someone is walking on it, but does not know that he can later be identified, since we all have a unique walking pattern. Finally, unwilling use can occur when people don’t want to use everyware technology but are forced to do so; for example, you may need an RFID-enabled metro pass to get on the metro in Hong Kong. He also briefly discussed security (when all objects are connected, one object might trigger behavior in another, e.g. a failure could make your automatic garage door go up and down) and the digital burden of having to deal with your digital traces (Should I pose this query? Can it be traced?).

Finally, Greenfield said it’s time to take everyware seriously and proposed 5 principles to design everyware:

  1. Be harmless
  2. Be self-disclosing
  3. Be conservative of face
  4. Be conservative of time
  5. Be deniable

Harmlessness refers to safety: everyware should always try to ensure users’ safety (physical, psychic, financial). It is graceful degradation taken further. The second principle refers to smart objects announcing their functionality. Greenfield proposes to use icons to indicate data collection, support for gestural interfaces or self-describing objects:

Information collected Gestural interface Self-describing object

The third principle means that everyware should never unnecessarily embarrass, humiliate or shame its users. Society has a necessary membrane of protective hypocrisy, according to Greenfield. Examples include the strict categorisations used on social networking websites such as Flickr (e.g. friends or family), or what happened to Robert Scoble on Facebook a while ago.

Principle 4 refers to everyware not introducing undue complication into ordinary operations (this is what Weiser actually referred to as invisible computing, in my opinion). Everyware should not take over, and should assume that an adult, competent user knows what he or she is doing. Finally, users should always be given the ability to opt out (principle 5), with no penalty other than the inability to make use of the functionality that the ubicomp system offers. I believe that the last two principles cannot be realized by systems that make all the decisions for the user. An approach such as mixed-initiative interaction might be more appropriate.

Adam also talked about the cultural differences in safety and social status between Europe, the US and Asia. For one, in Asia it is common to have talking doors and elevators (which Lode also noticed on his trip to Japan), while this would drive most European people nuts. Everyware is consequently not universal, and should take into account the cultural conventions of the country or region where it is deployed.

All in all, I found the talk very interesting. Although most principles seem obvious, I have seen a fair share of ubicomp systems that violate them. I especially liked his proposal for everyware icons. People are coming up with unique names for Wii gestures which might also help in announcing how to interact with a system.

I really like the Intelligent Environments initiative, as it gave me the opportunity to see talks by Donald Norman, Boris De Ruyter (he replaced Emile Aarts) and now Adam Greenfield. The next speaker will be Kevin Warwick, so that promises to be interesting as well (have a look at his homepage or the Wikipedia article about him).

Beyond the desktop metaphor: Lifestreams and Haystack

I spent part of my lazy Sunday on reading a few articles in Beyond the Desktop Metaphor: Designing Integrated Digital Work Environments, a book that Kris dropped on my desk a few weeks ago. It gives an overview of the state-of-the-art in integrated digital work environments and is edited by Victor Kaptelinin and Mary Czerwinski.

Beyond the Desktop Metaphor

I went through the chapters on Lifestreams by Eric Freeman and David Gelernter and Haystack by David R. Karger.

Lifestreams was an alternative to the desktop metaphor, developed from 1994 onwards, that aimed to be a better way to organize your personal electronic information. A primary motivation for this work was the limitations of a static (hierarchical) filesystem. The problem with organizing our documents in the filesystem hierarchy is that information generally falls into fuzzy categories and that it is impossible for users to come up with categories that remain unambiguous over time. Furthermore, users are forced to name their files, which often results in meaningless file names such as “draft1.doc” and “draft2.doc”. Names are an ineffective way of categorizing information, since their value decays over time. Traditionally, people do not name their documents, as Thomas Malone pointed out in his paper How do people organize their desks? Implications for the design of office information systems: he noticed that people often just create nameless stacks of related documents on their desk.

Freeman and Gelernter discuss a few other problems with the desktop metaphor, such as the lack of support for archiving, reminding and summarizing. The desktop metaphor does not make it easy to archive information: to put information somewhere we can later retrieve it, but also remove it from our periphery. Users often place information on their desktop to remind them of tasks to do, or leave an email in their inbox to remind them that they still need to reply to it. As the desktop has no semantic notion of reminding, users are just working around the system. Finally, summaries are needed in order to cope with all our electronic information. The authors state that summaries are often application-centric (e.g. an overview of your photo albums, a summary of your music, etc.) instead of system-wide.

I found it interesting that the authors do not see their architecture as another metaphor, but as a unified idea or system. They refer to Nelson’s concept of virtuality as opposed to metaphorics. Nelson (who also coined the term hypertext) argues that adherence to a metaphor prevents the emergence of things that are genuinely new. Trying to adhere to a metaphor may lead to strange results when new functions are added, for example having to drag a CD icon to the trash to eject it on Mac OS X.

A lifestream is a time-ordered stream of documents that functions as a diary of a user’s electronic life. Every document he or she creates is stored in the lifestream. Moving forward from the tail to the present, the stream contains more recent documents. Moving beyond the present into the future, the stream contains documents that the user will need (e.g. reminders, calendar items, etc.). The system has a few primitive operations that together support transparent storage, organization through directories on demand, archiving, reminding and summaries: new, copy, find and summarize. New and copy are used to create or copy documents in the lifestream or between lifestreams. Documents do not have to be named. The find operation allows users to search their documents. It creates a substream with the results of the query. These substreams are not static, but are updated on the fly whenever new documents that are relevant to their query appear. Users can allow substreams to persist, in order to quickly find information they need regularly (e.g. “emails from Joe”). Finally, summarize compresses a substream into an overview document. The method of summarizing varies according to the content of the substream (e.g. a music playlist, a prioritized to-do list, etc.). The figure below shows the Lifestreams user interface:

Lifestreams
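To make the primitives concrete, here is a minimal Python sketch of a lifestream (the names and details are mine, not Freeman and Gelernter’s implementation). The key point is that find returns a live query rather than a snapshot:

```python
import time

class Lifestream:
    """A time-ordered stream of documents with the four Lifestreams
    primitives: new, copy, find and summarize (via Substream)."""

    def __init__(self):
        self.documents = []  # kept ordered by timestamp

    def new(self, content, timestamp=None):
        # Documents need no name; they are identified by content and time.
        doc = {"content": content, "timestamp": timestamp or time.time()}
        self.documents.append(doc)
        self.documents.sort(key=lambda d: d["timestamp"])
        return doc

    def copy(self, doc, other_stream):
        return other_stream.new(doc["content"], doc["timestamp"])

    def find(self, predicate):
        # A substream is a live query, not a snapshot: it is
        # re-evaluated against the stream each time it is read.
        return Substream(self, predicate)

class Substream:
    def __init__(self, stream, predicate):
        self.stream, self.predicate = stream, predicate

    def documents(self):
        return [d for d in self.stream.documents if self.predicate(d)]

    def summarize(self, summarizer=len):
        # The summary method varies with the substream's content;
        # here it is simply a pluggable function.
        return summarizer(self.documents())
```

A persistent substream such as “emails from Joe” is then just a saved predicate: documents added after the query was created show up in it automatically, which is exactly the behavior that later reappeared as smart playlists and saved searches.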

It’s interesting to see that many of the ideas first explored in Lifestreams are currently supported by several applications. Archiving was one of Gmail‘s defining characteristics (“never lose a message again!”) when it was first released. Apple’s iApps such as iTunes offer summarization, dynamic substreams (“smart playlists”) and time-based visualizations. Desktop search tools such as Google Desktop, Apple Spotlight and Beagle offer a way to quickly find items on your computer. Some of them also offer saved searches (which is again similar to “dynamic substreams”). The authors also discuss this evolution. However, they feel that desktop search, while definitely a step in the right direction, is not sufficient. It only works if you know what to look for. People really need good browse engines instead of search engines. This statement is also made in the next chapter on Haystack where it is called orienteering.

Haystack can be seen as a generalization of Lifestreams. Haystack is a way to visualize and organize a user’s information, but does not restrict the visualization and categorization to be time-based. The authors try to find a solution for the fact that current applications force users to manage information in the way that the application designer envisioned it. This might not be the most natural way for the users, so Haystack gives the users more control over what kinds of information they store and how to visualize and manage it. In traditional email applications, for example, we can only categorize by the labels that are predefined (e.g. sender, subject, etc.), but not by our own features such as “needs to be handled by such-and-such a date”. The information may even be in the application, but no appropriate interface is offered to use it. Furthermore, every application manages its own data independently, while we might want to relate data from different applications (e.g. emails, articles, blog posts, pictures, songs, people, etc.). A user might also want to add a new data type. Consider the location field in a calendar event: this is just a string, while the user might want a richer presentation (Google Calendar can do this by linking to Google Maps, by the way). Existing applications are very bad at extending existing types, since they offer no way of displaying the new type, no operations for acting on it and no way of connecting it to other information objects in the application.

Haystack has a generic user interface architecture that supports impressive personalization. Users can for instance create a new “Send to Joe” operation by filling in part of the “Send to” operation and saving it. Objects can be dragged upon each other to connect them: dragging an object onto a collection adds it to the collection, while dragging an object onto a dialog box argument binds that argument to the dragged item.

Haystack
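Saving a partially filled-in operation as a new one is essentially partial application. A tiny Python sketch (the operation name and signature are my own illustration, not Haystack’s actual API):

```python
from functools import partial

# A hypothetical two-argument "Send to" operation.
def send_to(recipient, message):
    return f"message to {recipient}: {message}"

# Filling in the recipient and saving the result yields a new,
# more specific operation, much like Haystack's "Send to Joe".
send_to_joe = partial(send_to, "Joe")
```

Calling `send_to_joe("see you at lunch")` then behaves exactly like calling the original operation with both arguments filled in; Haystack exposes the same idea through its drag-and-drop interface instead of code.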

Custom workspaces can be constructed by drag and drop. The figure below shows a workspace specialized for writing a particular research paper, presenting, among other things, relevant references, coauthors and outstanding to-dos.

Haystack workspace

The system uses Semantic Web technology (more specifically RDF and URIs) to represent information objects, their attributes and relationships to other information objects. However, it does not enforce schemata (such as RDFS or OWL), in order to allow users to organize information the way they want. It is, after all, difficult to create an ontology that serves everyone’s needs. Consider for example the composer attribute of a symphony concept. A reasonable constraint is to restrict composers to be people, but this would prevent a user who is interested in computer music from entering a particular computer program as the composer. The authors state that schemata may be of great advisory value, but they argue against enforcing them. Apparently this is also known as a semi-structured data model.
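The composer example above can be sketched as a toy triple store in which a schema warns but never rejects. This is illustrative of the semi-structured idea, not Haystack’s actual API:

```python
class Person:
    def __init__(self, name):
        self.name = name

class TripleStore:
    """A toy semi-structured store in the spirit of Haystack's model:
    any subject can carry any attribute, and schemata are advisory
    rather than enforced."""

    def __init__(self, advisory_schema=None):
        self.triples = []                              # (subject, attribute, value)
        self.advisory_schema = advisory_schema or {}   # attribute -> expected type
        self.warnings = []

    def add(self, subject, attribute, value):
        expected = self.advisory_schema.get(attribute)
        if expected is not None and not isinstance(value, expected):
            # Advise, but never reject: the user stays in control.
            self.warnings.append((subject, attribute, value))
        self.triples.append((subject, attribute, value))

    def query(self, subject=None, attribute=None):
        return [(s, a, v) for (s, a, v) in self.triples
                if (subject is None or s == subject)
                and (attribute is None or a == attribute)]
```

With `advisory_schema={"composer": Person}`, adding a computer program (say, a plain string) as the composer of a piece records a warning but still stores the triple, exactly the "advisory, not enforced" behavior the authors argue for.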

I think this is the most impressive Semantic Web application I have seen, although I am also looking forward to testing Twine and Powerset. I have barely touched upon everything that Haystack can do in this blog post, so if you are not yet convinced, have a look at a paper that is pretty similar to the book chapter. The level of customization supported by Haystack reminded me of the Meta-UI concept (which I see as a user interface to manipulate an interactive system or its user interface) as discussed by Coutaz at Tamodia’06.

Although Lifestreams and Haystack would certainly improve the way we manage our data, I feel they both ignore an important type of information: information in the physical world. After all, a substantial amount of the information we process is non-digital. Last year, I had a project proposal for the course Actuele Trends in HCI (translated: “Current trends in HCI“) on improving the way we work with digital and physical information. Given that the students had little time for this project, the result was pretty nice.

Interesting talks coming up

I’m attending two interesting talks this month (together with some colleagues).

Adam Greenfield is coming to Leuven on November 27th. He wrote the book Everyware: the dawning age of ubiquitous computing and gave a keynote at Pervasive this year.

Adam Greenfield

Next Monday, I’m going to Living Tomorrow in Vilvoorde for a session on the Internet of Things. I still have to figure out some issues with the registration though.

Living Tomorrow