Tag: internet of things

Missed a talk by Nicolas Nova in Brussels

I found out a bit too late that Nicolas Nova would be giving a talk at iMAL in Brussels yesterday. Luckily, he always puts his slides online.

Nicolas Nova

The talk also explained his (seemingly random) blog title, “Pasta&Vinegar”: the hybridization of digital and physical environments is explored both by academic researchers (pasta) and by artists and designers (vinegar). At iMAL he talked about why vinegar is important for pasta.

His slides contain lots of interesting and creative ideas, such as blogjects, augmenting animals (e.g. a dog with sensors that controls a WoW character) and a tooth implant that vibrates when you have an incoming call.

If you want to invent something that will be used 10 years from now, who can you observe? Nicolas states that looking at new media, art and design can give us clues. He also explains that art and design are better at conveying people’s desires for the future, and shows a typical diagram from an IT company that is unappealing and too focused on the underlying technology. He finally refers to the use of technology in art; SIGGRAPH’s Emerging Technologies and Art Gallery are good examples of this, and of combining pasta and vinegar.

Making things talk

A few weeks ago I came across a blog post by Cati Vaucelle about Making Things Talk, the new book by Tom Igoe. The book deals with building smart, communicating things. It is structured around specific projects and uses practical examples to explain different technologies. Tom works at NYU ITP (where Adam Greenfield also works).

Through a series of simple projects, this book teaches you how to get your creations to communicate with one another by forming networks of smart devices that carry on conversations with you and your environment. Whether you need to plug some sensors in your home to the Internet or create a device that can interact wirelessly with other creations, Making Things Talk explains exactly what you need.

The book seemed really useful to me for learning how to build smart things and prototype a ubicomp environment. Unfortunately I was never really exposed to electronics, so this might be a good way to catch up. I pointed Kris to the book, and he ordered a copy afterwards. I had a quick look at it, and I must say it is well written and fun to read. You need some hardware to really dive in though.

Making Things Talk

The author uses Processing and Arduino as the basic building blocks. I was pleasantly surprised that the programming environment works perfectly under Mac OS X and GNU/Linux (while it also supports Windows). I would also like to experiment with it at home, for instance to build a remote-controlled mood light. Apparently a Wii Nunchuk is also pretty popular for connecting to an Arduino, as it sports a 3-axis accelerometer, a joystick and two buttons for under $20, and talks over the I2C protocol.
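
To get an idea of what that I2C conversation involves: the Nunchuk reports its state as a 6-byte frame, with the 10-bit accelerometer values split across bytes and the buttons packed into the last byte. Here is a sketch of the decoding step in Python (the frame layout follows the commonly documented Nunchuk protocol; the function and field names are my own, not from the book):

```python
def decode_nunchuk(frame):
    """Decode a 6-byte Wii Nunchuk I2C frame into readable values."""
    if len(frame) != 6:
        raise ValueError("expected a 6-byte frame")
    joy_x, joy_y = frame[0], frame[1]
    b5 = frame[5]
    # Bytes 2-4 hold the upper 8 bits of each 10-bit accelerometer
    # axis; the two low bits of each axis are packed into byte 5.
    accel_x = (frame[2] << 2) | ((b5 >> 2) & 0x03)
    accel_y = (frame[3] << 2) | ((b5 >> 4) & 0x03)
    accel_z = (frame[4] << 2) | ((b5 >> 6) & 0x03)
    # The buttons are active-low: bit 0 = Z, bit 1 = C.
    z_pressed = not (b5 & 0x01)
    c_pressed = not ((b5 >> 1) & 0x01)
    return {
        "joystick": (joy_x, joy_y),
        "accel": (accel_x, accel_y, accel_z),
        "c": c_pressed,
        "z": z_pressed,
    }

# Example: joystick roughly centered, no buttons pressed.
state = decode_nunchuk(bytes([0x80, 0x7F, 0x40, 0x41, 0x42, 0x03]))
```

On an actual Arduino you would request the 6 bytes over the Wire (I2C) library and apply the same bit manipulation in the sketch.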

Adam Greenfield’s talk in Leuven

On Tuesday I went to the talk of Adam Greenfield in Leuven, organized by the Microsoft Research Chair on Intelligent Environments. The main topic of his talk was the social and ethical implications of ubiquitous computing. Adam started his talk by saying that there are a lot of ubicomps. He uses the term everyware to cover Weiserian ubicomp, pervasive computing, tangible media and ambient intelligence. Everyware is free from the baggage of Xerox PARC, free from politics and easy to understand. He defines it as distributed, networked information processing resources that are embedded in the environment. Adam sees everyware as inherently multi-disciplinary. He works at the NYU Interactive Telecommunications Program which hosts both people with an artistic and a technical background.

Everyware

Similar to Bell and Dourish, Greenfield claims that the first stage of ubicomp is already here. Technologies such as RFID and NFC, together with devices such as the iPod and iPhone, are already ubiquitous computing devices. However, the way we interact with them has not yet changed significantly. He says we already have robust information processing in our environment today (and that this can be seen in standards such as IPv6 and in devices such as Proliphix Network Thermostats). An interesting point he made was that when he started giving these talks 18 months ago, most of the examples he used were research prototypes, but now most examples are commercial products. Furthermore, adoption of these new technologies and products is unproblematic: in Hong Kong, 95% of the people between 16 and 65 used the new Octopus RFID metro pass system.

Concerning the consequences of everyware, he referred to the Panopticon, an 18th-century prison design optimized for surveillance. The guards could see the prisoners’ cells at any time, while the prisoners could never see the guards. The prisoners’ default state was to be monitored, so they acted accordingly. The same might happen with everyware that is watching us and sending information out: we may get used to it, and just watch our steps more closely. Here is an example of such a prison (image courtesy of Wikipedia):

Presidio

Greenfield talked about the design of everyware, and referred to Naoto Fukasawa’s notion of “design dissolving in behavior”. It comes down to closely looking at people’s everyday behavior and trying to improve it with a solution that is as simple as possible. Design has to achieve an object “without thought”: people shouldn’t have to think about an object when using it. This also came up during the DIPSO workshop, and is a feature that was lacking from the i_AM table (how do you use it, and what can you do with it?). As an example, Greenfield referred to the Octopus transit system again, which lets you quickly pass through the metro gate with an RFID-tagged metro pass in your pocket or bag, since the reader’s range is large enough to read it as you walk by.

He continued with future issues and mentioned inadvertent, unknowing and unwilling use of everyware. The first issue can occur when you mistakenly publish your location to the whole world instead of to your closest friends. The cost of inadvertent use rises with everyware. Unknowing use might occur when a user walks over a sensor on the ground that recognizes when someone is walking on it, but the user does not know that he can later be identified since we all have a unique walking pattern. Finally, unwilling use can occur when people don’t want to use everyware technology but are forced to do so, e.g. you may need to use an RFID-enabled metro pass to get on the metro in Hong Kong. He also briefly discussed security (when all objects are connected, one object might trigger behavior in another object, e.g. a failure could make your automatic garage door go up and down), and the digital burden of having to deal with your digital traces (Should I pose this query? Can it be traced?).

Finally, Greenfield said it’s time to take everyware seriously and proposed 5 principles to design everyware:

  1. Be harmless
  2. Be self-disclosing
  3. Be conservative of face
  4. Be conservative of time
  5. Be deniable

Harmlessness refers to safety: everyware should always try to ensure users’ safety (physical, psychic, financial). It is graceful degradation taken further. The second principle refers to smart objects announcing their functionality. Greenfield proposes using icons to indicate data collection, support for gestural interfaces, or self-describing objects:

Information collected Gestural interface Self-describing object

The third principle means that everyware should not unnecessarily embarrass, humiliate or shame its users. Society has a necessary membrane of protective hypocrisy, according to Greenfield. Examples include the strict categorizations used on social networking websites such as Flickr (e.g. friends or family), or what happened to Robert Scoble on Facebook a while ago.

Principle 4 refers to everyware not introducing undue complication into ordinary operations (this is what Weiser actually referred to as invisible computing, in my opinion). Everyware should not take over, and should assume that an adult, competent user knows what he or she is doing. Finally, users should always be given the ability to opt out (principle 5), with no penalty other than the inability to use the functionality that the ubicomp system offers. I believe that the last two principles cannot be realized by systems that make all the decisions for the user. An approach such as mixed-initiative interaction might be more appropriate.

Adam also talked about cultural differences in safety and social status between Europe, the US and Asia. For one, in Asia it is common to have talking doors and elevators (which Lode also noticed on his trip to Japan), while this would drive most European people nuts. Everyware is consequently not universal, and should take into account the cultural conventions of the country or region where it is deployed.

All in all, I found the talk very interesting. Although most principles seem obvious, I have seen a fair share of ubicomp systems that violate them. I especially liked his proposal for everyware icons. People are coming up with unique names for Wii gestures which might also help in announcing how to interact with a system.

I really like the Intelligent Environments initiative as it gave me the opportunity to see talks by Donald Norman, Boris De Ruyter (he replaced Emile Aarts) and now Adam Greenfield. The next speaker will be Kevin Warwick, so that promises to be interesting as well (have a look at his homepage or the Wikipedia article about him).

Interesting talks coming up

I’m attending two interesting talks this month (together with some colleagues).

Adam Greenfield is coming to Leuven on November 27th. He wrote the book Everyware: the dawning age of ubiquitous computing and gave a keynote at Pervasive this year.

Adam Greenfield

Next Monday, I’m going to Living Tomorrow in Vilvoorde for a session on the Internet of Things. I still have to figure out some issues with the registration though.

Living Tomorrow

Object recognition with video phones

Andrea Gaggioli blogged about the Pocket Supercomputer by Accenture. The original article was published by NewScientistTech:

Live video footage is fed from the handset to a central server, which rapidly matches on-screen objects to images previously entered into a database. The server then finds relevant information and sends it back to the user (…) The central server uses an algorithm called the Scale-Invariant Feature Transform to match objects. The algorithm uses hundreds or thousands of reference points, corresponding to physical features such as edges, corners or lettering, to find a match. The process works no matter how the object is oriented, but objects must first be carefully imaged and entered into the central database.
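
The core matching step described here boils down to nearest-neighbour search over feature descriptors. A toy Python sketch of the idea, using the ratio test commonly paired with SIFT (this is my own illustration, not Accenture’s actual server code; real systems match 128-dimensional SIFT descriptors against a large indexed database rather than tiny lists):

```python
import math

def distance(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(query_descriptors, db_descriptors, ratio=0.8):
    """Match each query descriptor to its nearest database descriptor.
    A match is kept only if it passes the ratio test: the best
    distance must be clearly smaller than the second-best, which
    filters out ambiguous matches."""
    matches = []
    for qi, q in enumerate(query_descriptors):
        dists = sorted(
            (distance(q, d), di) for di, d in enumerate(db_descriptors)
        )
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

# A query point close to one database point matches; a point
# equidistant from two database points is rejected as ambiguous.
db = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
query = [(0.1, 0.1), (5.0, 5.0)]
result = match(query, db)  # → [(0, 0)]
```

An object is then recognized when enough of its descriptors match the same database image consistently.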

[youtube:http://www.youtube.com/watch?v=PkqUjQj8H3M]

This is certainly a step forward compared to RFID and 2D barcodes such as Semacodes or QR codes. It reminded me of Atom tags, which could recognize existing logos and also used server-side shape analysis and pattern recognition.

[youtube:http://www.youtube.com/watch?v=B_7Yy-zQiRo]

Unlike these two techniques, the existing 2D barcodes are not human-readable.