
Belgian CHI Papers

It’s that time of the year again. This week, the annual CHI conference is taking place in Austin, Texas. While I’m not attending, I do try to follow the program somewhat through Twitter.

CHI is generally considered to be the most prestigious conference on Human-Computer Interaction, with acceptance rates between 20 and 25% (CHI currently has an overall acceptance rate of 23%: 2,930 papers were accepted out of 12,583 submissions). HCI research labs and individual researchers are often compared based on their track record for CHI (or the lack thereof). For example, Professor Jan Borchers of RWTH Aachen maintains a ranking of German universities based on the number of CHI archival publications (full papers or notes) since 2003.

Unfortunately, participation of Belgian universities and companies at CHI tends to be rather limited, especially with respect to these archival publications. Our lab, for example, has had work-in-progress papers accepted before (e.g., Telebuddies), has co-organized CHI workshops (e.g., User interface description languages for next generation user interfaces), and has had people on the organizing committee, but we have not (yet) had a full paper or note accepted at CHI. I must say we don’t submit every year, though. On the other hand, we do publish at more specialized conferences, such as UIST (D-Macs), Pervasive (Situated Glyphs), 3DUI (Vanacken et al.), Tabletop/ITS (FluidPaint [video]), PERCOM (Pervasive Maps), INTERACT (Haesen et al.), MobileHCI (Luyten et al.), AVI (Gummy), and EICS (CAP3). Some of these more specialized conferences are considered to be as competitive and prestigious as CHI. This is especially the case for UIST, but also for CSCW, DIS and Ubicomp & Pervasive (in the related area of ubiquitous computing). Since other scientific disciplines (e.g., physics, biology) focus mostly on journals instead of conferences, some people explicitly mention the importance of top HCI conferences such as CHI and UIST in their resumes.

What about other Belgian universities? In 2009, Dr. David Geerts from CUO (KULeuven) had a full paper accepted to CHI. This was the first CHI archival publication from a Belgian institution in years. At that same edition of CHI, two researchers from Eindhoven University of Technology (TU/e) presented a scientometric analysis of the CHI proceedings up until 2008. Their analysis seems to indicate that the paper by Geerts was only the second Belgian archival paper at CHI ever: Belgium has exactly one credit in the main proceedings up until 2008.

As far as I can tell, this refers to the paper at INTERCHI ‘93 by Jean Vanderdonckt and François Bodart of the Université Catholique de Louvain. Note that INTERCHI ‘93 was in fact a joint INTERACT+CHI conference (it was also the first CHI conference that was held outside North America).

Belgium’s neighbouring countries do a lot better in the analysis: the Netherlands have 17.17 credits, France has 27.03 credits and Germany is a clear winner with a score of 39.74 in the main proceedings. Belgium’s total number of credits per million inhabitants (which includes credits for extended abstracts — non-archival publications) is a bit higher than that of France, though (1.78 vs. 1.34).

Fortunately, the situation seems to be improving. Last year, KULeuven had another 2 archival papers accepted to CHI 2011: a note by Geerts, and a full paper by Karl Gyllstrom. This year, there is a note co-authored by Anand Ramamoorthy from the University of Ghent. Steven Houben, an UHasselt alumnus (and one of my former Master’s thesis students) who is now working on a PhD in Jakob Bardram’s group, got a CHI 2012 full paper accepted too (congrats again, Steven!). Of course, there’s the question of what really constitutes a Belgian CHI paper. Is it enough if the paper is (co-)authored by researchers employed by a Belgian institution, or do the authors have to be Belgian? While Karl Gyllstrom and Anand Ramamoorthy are affiliated with Belgian universities, they are not Belgian citizens (as far as I can tell). On the other hand, while Steven is a Belgian citizen, he is not affiliated with a Belgian university or company.

This made me wonder if there were any other Belgians working abroad who ever co-authored papers at CHI. I could only think of Professor Pattie Maes (VUB alumna) who directs the Fluid interfaces group at MIT Media Lab (she currently has 4 CHI papers according to DBLP). I would love to hear about other people that I might have missed.

To conclude, there is certainly room for improvement, although we’re not doing that badly either. Let’s hope the HCI community in Belgium continues to grow and that Belgium will eventually be as well represented at top HCI venues as our neighbouring countries.

blog@CACM

Apparently, there’s a Communications of the ACM group blog now, called blog@CACM. There is also a blogroll that includes the blog of Daniel Lemire, which happens to be one of my favorite research blogs. Although Daniel works in a different subdiscipline of computer science, I enjoy reading his research advice and his interesting viewpoints on the process of doing research.

The group blog features a post by Tessa Lau, titled Three Misconceptions About Human-Computer Interaction, which raises a few interesting points. In my opinion, HCI is much more fundamental to creating interactive systems than people usually believe. In this context, I would like to refer to an interview with Patrick Baudisch that I recently read, in which he explains how he got started in HCI:

Doantam: How did you get started working on human-computer interaction?

Patrick: Without knowing it. I was a Ph.D. student in Darmstadt, Germany and worked on user interfaces for information filtering systems. A friend of mine saw my work and said “oh, I did not know you were in HCI, too”.

That was the first time I heard of that field.

AVI 2008

Although it’s a bit late (almost a month after the fact), I finally found some time to blog about Advanced Visual Interfaces (AVI) 2008 in Naples, where Jan presented our paper about Gummy.

Gummy title slide at AVI 2008

I liked it very much: the conference had good-quality papers but was still reasonably small (around 150 attendees), and of course the weather and the Italian food were great. We arrived on Tuesday, which gave us some time to explore the city and take the ferry to Capri (a great suggestion by Robbie).

Piazza del Plebiscito

Vesuvius

Capri

I am not going to discuss the conference program in detail this time, but will just highlight a couple of interesting papers. Possibly one of the coolest papers was “Exploring Video Streams using Slit-Tear Visualizations” by Anthony Tang (video). Another presentation I enjoyed was “TapTap and MagStick: Improving One-Handed Target Acquisition on Small Touch-screens” by Anne Roudaut (video). It seems there is lots of related work in this area (e.g. Shift, ThumbSpace, etc.). Peter Brandl presented two interesting papers: Bridging the Gap between Real Printouts and Digital Whiteboards and Combining and Measuring the Benefits of Bimanual Pen and Direct-Touch Interaction on Horizontal Interfaces. He was brave enough to do an impressive live demo for the first paper. Oh, and he also covered the conference in a blog post.

The first paper in our session, titled “A Mixed-Fidelity Prototyping Tool for Mobile Devices” by Marco de Sá, introduced a tool to easily design prototypes and evaluate them in real-life situations. The system was well thought out and serves a real need. I can imagine that we could use this kind of tool in a user-centered UI design course. The second paper in our session was “Model-based Layout Generation” by Sebastian Feuerstack. I had already met Sebastian at CADUI 2006. They presented a generic layout model based on constraints. It reminded me a bit of the layout model Yves worked on for our EIS 2007 paper. They used the Cassowary constraint solver, which I also used for my MSc thesis on constraint-based layouts for UIML. Sebastian told me he got the idea from my demo at CADUI 2006. I had forgotten to add a certain constraint (the layout of the UI was thus underconstrained), which by coincidence had no effect on the user interface every time I tested it. Of course, when I showed the demo it did have an effect. This clearly illustrated that constraint solvers are sometimes unpredictable (see Past, present, and future of user interface software tools by Myers et al.). Sebastian’s solution to this problem was to hide the constraints from the designer and generate them automatically from a graphical layout model.

Juan Manuel Gonzalez Calleros, whom I met at CADUI 2006, TAMODIA 2006 and on a few other occasions, presented a poster and a paper at the workshop on haptics. He took a few pictures while Jan was presenting (thanks again, Juan!). Here are Juan and Jan discussing UsiXML vs UIML:

Jan and Juan discussing UsiXML vs UIML

Overall, the comments on our work were positive, although of course one of the biggest remaining problems is the lack of support for multi-screen interfaces. As Jan is actively hacking on Gummy these days, I don’t think it will take very long for this to be added to the tool.

Back to the future: Smalltalk

I spent some time last weekend looking into Smalltalk again. The first time I did this was somewhere around 2004, when I played around with Ruby and discovered that it was strongly influenced by Smalltalk. Back then I watched an old video by Dan Ingalls on object-oriented programming, which finally made me fully understand the essence of OOP: it’s all about messaging.

[googlevideo:http://video.google.com/videoplay?docid=-2058469682761344178]

In my opinion, this video (or at least the message that Dan tries to communicate) should be better integrated into OOP courses at universities. Another invaluable resource for grasping these ideas is Design Principles Behind Smalltalk, again by Dan Ingalls. Of course, it’s difficult to understand what OOP is about if you have to learn it through a weak implementation. We learned the basics of OOP in C++, for example, which would be blasphemy to Alan Kay. He once said: “Actually I made up the term object-oriented, and I can tell you I did not have C++ in mind.” Here’s his definition of OOP:

OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I’m not aware of them.

Before I looked into Smalltalk, my understanding was that objects just contained a bunch of methods or functions that had access to the object’s context. I did not really grasp the idea that objects simply respond to messages (or method calls, as I would have called them). The real difference is that in Smalltalk messages are dynamically dispatched at runtime: a method is the function or subroutine that is invoked in response to the sending of a message, and it is matched to the message name (or selector) at runtime. In contrast, method calls in C++, Java and C# are statically bound at compile time. Smalltalk thus makes a distinction between the semantics (the message) and the implementation strategy (the method). Decoupling these allows for more flexibility, such as objects that cache all incoming messages until their database connection is fully set up and then replay them, or objects that forward messages to other objects (which might even have been passed in at runtime). This is one of the aspects of extreme late binding in Alan Kay’s definition of OOP.
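
To make the forwarding idea concrete, here is a rough, untested sketch in Squeak-style Smalltalk. The class name ForwardingProxy and its target variable are made up for illustration; doesNotUnderstand: and Message>>sendTo: are standard Smalltalk-80. Any message the proxy does not implement itself lands in doesNotUnderstand: and is simply re-sent to a target object supplied at run time (a real proxy would subclass ProtoObject instead of Object so that even more messages fall through):

    "Class definition; methods are shown in the usual ClassName >> selector convention."
    Object subclass: #ForwardingProxy
        instanceVariableNames: 'target'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'BlogExamples'.

    "Remember the object we forward to."
    ForwardingProxy >> target: anObject
        target := anObject

    "Any message we do not understand is re-sent to the target."
    ForwardingProxy >> doesNotUnderstand: aMessage
        ^ aMessage sendTo: target

    "In a workspace:"
    | proxy |
    proxy := ForwardingProxy new.
    proxy target: OrderedCollection new.
    proxy add: 42.    "add: is not understood by the proxy, so it is forwarded to the collection"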

It’s exactly this run-time lookup of methods that enables effortless polymorphism. As explained in the video, at some point the intermediate factorial result becomes an instance of LargeInteger, while in previous iterations it was an instance of SmallInteger. The multiplication message (*) is sent to this object, after which the correct method in the class LargeInteger is looked up to handle the message, allowing the existing code to continue to work. Java, C# and C++ have all inherited this feature (although C++ requires explicitly declaring methods as virtual for this to work, for efficiency reasons). Smalltalk can even realize polymorphism without inheritance (also known as duck typing), although this is not shown in the video. Smalltalk has implicit interfaces: an object’s interface is the set of messages it responds to. If two objects both respond to a certain message, they are interchangeable (even at runtime). Traditional languages such as Java or C++ only support inheritance-based polymorphism (although something similar to duck typing can be achieved with C++ templates).
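
You can see this run-time lookup at work in any workspace: the snippet below keeps working when the result outgrows SmallInteger, because the method that handles the message is looked up on the receiver at the moment the message is sent (in Squeak the big result is an instance of LargePositiveInteger):

    10 factorial.          "3628800"
    10 factorial class.    "SmallInteger"
    100 factorial class.   "LargePositiveInteger: same code, different receiver class"

Here’s the explanation by Dan Ingalls: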

Polymorphism: A program should specify only the behavior of objects, not their representation.

A conventional statement of this principle is that a program should never declare that a given object is a SmallInteger or a LargeInteger, but only that it responds to integer protocol. Such generic description is crucial to models of the real world. Consider an automobile traffic simulation. Many procedures in such a system will refer to the various vehicles involved. Suppose one wished to add, say, a street sweeper. Substantial amounts of computation (in the form of recompiling) and possible errors would be involved in making this simple extension if the code depended on the objects it manipulates. The message interface establishes an ideal framework for such an extension. Provided that street sweepers support the same protocol as all other vehicles, no changes are needed to include them in the simulation:
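
Translated into code, the idea might look something like the following Squeak-style sketch (the Car and StreetSweeper classes and the moveTo: selector are made up for illustration). The simulation loop only relies on a shared protocol, here a single moveTo: message, and never mentions concrete classes, so a street sweeper can be dropped in without touching or recompiling the existing code:

    Object subclass: #Car
        instanceVariableNames: ''
        classVariableNames: ''
        poolDictionaries: ''
        category: 'TrafficSim'.

    Car >> moveTo: aPosition
        Transcript show: 'Car driving to ', aPosition printString; cr

    Object subclass: #StreetSweeper
        instanceVariableNames: ''
        classVariableNames: ''
        poolDictionaries: ''
        category: 'TrafficSim'.

    StreetSweeper >> moveTo: aPosition
        Transcript show: 'Sweeper crawling to ', aPosition printString; cr

    "The simulation loop never refers to Car or StreetSweeper:"
    | vehicles |
    vehicles := OrderedCollection new.
    vehicles add: Car new; add: StreetSweeper new.
    vehicles do: [:each | each moveTo: 100]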

More details on the differences between Smalltalk and current OOP languages are explained in Smalltalk: Getting The Message. I believe that understanding the original philosophy behind OOP helps you be a better object-oriented programmer in any language. Ramon Leon’s discussion of the common mistake of magic objects is also an interesting read.

But let’s get to the point of why I started looking into Smalltalk again. At the moment, I mostly program in C# (and sometimes in Java), but I often feel frustrated with both languages. After being exposed to Ruby and Python, I feel like static typing requires me to write too much code and helps the compiler more than it helps me. Furthermore, Java seems to be over-engineered with all its factories, managers, readers and writers, while C# is often inconsistent or lacking in its implementation (e.g. anonymous methods are not really closures). Both languages are becoming increasingly complex with the addition of more and more features. Generics, for example, are simply not necessary in a dynamically typed language. The problem with scripting languages such as Ruby and Python, however, is that they are often interpreted and slow. I experimented a bit with JRuby (a Ruby implementation in Java with full access to Java’s class library), but that didn’t satisfy my needs either. After trying to code a simple Hello World Swing application in JRuby, I was stunned that it still required me to wrap code inside an ActionListener like Java does, while I really just wanted to pass in a Ruby block.

Update: Nick Sieger pointed out that a newer version of JRuby does allow blocks to be passed in.

Other people have also been struggling with languages such as Java or C# (e.g. Jamie Zawinski, Mark Miller and Steve Yegge) or are looking for alternatives (e.g. Martin Fowler and Tim Bray). I think the popularity of Ruby might motivate more people to have a look at Smalltalk. Furthermore, if you know Ruby, it’s easier to get acquainted with Smalltalk: besides lots of similarities in the class library (the Kernel class, the times message on numbers, etc.), Ruby already introduces the notion that everything is an object, objects in Ruby communicate through messages, and Ruby has blocks. However, Ruby is not really equivalent to Smalltalk. Ruby introduced extra syntax to be more familiar to people who were used to C-style programming languages, thereby losing part of Smalltalk’s flexibility. In fact, the beauty of Smalltalk is that its entire syntax easily fits on a postcard. If you look closely at that example, even a conditional test in Smalltalk turns out to be implemented with messages to objects: you just send the message ifFalse: to a Boolean object and pass in the code block you want to have executed when the value is false. It’s turtles all the way down.
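
For example, evaluating the following in a Squeak workspace involves nothing but message sends: the Boolean object receives ifTrue:ifFalse: and decides which of the two blocks to evaluate, and even loops are just messages sent to numbers and blocks:

    | n |
    n := 7.
    (n > 5)
        ifTrue: [Transcript show: 'big'; cr]
        ifFalse: [Transcript show: 'small'; cr].

    "Loops are messages too:"
    3 timesRepeat: [Transcript show: 'again'; cr].
    [n > 0] whileTrue: [n := n - 1].    "counts n down to 0"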

Another problem I came across when developing in Java or C# (or in any other OOP language I have used) was the difficulty of changing class hierarchies. Very often, due to time constraints, a design is just left in its original state, and new requirements are supported by performing a quick hack. I suspect this problem is especially prevalent in so-called “research code”. It gets even worse when programming in teams. Although this problem is well known in software engineering and several strategies have been proposed to deal with it, I wondered why the promise of OOP failed here. Wasn’t OOP supposed to improve the situation and make spaghetti code obsolete?

Jeffrey Massung asked himself a similar question (“What if the philosophy (OOP) wasn’t the problem, but the implementation (language) was?”) and decided to write a 2D DirectX game in Smalltalk. It seems Smalltalk did indeed allow for easier design changes. Self (a language derived from Smalltalk) tries to alleviate the aforementioned problem by specializing through cloning of existing objects instead of through class hierarchies. It’s funny to note that the problem was never that bad in Smalltalk to begin with, since you can still easily change the hierarchy there, unlike in languages such as Java or C++.

The real power of Smalltalk is not its syntax, but the entire environment. I believe this is also key to understanding OOP. The current languages and tools (e.g. IDEs) we use for doing object-oriented programming are just weak implementations of the original Smalltalk environment. When working in Smalltalk, you are working in a world of running objects; there are no files or applications, everything is an object. For example, version control systems in Smalltalk are actually aware of the semantics of your code; they are not just text-based. When merging code they can show you which methods have been changed, added or removed and which classes were changed, let you decide which changes you want to keep, and so on. Although I think Bazaar is great, it doesn’t come close to this way of working. Smalltalk also allows live debugging and code changes, which is tremendously useful. Ever wished that you could fix a problem while you’re debugging and immediately check whether your solution works, without having to recompile your application and start the entire process again? In Smalltalk (and Lisp) that’s possible. If you want to find out more about why Smalltalk is way ahead of current mainstream OOP languages, have a look at Ramon Leon’s Why Smalltalk.

Update: Scott Lewis commented that I should have emphasized that Smalltalk is mostly written in Smalltalk: “So when you subclass any object, you can go back up the chain of inherited objects and see how everything works. Likewise when you hit an error/bug, the debugger lets you delve about as deeply as you could possibly want into what is going wrong, and why it is an error.” This is indeed a powerful aspect of Smalltalk, and an example of how it was influenced by Lisp.

Besides reading about Smalltalk, I have also been experimenting a bit with Squeak. Squeak is an open source implementation of the Smalltalk programming language and environment, created by its original designers. Squeak runs bit-identical on many platforms (including Windows CE/PocketPC). I will leave my Squeak experiments for another blog post, though.

To conclude, it seems that we are very good at ignoring the past. We just take our current systems for granted and use them as a reference frame for future innovations. Marshall McLuhan once phrased it like this: “We drive into the future using only our rearview mirror.” I believe this is true in HCI research as well, as people like Dan Olsen have pointed out. He argued that our existing system models are barriers to the inclusion of many of the interactive techniques that have been developed. He gave the example of the recent surge in vision-based systems and multi-touch input devices, which get forced into a standard mouse-point model because that is all our systems support:

Multiple input points and multiple users are all discarded when compressing everything into the mouse/keyboard input model. Lots of good research into input techniques will never be deployed until better systems models are created to unify these techniques for application developers.

Research on toolkits is a lot less popular these days. We try to map everything into existing models, and always feel like we have to support legacy applications, which hampers significant progress. Bill Buxton has also studied innovation in HCI, and questioned the progress we made in the last 20 years.

I think the reason why so much great work was done by the early researchers in our field (e.g. Ivan Sutherland, Douglas Engelbart and Alan Kay) is, besides the fact that they were very creative and intelligent people, that there was not that much previous work: they just had to start from scratch. Alan Kay once asked Ivan Sutherland how it was possible that he had invented computer graphics, built the first object-oriented software system and written the first real-time constraint solver all by himself in one year, to which Sutherland responded: “I didn’t know it was hard.”

How to give a great research talk by MSR

Lode recently blogged about a seminar by Microsoft Research on how to give a great research talk, starring John Krumm, Patrick Baudisch, Rick Szeliski and Mary Czerwinski.

Some other resources I recommend are “How to give a good research talk” by Simon Peyton Jones, and the Presentation Zen blog. These should already provide you with the basics for giving a good (research) talk. Here is what I personally found useful in the Microsoft Research session:

  • Use animations sparingly: animations are only useful to illustrate a process in your system or to make something clearer to the audience. Don’t overdo it. In my opinion, I violated this rule with my EIS 2007 presentation. Some animations were useful, but a lot of them were unnecessary. When I gave part of this presentation to a few other researchers some time after the conference, one of them commented that I should contact George Lucas about the effects and transitions.
  • Use pictures for related work: Patrick argued that a lot of people remember pictures from papers they read, so using a visual representation of the related work is more useful than a list of references.
  • Try to demo the current status of your future work: Rick showed the future-work demo from their Photo Tourism paper that he gave during his talk at SIGGRAPH. This way you give the audience evidence that you’re actively improving upon your work.
  • Tactics to handle rude questions: Mary gave a few tips for dealing with rude questions, such as repeating the question that was posed. This is always useful to show how you have understood it, and it also gives people in the audience a second chance if they did not hear or understand the person who asked the question.

All in all, an interesting seminar; it might be useful to organize something similar at our institute in the future. Thanks to Lode for sharing the link on his blog.