Back to the future: Smalltalk

I spent some time last weekend looking into Smalltalk again. The first time I did this was somewhere around 2004, when I played around with Ruby and discovered that it was strongly influenced by Smalltalk. Back then I watched an old video by Dan Ingalls on object-oriented programming which finally made me fully understand the essence of OOP: it’s all about messaging.

[googlevideo:http://video.google.com/videoplay?docid=-2058469682761344178]

In my personal opinion, this video (or at least the message that Dan tries to communicate) should be better integrated into OOP courses at universities. Another invaluable resource for grasping these ideas is Design Principles Behind Smalltalk, again by Dan Ingalls. Of course, it’s difficult to understand what OOP is about if you have to learn it through a weak implementation. We learned the basics of OOP in C++, for example, which would be blasphemy to Alan Kay. He once said: “Actually I made up the term object-oriented, and I can tell you I did not have C++ in mind.” Here’s his definition of OOP:

OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I’m not aware of them.

Before I looked into Smalltalk, to my understanding objects just contained a bunch of methods or functions that had access to the object’s context. I did not really grasp the idea that objects simply respond to messages (or method calls, in my definition). The real difference is that in Smalltalk messages are dynamically dispatched at runtime. A method is the function or subroutine that is invoked in response to the sending of a message, and it is matched to the message name (or selector) at runtime. In contrast, method calls in C++, Java and C# are statically bound at compile time. There is thus a distinction between the semantics (the message) and the implementation strategy (the method) in Smalltalk. Decoupling these allows for more flexibility, such as objects that cache all incoming messages until their database connection is fully set up, after which they replay these messages, or objects that forward messages to other objects (which might even have been passed in at runtime). This is one of the aspects of extreme late binding in Alan Kay’s definition of OOP.
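Since Ruby dispatches messages at runtime just like Smalltalk, the caching-and-replaying idea can be sketched there. This is a loose illustration, not code from any real library; the class and method names are invented:

```ruby
# A hypothetical proxy that buffers every message it receives until its
# real target is available, then replays them. This only works because
# messages are dispatched at runtime rather than bound at compile time.
class ReplayProxy
  def initialize
    @queue = []
    @target = nil
  end

  # Invoked for any message the proxy itself does not implement.
  def method_missing(selector, *args, &block)
    if @target
      @target.public_send(selector, *args, &block)
    else
      @queue << [selector, args, block]  # buffer the message for later
      nil
    end
  end

  def respond_to_missing?(_selector, _include_private = false)
    true
  end

  # Once the real object is ready, replay the buffered messages in order.
  def connect(target)
    @target = target
    @queue.each { |selector, args, block| @target.public_send(selector, *args, &block) }
    @queue.clear
  end
end

log = ReplayProxy.new
log.push("first")    # buffered: no real collection exists yet
log.push("second")   # buffered
store = []
log.connect(store)   # replays both messages onto the real array
store                # => ["first", "second"]
```

The same `method_missing` hook also covers the forwarding case: once connected, the proxy passes any message straight through to an object it only received at runtime.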

It’s exactly this run-time lookup of methods that enables effortless polymorphism. As explained in the video, at some point the intermediate factorial result becomes an instance of LargeInteger, while in previous iterations it was an instance of SmallInteger. The multiplication message (*) is sent to this object, after which the correct method in the class LargeInteger is looked up to handle the message, allowing the existing code to keep working. Java, C# and C++ have all inherited this feature (although C++ requires explicitly declaring methods as virtual for this to work, for efficiency reasons). Smalltalk can even achieve polymorphism without inheritance (also known as duck typing), although this is not shown in the video. Smalltalk has implicit interfaces: an object’s interface is the set of messages it responds to. If two objects both respond to a certain message, they are interchangeable (even at runtime). Traditional languages such as Java or C++ only support inheritance-based polymorphism (although something similar to duck typing can be achieved with C++ templates). Here’s the explanation by Dan Ingalls:
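Ruby inherited exactly this behavior from Smalltalk: integers silently promote to arbitrary precision once they outgrow a machine word, and `*` is still just a message sent to whatever the receiver happens to be. A naive factorial shows it:

```ruby
# The receiver of * decides how to multiply. Machine-word results and
# arbitrary-precision results respond to the same message, so this naive
# factorial never overflows and never needs to be changed.
def factorial(n)
  (1..n).reduce(1) { |acc, i| acc * i }
end

factorial(5)   # => 120, fits comfortably in a machine word
factorial(30)  # => a 33-digit number, far beyond 64 bits
```

(In Ruby the classes were historically called Fixnum and Bignum, unified into Integer in Ruby 2.4; the promotion is transparent either way.)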

Polymorphism: A program should specify only the behavior of objects, not their representation.

A conventional statement of this principle is that a program should never declare that a given object is a SmallInteger or a LargeInteger, but only that it responds to integer protocol. Such generic description is crucial to models of the real world. Consider an automobile traffic simulation. Many procedures in such a system will refer to the various vehicles involved. Suppose one wished to add, say, a street sweeper. Substantial amounts of computation (in the form of recompiling) and possible errors would be involved in making this simple extension if the code depended on the objects it manipulates. The message interface establishes an ideal framework for such an extension. Provided that street sweepers support the same protocol as all other vehicles, no changes are needed to include them in the simulation.
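Dan’s traffic-simulation scenario maps directly onto Ruby’s duck typing. The class names below are invented for illustration; the point is that the three classes share no superclass, only an implicit protocol:

```ruby
# Each class responds to the same implicit "vehicle protocol" (here just
# #move) without sharing any superclass or declared interface.
class Car
  def move; "car drives forward"; end
end

class Bicycle
  def move; "bicycle pedals forward"; end
end

# Adding a street sweeper requires no changes to the simulation code:
class StreetSweeper
  def move; "sweeper cleans the street"; end
end

# The simulation specifies only behavior (respond to #move),
# never representation (be a subclass of Vehicle).
def simulate(vehicles)
  vehicles.map { |v| v.move }
end

simulate([Car.new, Bicycle.new, StreetSweeper.new])
```

Nothing is recompiled and no hierarchy is touched when the street sweeper joins the simulation; responding to the message is the whole contract.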

More details on the differences between Smalltalk and current OOP languages are explained in Smalltalk: Getting The Message. I believe that understanding the original philosophy behind OOP helps you be a better object-oriented programmer in any language. Ramon Leon’s discussion of the common mistake of magic objects is also an interesting read.

But let’s get to the point of why I started looking into Smalltalk again. At the moment, I mostly program in C# (and sometimes in Java), but I often feel frustrated with both languages. After being exposed to Ruby and Python, I feel like static typing requires me to write too much code and helps the compiler more than it helps me. Furthermore, Java seems to be over-engineered with all its factories, managers, readers and writers, while C# is often inconsistent or lacking in its implementation (e.g. anonymous methods are not really closures). Both languages are becoming increasingly complex with the addition of more and more features. Generics, for example, are simply not necessary in a dynamically typed language. The problem with scripting languages such as Ruby and Python, however, is that they are often interpreted and slow. I experimented a bit with JRuby (a Ruby implementation in Java with full access to Java’s class library), but that didn’t satisfy my needs either. After trying to code a simple Hello World Swing application in JRuby, I was stunned that it still required me to wrap code inside an ActionListener like Java does, while I really just wanted to pass in a Ruby block.

Update: Nick Sieger pointed out that a newer version of JRuby does allow blocks to be passed in.

Other people have also been struggling with languages such as Java or C# (e.g. Jamie Zawinski, Mark Miller and Steve Yegge) or are looking for alternatives (e.g. Martin Fowler and Tim Bray). I think the popularity of Ruby might motivate more people to have a look at Smalltalk. Furthermore, if you know Ruby, it’s easier to get acquainted with Smalltalk. Besides lots of similarities in the class library (the Kernel class, the times message on numbers, etc.), Ruby already introduces the notion that everything is an object, objects in Ruby communicate through messages, and Ruby has blocks. However, Ruby is not really equivalent to Smalltalk. Ruby introduced extra syntax to be more familiar to people who were used to C-style programming languages, thereby losing part of Smalltalk’s flexibility. In fact, the beauty of Smalltalk is that its entire syntax easily fits on a postcard. If you look closely at this example, even a conditional test in Smalltalk is implemented using messaging on objects. You just send the message ifFalse to an instance of the class Boolean, and pass in a code block you want to have executed when the value is false. It’s turtles all the way down.
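To get a feel for those turtles, here is a rough Ruby sketch of conditionals as messages. The class and method names are invented (real Smalltalk spells the message ifFalse: and dispatches on the True and False classes); no built-in `if` appears anywhere:

```ruby
# Smalltalk-style booleans: each boolean object decides for itself
# whether to evaluate the block it receives. The "conditional" is
# nothing more than dynamic dispatch on the receiver's class.
class SmalltalkTrue
  def if_false; nil; end     # true ignores the block entirely
  def if_true;  yield; end   # true evaluates the block
end

class SmalltalkFalse
  def if_false; yield; end   # false evaluates the block
  def if_true;  nil;   end   # false ignores the block
end

t = SmalltalkTrue.new
f = SmalltalkFalse.new
f.if_false { "ran" }   # => "ran"
t.if_false { "ran" }   # => nil (block never evaluated)
```

Because blocks are first-class objects, the branch not taken is simply a block that never gets the value message; there is no special-case syntax underneath.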

Another problem I came across when developing in Java or C# (or in any other OOP language I used) was the difficulty of changing class hierarchies. Very often, due to time constraints, a design is just left in its original state, and the new requirements are supported by performing a quick hack. I suspect this problem is especially prevalent in so-called “research code”. It gets even worse when programming in teams. Although this problem is generally known in software engineering and several strategies have been proposed to deal with it, I wondered why the promise of OOP failed here. Wasn’t OOP supposed to improve the situation and make spaghetti code obsolete?

Jeffrey Massung asked himself a similar question: what if the philosophy (OOP) wasn’t the problem, but the implementation (the language) was? He decided to write a 2D DirectX game in Smalltalk, and it seems Smalltalk did indeed allow for easier design changes. Self (a language derived from Smalltalk) tries to alleviate the aforementioned problem by specializing existing objects through cloning instead of through class hierarchies. It’s funny to note that the problem wasn’t that bad in Smalltalk to begin with, since you can still easily change the hierarchy there, unlike in languages such as Java or C++.
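Self’s clone-and-specialize style can be loosely approximated in Ruby with singleton methods, since `clone` copies an object’s singleton class along with it. This is only a sketch of the idea, not how Self actually works internally:

```ruby
# Prototype-style specialization: instead of defining a subclass, clone
# an existing object and redefine behavior on the copy alone. No class
# hierarchy is created or modified.
vehicle = Object.new
def vehicle.describe; "a generic vehicle"; end

sweeper = vehicle.clone            # clone copies singleton methods too
def sweeper.describe; "a street sweeper"; end   # specialize the copy only

vehicle.describe  # => "a generic vehicle" (the prototype is untouched)
sweeper.describe  # => "a street sweeper"
```

Because specialization happens per object rather than per class, there is no hierarchy to get wrong up front, which is exactly the pressure Self was designed to relieve.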

The real power of Smalltalk is not its syntax, but the entire environment. I believe this is also key to understanding OOP. The current languages and tools (e.g. IDEs) we use for object-oriented programming are just weak implementations of the original Smalltalk environment. When working in Smalltalk, you are working in a world of running objects; there are no files or applications, everything is an object. For example, version control systems in Smalltalk are actually aware of the semantics of your code; they are not just text-based. When merging code they can show you which methods have been changed, added or removed, which classes were changed, let you decide which changes you want to keep, etc. Although I think Bazaar is a great tool, it doesn’t come close to this way of working. Smalltalk also allows live debugging and code changes, which is tremendously useful. Ever wished that you could fix a problem while you’re debugging and immediately check whether your solution works, without having to recompile your application and start the entire process again? In Smalltalk (and Lisp) that’s possible. If you want to find out more about why Smalltalk is way ahead of current mainstream OOP languages, have a look at Ramon Leon’s Why Smalltalk.

Update: Scott Lewis commented that I should have emphasized that Smalltalk is mostly written in Smalltalk: “So when you subclass any object, you can go back up the chain of inherited objects and see how everything works. Likewise when you hit an error/bug, the debugger lets you delve about as deeply as you could possibly want into what is going wrong, and why it is an error.” This is indeed a powerful aspect of Smalltalk, and an example of how it was influenced by Lisp.

Besides reading about Smalltalk, I have also been experimenting a bit with Squeak. Squeak is an open source implementation of the Smalltalk programming language and environment, created by its original designers. Squeak runs bit-identical on many platforms (including Windows CE/PocketPC). I will leave my Squeak experiments for another blog post though.

To conclude, it seems that we are very good at ignoring the past. We just take our current systems for granted, and use them as a reference frame for future innovations. Marshall McLuhan once phrased it like this: “We drive into the future using only our rearview mirror.” I believe this is true in HCI research as well, as people like Dan Olsen have pointed out. He argued that our existing system models are barriers to the inclusion of many of the interactive techniques that have been developed. He gave the example of the recent surge in vision-based systems and multi-touch input devices, which get forced into a standard mouse-pointer model because that is all our systems support:

Multiple input points and multiple users are all discarded when compressing everything into the mouse/keyboard input model. Lots of good research into input techniques will never be deployed until better systems models are created to unify these techniques for application developers.

Research on toolkits is a lot less popular these days. We try to map everything onto existing models, and always feel like we have to support legacy applications, which hampers significant progress. Bill Buxton has also studied innovation in HCI, and questioned the progress we have made in the last 20 years.

I think the reason why so much great work was done by the early researchers in our field (e.g. Ivan Sutherland, Douglas Engelbart and Alan Kay) is that, besides being very creative and intelligent people, they had hardly any previous work to build on: they just had to start from scratch. Alan Kay once asked Ivan Sutherland how it was possible that he had invented computer graphics, done the first object-oriented software system and the first real-time constraint solver all by himself in one year. Sutherland responded: “I didn’t know it was hard.”

OneNote: a hidden Microsoft Office gem

Last week I discovered Microsoft OneNote 2007, and I am (honestly) impressed. Actually, the first time I ever heard of OneNote was when I read the FAQ of InkSeine.

OneNote 2007

Here’s part of the product description:

Office OneNote 2007 is a digital notebook that provides people one place to gather their notes and information, powerful search to find what they are looking for quickly, and easy-to-use shared notebooks so that they can manage information overload and work together more effectively.

I have been using a combination of Gmail, Google Calendar, Google Docs, Google Notebook and del.icio.us to organize (and capture) information (partially inspired by this setup). The big problem here was synchronization. I ended up copying URLs to Google Notebook since I would never be confronted with them again if I stored them in del.icio.us. There were no explicit links between meetings, documents, and other resources (e.g. websites or short notes in notebooks). I would add gadgets for each of these apps to my iGoogle page to keep an overview. Although I could cope with this setup, it was not ideal. Gmail and Google Calendar are great services which I still love to use, but for quick notes and jotting down ideas I often resorted to paper notes.

Although OneNote is not perfect either, combined with a laptop (or tablet PC) it has the potential to eliminate most paper note taking. To get a good overview of what’s possible with OneNote, have a look at these resources:

A lot of people have been impressed with OneNote and have blogged about it.

Here are a few of OneNote’s features that I like:

  • text search in images and audio
  • audio and video recording with synchronized notes
  • shared notebooks
  • embedding any file as a printout
  • screen clippings
  • the ability to write and draw anywhere on a page
  • tags (e.g. todo, important, question, etc.)
  • calculator support
  • inking support (if only I had a tablet PC)

Here’s an example of how I used OneNote to summarize an intuitive explanation of Bayesian Reasoning by Eliezer Yudkowsky:

Intuitive Explanation of Bayesian Reasoning in OneNote

Gummy UI improvements

I am currently working together with Jan on improving the Gummy tool (website still under construction). We have come a long way since Jan wrote the first version for his Master’s thesis. I figured it might be interesting to share a few screenshots of different phases in the development:

Here’s the first version (June 2007):

Gumme screenshot of June 2007

A version with roughly the same UI but lots of architectural improvements (November 2007):

Gummy around the end of 2007

The current version with an improved UI (March 2008):

Gummy screenshot of March 2008

I think the most significant improvement is the new toolbox. Designers can now more easily distinguish between different widgets. In previous versions, some widgets were hard to distinguish. We added labels to each widget in the toolbox, and improved the inline rendering of the widgets. Notice that these images are still not predefined icons, but real widgets rendered to a bitmap. However, in the current version we chose to render them at their optimal size and then scale the images down.

Rendering widgets to bitmaps also forced us to migrate to Windows. We started our development on GNU/Linux using Mono but had to switch due to an annoying bug in Mono’s implementation of Control.DrawToBitmap. Future versions should be able to run in Mono again when this bug is fixed though.

Update: Jan sent me a screenshot of an even older version of Gummy, dated back to December 19, 2006:

Gummy screenshot of December 19, 2006

Job interviews and the attraction of excellence

Steve Yegge (known amongst others for porting Ruby on Rails to JavaScript) has a great article on job interviews. Although these guidelines are targeted specifically towards landing a job at Google, they are probably applicable for most technology companies. Certainly an interesting read …

Steve used to work for Amazon (he is at Google now), and has blogged before about Google’s recruiting strategy. He wrote about the importance of word-of-mouth advertising (“you can’t fake being cool”) and how Google has turned the whole recruitment process around:

The term “recruiting” implies that you’re going out and looking for people, and trying to convince them to come work for you. Google has managed to turn the process around. Smart people now make the pilgrimage to Google, and Google spends the bulk of their time turning great people away.

There have been many stories on Google’s unique mix between the atmosphere of graduate school and a startup, although there was also a more neutral testimony from a former employee who now works at Microsoft.

In summary, it seems that one of the most important factors in being able to hire smart people is “excellence”: being able to work in a stimulating environment with smart people who are good at what they do. In my opinion this is true for research and academia as well. I guess most researchers wouldn’t turn down offers from the major labs that publish at the main conferences in their field year after year. Well, at least if it’s feasible to work there; e.g. moving to another country might be a problem.

As an example from the academic world, have a look at this promotional video for Microsoft Research:

At about 2 minutes and 30 seconds, Bill Buxton mentions that at MSR Cambridge he got to have lunch with his “hero”, Turing Award winner Tony Hoare (the inventor of Quicksort), and goes on to say that “the history of computer science is walking down the corridors”. The movie ends with a listing of all the awards and honors MSR’s employees have received.

Full paper on Gummy accepted at AVI 2008

Our hard work before the holidays has paid off! We just heard that our full paper submission for AVI 2008 has been accepted.

Gummy

Jan Meskens, Jo Vermeulen, Kris Luyten and Karin Coninx. Gummy for Multi-Platform User Interface Designs: Shape me, Multiply me, Fix me, Use me. To appear in Proceedings of AVI ’08, the working conference on Advanced visual interfaces, Napoli, Italy, May 28-30, 2008.

In this paper we introduce a multi-platform user interface design approach, and Gummy, a design tool to support that approach. This work originated from Jan Meskens’ Master’s thesis, in which he created a UIML GUI builder. While there are several tools for developing multi-platform user interfaces, they have a number of problems: (1) the resulting user interfaces often lack the aesthetic quality of manually designed interfaces; (2) the tools are not intuitive, since designers have to deal with abstractions and do not directly manipulate the user interface design; and (3) designers cannot accurately predict what the resulting user interface will look like. Our goal was to allow designers to reuse their skills from existing user interface design tools (such as GUI builders) as much as possible, and to maintain a high level of fidelity (unlike sketch-based design tools).

Gummy design process

We also had a short paper/poster about Gummy accepted to CHI 2008 Work-in-Progress. In this paper we explain how the tool can be used to involve domain experts in the user interface design process.

Gummy domain expert workspace

Kris Luyten, Jan Meskens, Jo Vermeulen and Karin Coninx. Meta-GUI-Builders: Generating Domain-specific Interface Builders for Multi-Device User Interface Creation. To appear in CHI ’08 extended abstracts on Human factors in computing systems, Florence, Italy, April 5-10, 2008.

We received lots of input on the prototypes and early drafts of the papers, so thanks to everyone at our lab who contributed in one way or another. Additional thanks go to Karel Robert for creating the Gummy logo (have a look at his portfolio).

More information about the papers can be found at my publications page.