Tag: research

Job interviews and the attraction of excellence

Steve Yegge (known, among other things, for porting Ruby on Rails to JavaScript) has a great article on job interviews. Although these guidelines are targeted specifically at landing a job at Google, they probably apply to most technology companies. Certainly an interesting read …

Steve used to work for Amazon (he is at Google now), and has blogged before about Google’s recruiting strategy. He wrote about the importance of word-of-mouth advertising (“you can’t fake being cool”) and how Google has turned the whole recruitment process around:

The term “recruiting” implies that you’re going out and looking for people, and trying to convince them to come work for you. Google has managed to turn the process around. Smart people now make the pilgrimage to Google, and Google spends the bulk of their time turning great people away.

There have been many stories about Google’s unique mix of graduate-school and startup atmosphere, although there was also a more neutral testimony from a former employee who now works at Microsoft.

In summary, it seems that one of the most important factors in being able to hire smart people is “excellence”: being able to work in a stimulating environment with smart people who are good at what they do. In my opinion this is true for research and academia as well. I guess most researchers wouldn’t turn down offers from the major labs that publish at the main conferences in their field year after year. Well, at least if it’s feasible to work there; moving to another country, for example, might be a problem.

As an example from the academic world, have a look at this promotional video for Microsoft Research:

At about 2 minutes and 30 seconds, Bill Buxton mentions that at MSR Cambridge he got to have lunch with his “hero”, Turing Award winner Tony Hoare (the inventor of Quicksort), and goes on to say that “the history of computer science is walking down the corridors”. The movie ends with a listing of all the awards and honors MSR’s employees have received.

Full paper on Gummy accepted at AVI 2008

Our hard work before the holidays has paid off: we just heard that our full paper submission for AVI 2008 has been accepted.

Gummy

Jan Meskens, Jo Vermeulen, Kris Luyten and Karin Coninx. Gummy for Multi-Platform User Interface Designs: Shape me, Multiply me, Fix me, Use me. To appear in Proceedings of AVI ’08, the working conference on Advanced visual interfaces, Napoli, Italy, May 28-30, 2008.

In this paper we introduce a multi-platform user interface design approach, and Gummy, a design tool to support that approach. This work originated from Jan Meskens’ Master’s thesis, in which he created a UIML GUI builder. While there are several tools for developing multi-platform user interfaces, they suffer from a number of problems: (1) the resulting user interfaces often lack the aesthetic quality of manually designed interfaces; (2) the tools are not intuitive, since designers have to deal with abstractions and do not directly manipulate the user interface design; and (3) designers cannot accurately predict what the resulting user interface will look like. Our goal was to let designers reuse their skills with existing user interface design tools (such as GUI builders) as much as possible, while maintaining a high level of fidelity (unlike sketch-based design tools).

Gummy design process

We also had a short paper/poster about Gummy accepted to CHI 2008 Work-in-Progress. In this paper we explain how the tool can be used to involve domain experts in the user interface design process.

Gummy domain expert workspace

Kris Luyten, Jan Meskens, Jo Vermeulen and Karin Coninx. Meta-GUI-Builders: Generating Domain-specific Interface Builders for Multi-Device User Interface Creation. To appear in CHI ’08 extended abstracts on Human factors in computing systems, Florence, Italy, April 5-10, 2008.

We received lots of input on the prototypes and early drafts of the papers, so thanks to everyone at our lab who contributed in one way or another. Additional thanks go to Karel Robert for creating the Gummy logo (have a look at his portfolio).

More information about the papers can be found at my publications page.

How to give a great research talk by MSR

Lode recently blogged about a seminar by Microsoft Research on how to give a great research talk, starring John Krumm, Patrick Baudisch, Rick Szeliski and Mary Czerwinski.

Some other resources I recommend are “How to give a good research talk” by Simon Peyton Jones, and the Presentation Zen blog. These should already provide you with the basics for giving a good (research) talk. Here is what I personally found useful in the Microsoft Research session:

  • Use animations sparingly: animations are only useful to illustrate a process in your system, or to make something clearer to the audience. Don’t overdo it. In my opinion, I violated this rule with my EIS 2007 presentation. Some animations were useful, but a lot of them were unnecessary. When I gave part of this presentation to a few other researchers some time after the conference, one of them commented that I should contact George Lucas about the effects and transitions.
  • Use pictures for related work: Patrick argued that a lot of people remember pictures from papers they read, so using a visual representation of the related work is more useful than a list of references.
  • Try to demo the current status of your future work: Rick showed the future-work demo from his SIGGRAPH talk on their photo tourism paper. This way you give the audience evidence that you’re actively improving upon your work.
  • Tactics to handle rude questions: Mary gave a few tips for dealing with rude questions, such as repeating the question that was posed. Repeating the question is always useful: it shows how you understood it, and it gives people in the audience a second chance if they did not hear or understand the person who asked it.

All in all an interesting seminar; it might be useful to organize something similar at our institute in the future. Thanks to Lode for sharing the link on his blog.

Anniversary lecture by Gerard ‘t Hooft @UHasselt

On Wednesday I went to one of our university’s anniversary lectures (celebrating its 35th anniversary) by Professor Gerard ‘t Hooft. Professor ‘t Hooft is a theoretical physicist who received the Nobel Prize in Physics for “elucidating the quantum structure of electroweak interactions in physics”.

The lecture was very entertaining and interesting. He started with the physics of very small, elementary particles (and how much smaller we can go) which he later linked to the physics underlying very large objects and the universe. He used fractals (more specifically the Mandelbrot set) as an analogy for this idea (self-similarity under magnification).
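For those unfamiliar with the analogy: the Mandelbrot set is defined by repeatedly applying z → z² + c and checking whether the orbit stays bounded; zooming into its boundary reveals ever more similar structure. A minimal sketch of the membership test (my own illustration, not from the lecture):

```python
# Mandelbrot membership test: c is in the set if the orbit of 0
# under z -> z^2 + c stays bounded (|z| <= 2); max_iter caps the check.

def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0 + 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # escaped: c is certainly outside the set
            return False
    return True

print(in_mandelbrot(0))  # True: the orbit of c = 0 stays at 0
print(in_mandelbrot(1))  # False: 0, 1, 2, 5, ... escapes quickly
```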

There was a brief discussion of the Large Hadron Collider (LHC) at CERN, a particle accelerator that will likely result in the discovery of the Higgs boson. Here is an annotated picture of the LHC’s underground tunnel (with a circumference of about 27 km):

Large Hadron Collider

‘t Hooft also discussed string theory, which says that the building blocks of our universe are one-dimensional extended objects called strings, rather than zero-dimensional point particles. Here is String Ducky, a prize-winning video explaining string theory in two minutes:

Finally, he discussed the uncertainties physicists are currently dealing with, including the fact that there might be many dimensions in our universe (as string theory indicates). A good explanation of this is given in this video (just ignore the spiritual ponderings in the subtitles):

http://www.youtube.com/watch?v=yzMEAkI-yrQ

Having recently read the book “Surely You’re Joking, Mr. Feynman!”, I recognized a few of the characters from Feynman’s stories during Professor ‘t Hooft’s talk. One of them was Murray Gell-Mann, of whom I found an interesting talk on beauty and truth in physics at TED last year:

Since I have always been interested in physics, I really enjoyed this talk. It also made me very humble, as I realized that our field of research is of an entirely different nature than theoretical physics.

I am looking forward to another interesting anniversary talk by Ingrid Daubechies in May. She is a full professor at Princeton and is mainly known for her work on wavelets in image compression. Apparently, her roots lie in the town where I currently live.

SmartKom

At Ubicomp 2007, there was a book stand by Springer just outside the conference room. On the last day, the volunteer behind the stand told me that I could choose one of the books that were still lying there. I didn’t see anything interesting at first, but since a few people at our institute are working on multimodal systems, I picked the book SmartKom: Foundations of Multimodal Dialogue Systems.

SmartKom book

During the holidays, I read the first part of the book and noticed that it was relevant for me after all. SmartKom was a large four-year project on multimodal dialogue systems. The project developed a system that provides symmetric multimodality in a mixed-initiative dialogue system with an embodied conversational agent. There is also a follow-up project that was scheduled to end in 2007: SmartWeb. SmartWeb goes beyond SmartKom in supporting open-domain question answering using the entire (Semantic) Web as its knowledge base.

Symmetric multimodality means that every input mode (e.g. speech, gesture, facial expression) is also available for output, and vice versa. Multimodal interaction is one way to make interaction between humans and computers more intuitive. Human dialogue is not only based on speech but also on nonverbal communication such as gesture, gaze, facial expression, and body posture. One of the major characteristics of human-human interaction is the coordinated use of different modalities (e.g. allowing all modalities to refer to or depend upon each other). Symmetric multimodality combined with a mixed-initiative conversational agent results in more intuitive interaction. The SmartKom system reduces recognition errors through modality fusion: by considering multiple input modalities together (e.g. speech, facial expression and gesture), the system can more accurately estimate the user’s intention.

SmartKom has been used in several application scenarios: in public telephone booths, home entertainment systems, mobile systems and in a car environment. The last part of the book discusses techniques to evaluate multimodal dialogue systems, which should be an interesting read.