Category: Uncategorized

How to give a great research talk by MSR

Lode recently blogged about a seminar by Microsoft Research on how to give a great research talk, starring John Krumm, Patrick Baudisch, Rick Szeliski and Mary Czerwinski.

Some other resources I recommend are “How to give a good research talk” by Simon Peyton Jones, and the Presentation Zen blog. These should already provide you with the basics for giving a good (research) talk. Here is what I personally found useful in the Microsoft Research session:

  • Use animations sparingly: animations are only useful to illustrate a process in your system, or to make something clearer to the audience. Don’t overdo it. In my opinion, I violated this rule with my EIS 2007 presentation. Some animations were useful, but a lot of them were unnecessary. When I gave part of this presentation to a few other researchers some time after the conference, one of them commented that I should contact George Lucas about the effects and transitions.
  • Use pictures for related work: Patrick argued that a lot of people remember pictures from papers they read, so using a visual representation of the related work is more useful than a list of references.
  • Try to demo the current status of your future work: Rick showed the future-work demo from their photo tourism paper, which he had given during his talk at SIGGRAPH. This way you give the audience evidence that you’re actively improving upon your work.
  • Tactics for handling rude questions: Mary gave a few tips for dealing with rude questions, such as repeating the question that was posed. Repeating the question is always useful to show how you have understood it. Furthermore, it gives people in the audience a second chance if they did not understand the person who posed the question.

All in all, an interesting seminar; it might be useful to organize something similar at our institute in the future. Thanks to Lode for sharing the link on his blog.

Anniversary lecture by Gerard ‘t Hooft @UHasselt

On Wednesday I went to one of our university’s anniversary lectures (celebrating its 35-year existence) by Professor Gerard ‘t Hooft. Professor ‘t Hooft is a theoretical physicist who received the Nobel Prize in Physics for “elucidating the quantum structure of electroweak interactions in physics”.

The lecture was very entertaining and interesting. He started with the physics of very small, elementary particles (and how much smaller we can go) which he later linked to the physics underlying very large objects and the universe. He used fractals (more specifically the Mandelbrot set) as an analogy for this idea (self-similarity under magnification).

There was a brief discussion of the Large Hadron Collider (LHC) at CERN, a particle accelerator that will likely result in the discovery of the Higgs boson. Here is an annotated picture of the LHC’s underground tunnel (with a circumference of about 27 km):

Large Hadron Collider

‘t Hooft also discussed string theory, which holds that the building blocks of our universe are one-dimensional extended objects called strings, rather than zero-dimensional point particles. Here is String Ducky, a prize-winning video explaining string theory in two minutes:

Finally, he discussed the uncertainties physicists are currently dealing with, including the fact that there might be many dimensions in our universe (as string theory indicates). A good explanation of this is given in this video (just ignore the spiritual ponderings in the subtitles):

http://www.youtube.com/watch?v=yzMEAkI-yrQ

Having recently read the book “Surely You’re Joking, Mr. Feynman!”, I recognized a few of the characters who featured in Feynman’s stories during Professor ‘t Hooft’s talk. One of them was Murray Gell-Mann, of whom I found an interesting TED talk on beauty and truth in physics from last year:

Since I have always been interested in physics, I really enjoyed this talk. It also made me very humble, as I realized that our field of research is of an entirely different nature than theoretical physics.

I am looking forward to another interesting anniversary talk by Ingrid Daubechies in May. She is a full professor at Princeton and is mainly known for her work on wavelets in image compression. Apparently, her roots lie in the town where I currently live.

SmartKom

At Ubicomp 2007, there was a book stand by Springer just outside the conference room. On the last day, the volunteer behind the stand told me that I could choose one of the books that were still lying there. I didn’t see anything interesting at first. Since a few people at our institute are working on multimodal systems, I picked the book SmartKom: Foundations of Multimodal Dialogue Systems.

SmartKom book

During the holidays, I read the first part of the book and noticed that it was relevant for me after all. SmartKom was a large four-year project on multimodal dialogue systems. The project developed a system that provides symmetric multimodality in a mixed-initiative dialogue system with an embodied conversational agent. There is also a follow-up project, SmartWeb, which was scheduled to end in 2007. SmartWeb goes beyond SmartKom in supporting open-domain question answering using the entire (Semantic) Web as its knowledge base.

Symmetric multimodality means that every input mode (e.g. speech, gesture, facial expression) is also available for output, and vice versa. Multimodal interaction is one way to make interaction between humans and computers more intuitive. Human dialogue is not only based on speech but also on nonverbal communication such as gesture, gaze, facial expression, and body posture. One of the major characteristics of human-human interaction is the coordinated use of different modalities (e.g. allowing all modalities to refer to or depend upon each other). Symmetric multimodality combined with a mixed-initiative conversational agent results in more intuitive interaction. The SmartKom system reduces recognition errors through modality fusion: by considering multiple input modalities together (e.g. speech, facial expression and gesture), the system can more accurately estimate the user’s intention.
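To make the idea of modality fusion concrete, here is a minimal late-fusion sketch in Python. This is purely illustrative and not SmartKom’s actual fusion algorithm (the book describes far more sophisticated techniques); the function name, intents and weights are my own assumptions. Each modality produces a confidence score per candidate intent, and fusion combines them into a joint estimate.

```python
def fuse_modalities(hypotheses, weights=None):
    """Combine per-modality intent scores by weighted averaging.

    hypotheses: dict mapping modality name -> {intent: confidence}
    weights:    optional dict mapping modality name -> weight
    """
    weights = weights or {m: 1.0 for m in hypotheses}
    total = sum(weights[m] for m in hypotheses)
    fused = {}
    for modality, scores in hypotheses.items():
        for intent, conf in scores.items():
            fused[intent] = fused.get(intent, 0.0) + weights[modality] * conf
    # Normalize so the fused scores sum to 1 (assuming each
    # modality's scores already sum to 1).
    return {intent: score / total for intent, score in fused.items()}

# Speech alone is ambiguous, but a pointing gesture disambiguates:
speech  = {"select_cinema": 0.5, "select_restaurant": 0.5}
gesture = {"select_cinema": 0.9, "select_restaurant": 0.1}
result = fuse_modalities({"speech": speech, "gesture": gesture})
# The fused estimate now clearly favors "select_cinema".
```

The point of the example is only that evidence from one modality can compensate for uncertainty in another, which is why combined estimation yields fewer recognition errors than any single channel alone.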

SmartKom has been used in several application scenarios: in public telephone booths, home entertainment systems, mobile systems and in a car environment. The last part of the book discusses techniques to evaluate multimodal dialogue systems, which should be an interesting read.

Re: 3 to see

Apparently, Lode threw me a stick. So here is my contribution.

Some hackers got Linux running on the Nintendo Wii:

http://www.youtube.com/watch?v=H5YB1Mmx7E4

The new Apple Macbook Air features a multi-touch trackpad, which you can see in action here:

And finally, a funny video about Bill Gates’ last day at Microsoft starring amongst others Matthew McConaughey, Jay-Z, Bono, Steven Spielberg, George Clooney, Hillary Clinton and Al Gore:

Unfortunately, I ran out of bloggers to pass this on to. Maybe it’s time to push a few people in my office to start a blog as well.


Update: I might as well pass the stick to Takis, as it seems he has picked up blogging again.

Reality-Based Interaction

Kris pointed me to an interesting CHI 2008 paper: Reality-Based Interaction: A Framework for Post-WIMP Interfaces by R.J.K. Jacob, A. Girouard, L.M. Hirshfield, M.S. Horn, O. Shaer, E.S. Treacy, and J. Zigelbaum.

Abstract:

We are in the midst of an explosion of emerging human-computer interaction techniques that redefine our understanding of both computers and interaction. We propose the notion of Reality-Based Interaction (RBI) as a unifying concept that ties together a large subset of these emerging interaction styles. Based on this concept of RBI we provide a framework that can be used to understand, compare, and relate current paths of recent HCI research as well as to analyze specific interaction designs. We believe that viewing interaction through the lens of RBI offers both explanatory and generative power. It provides insights for design, uncovers gaps or opportunities for future research, and leads to the development of improved evaluation techniques.

The paper discusses, among other things, the results of a CHI 2006 workshop on the next generation of HCI. The authors provide a framework for classifying, comparing, and evaluating new interaction styles. The framework concentrates on four themes used in these emerging interaction styles:

  • Naïve Physics: people have common sense knowledge about the physical world.
  • Body Awareness & Skills: people have an awareness of their own physical bodies and possess skills for controlling and coordinating their bodies.
  • Environment Awareness & Skills: people have a sense of their surroundings and possess skills for negotiating, manipulating, and navigating within their environment.
  • Social Awareness & Skills: people are generally aware of others in their environment and have skills for interacting with them.

These four themes are clarified by the accompanying picture:

Reality-Based Interaction

The workshop proceedings should be interesting as well, with an impressive list of participants (Hiroshi Ishii, Ben Shneiderman, Steven Feiner, George Fitzmaurice, Desney Tan, Brygg Ullmer and Andy Wilson, among others).

This framework can be used to evaluate the “intuitiveness” of new interaction methods by measuring the extent to which they draw on knowledge and skills from the real world.
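As a thought experiment, such an evaluation could be organized as a simple profile over the four themes. The sketch below is entirely my own construction, not something from the paper, which proposes a qualitative framework rather than a numeric metric; the theme ratings would have to come from expert judgments of a design.

```python
# The four RBI themes from Jacob et al., used here as a hypothetical checklist.
RBI_THEMES = (
    "naive_physics",
    "body_awareness",
    "environment_awareness",
    "social_awareness",
)

def rbi_profile(ratings):
    """Summarize how strongly a design draws on each RBI theme.

    ratings: dict mapping theme name -> rating in [0, 1].
    Returns the per-theme ratings plus their average as a rough
    overall "reality" score.
    """
    missing = [t for t in RBI_THEMES if t not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    average = sum(ratings[t] for t in RBI_THEMES) / len(RBI_THEMES)
    return {"per_theme": ratings, "reality_score": average}

# Example: a tangible tabletop interface might rate high on naive
# physics but low on social awareness.
profile = rbi_profile({
    "naive_physics": 0.9,
    "body_awareness": 0.7,
    "environment_awareness": 0.6,
    "social_awareness": 0.3,
})
```

Even such a crude profile makes trade-offs visible: a design that scores high on one theme may sacrifice another, which is exactly the kind of gap the framework is meant to uncover.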