It turns out that converting a bzr repository with a single branch to git is quite easy. Here’s how I did it (after installing the latest versions of both bzr and git):
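    # bzr fast-export is provided by the bzr fast-import plugin
    mkdir repo.git
    cd repo.git
    git init
    bzr fast-export ../repo.bzr | git fast-import
    git reset --hard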
Of course, repo.bzr is your old bzr repository here, while repo.git is your newly created git repository. The fast-export/fast-import commands convert the repository’s history from bzr to git. To populate our directory with the bzr repository’s files, however, we also need to perform a hard reset. If you need to convert multiple bzr branches to git, have a look at this post.
I started experimenting with distributed version control systems (DVCS) while working on my MSc thesis (around 2005). I originally maintained Cassowary.net using darcs (a DVCS written in Haskell), and switched to bzr later. Back in the early days of Uiml.net, we used CVS, which was horrible at moving or renaming files, not to mention the inability to work offline and still commit your work.
I preferred darcs and bzr over git at the time because they were a lot easier to use. These days, git seems to have caught up with bzr in that regard. Back in 2006, I also did a few performance tests and found that bzr was a lot slower than git. Things seem to have improved somewhat, but git is still The King of Speed. Oh, and GitHub is great!
I just wanted to share this movie about the Python programming language. Although it’s a bit dated, I feel the movie still provides a nice overview of the benefits of Python, and explains why the language is good for educational purposes. It can be found at python.org.
Quite a few things have happened since I last posted about my research. Here is a (not so short) summary of what happened during my blogging leave of absence.
Ubicomp 2009
Our work on supporting why and why not questions to improve end-user understanding in Ubicomp environments was accepted as a poster at Ubicomp 2009:
Jo Vermeulen, Geert Vanderhulst, Kris Luyten, and Karin Coninx. Answering Why and Why Not Questions in Ubiquitous Computing. To appear in the Ubicomp ’09 Conference Supplement (Poster), Orlando, Florida, US, September 30th – October 3rd, 2009, 3 pages.
Abstract: Users often find it hard to understand and control the behavior of a Ubicomp system. This can lead to loss of user trust, which may hamper the acceptance of these systems. We are extending an existing Ubicomp framework to allow users to pose why and why not questions about its behavior. Initial experiments suggest that these questions are easy to use and could help users in understanding how Ubicomp systems work.
And here I am presenting in the One Minute Madness session:
Karel Robert helped me create a video for the One Minute Madness session that would stand out. Although it might have been a bit too attention-grabbing, I certainly had fun making it and presenting in the Madness session.
Being a student volunteer was lots of fun! I got to meet a lot of interesting people, and still had the opportunity to follow most of the sessions. I also explored the parks together with a few of the other volunteers (Ubicomp 2009 was held in Disney World), and we even played beach volleyball on the last day.
AmI 2009
Jo Vermeulen, Jonathan Slenders, Kris Luyten, and Karin Coninx. To appear in the Proceedings of AmI ’09, the Third European Conference on Ambient Intelligence, Salzburg, Austria, November 18th – 21st, 2009, Springer LNCS, 10 pages.
Abstract: The design ideal of the invisible computer, prevalent in the vision of ambient intelligence (AmI), has led to a number of interaction challenges. The complex nature of AmI environments together with limited feedback and insufficient means to override the system can result in users who feel frustrated and out of control. In this paper, we explore the potential of visualizing the system state to improve user understanding. We use projectors to overlay the environment with a graphical representation that connects sensors and devices with the actions they trigger and the effects those actions produce. We also provided users with a simple voice-controlled command to cancel the last action. A small first-use study suggested that our technique could indeed improve understanding and support users in forming a reliable mental model.
Basically, our technique visualizes the different events that occur in a Ubicomp environment, and shows how these events can lead to the system taking actions on behalf of the user and what effects these actions have. Here is a video of the technique:
The AmI 2009 conference takes place in Salzburg in about three weeks.
Talk at SIGCHI.be
I also submitted a paper to the 2009 Fall Conference on New Communities of SIGCHI.be (the Belgian SIGCHI chapter). The paper was titled Improving Intelligibility and Control in Ubicomp Environments, and motivated the need for intelligibility and control in Ubicomp while also giving a short summary of the Ubicomp 2009 poster and the AmI 2009 paper.
Although I already created a Facade project page a while ago and made the code available through Bazaar, I never announced it here. To make it easier to try the script out, I also uploaded an archive of the code. This version includes a simple Python script (simple_winclient.py) that performs face detection and works on Windows.
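For those who just want a rough idea of what webcam-based face detection looks like in Python, here is a minimal sketch using OpenCV’s Haar cascade classifier. This is only an illustration (it assumes the opencv-python package and an attached webcam), not the actual simple_winclient.py script:

    # Minimal face detection example using OpenCV's Haar cascades.
    # Illustration only -- not the actual Facade client script.
    import cv2

    # Load the frontal face cascade that ships with OpenCV
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    camera = cv2.VideoCapture(0)  # first available webcam
    ok, frame = camera.read()     # grab a single frame
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        print("Detected %d face(s)" % len(faces))
    camera.release()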
I noticed today that Jinwei Gu, a PhD student at Columbia University and teaching assistant for the course COMS W4737 (E6737) Biometrics, included my blog post in a list of links to resources for doing face detection.
Last year, two students improved the presence detection in Facade during their Bachelor’s theses.
Bram Bonné worked on the networking side, allowing users to have their presence detected on multiple computers. When a user moves from one computer to another (or to a mobile device), their messages are automatically routed to their current machine.
Here are a few screenshots from Bram’s thesis:
Bram also created a demo video of his system:
Kristof Bamps worked on supporting a more diverse set of sensors (e.g. the presence of a Bluetooth phone, or keyboard and mouse input). He used a decision tree to combine all of these sensors and determine the user’s presence state based on the collected information.
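To give a rough idea of the approach, here is a small hand-written example of such a decision tree. Kristof’s actual tree was derived from the collected sensor data; the sensor names and thresholds below are made up for illustration:

    # Hypothetical hand-written decision tree over a few presence sensors.
    # The real tree in Kristof's thesis was built from collected sensor data.
    def presence_state(bluetooth_phone_in_range, seconds_since_input, face_detected):
        if face_detected:
            return "present"  # a face in front of the webcam is the strongest signal
        if bluetooth_phone_in_range:
            # the phone is nearby, but nobody is touching the keyboard or mouse
            return "nearby" if seconds_since_input > 300 else "present"
        return "away" if seconds_since_input > 300 else "present"

    print(presence_state(True, 600, False))  # -> nearby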
Two screenshots from Kristof’s thesis:
Kristof’s demo video:
I have unfortunately not yet found the time to merge Kristof’s and Bram’s code into the main Facade branch, but this should not be too big of a problem.
Apparently, there’s a Communications of the ACM group blog now, called blog@CACM. There is also a blogroll that includes the blog of Daniel Lemire, which happens to be one of my favorite research blogs. Although Daniel works in a different subdiscipline of computer science, I enjoy reading his research advice and interesting viewpoints on the process of doing research.
The group blog features a post by Tessa Lau, titled Three Misconceptions About Human-Computer Interaction, which raises a few interesting points. In my opinion, HCI is much more fundamental to creating interactive systems than people usually believe. In this context, I would like to refer to an interview with Patrick Baudisch that I recently read, in which he explains how he got started in HCI:
Doantam: How did you get started working on human-computer interaction?
Patrick: Without knowing it. I was a Ph.D. student in Darmstadt, Germany and worked on user interfaces for information filtering systems. A friend of mine saw my work and said “oh, I did not know you were in HCI, too”.