Tag: uiml

AVI 2008

Although it’s a bit late (almost a month after the fact), I finally found some time to blog about Advanced Visual Interfaces (AVI) 2008 in Naples, where Jan presented our paper about Gummy.

Gummy title slide at AVI 2008

I liked it very much: the conference had good-quality papers but was still reasonably small (around 150 attendees), and of course the weather and the Italian food were great. We arrived on Tuesday, which gave us some time to explore the city and take the ferry to Capri (a great suggestion by Robbie).

Piazza del Plebiscito

Vesuvius

Capri

I am not going to discuss the conference program in detail this time, but will just highlight a couple of interesting papers. Possibly one of the coolest papers was “Exploring Video Streams using Slit-Tear Visualizations” by Anthony Tang (video). Another presentation I enjoyed was “TapTap and MagStick: Improving One-Handed Target Acquisition on Small Touch-screens” by Anne Roudaut (video). It seems there is lots of related work in this area (e.g. Shift, ThumbSpace, etc.). Peter Brandl presented two interesting papers: Bridging the Gap between Real Printouts and Digital Whiteboards and Combining and Measuring the Benefits of Bimanual Pen and Direct-Touch Interaction on Horizontal Interfaces. He was brave enough to do an impressive live demo for the first paper. Oh, and he also covered the conference in a blog post.

The first paper in our session, titled “A Mixed-Fidelity Prototyping Tool for Mobile Devices” by Marco de Sá, introduced a tool to easily design prototypes and evaluate them in real-life situations. The system was well thought out and serves a real need; I can imagine using this kind of tool in a user-centered UI design course. The second paper in our session was “Model-based Layout Generation” by Sebastian Feuerstack, whom I had already met at CADUI 2006. They presented a generic layout model based on constraints, which reminded me a bit of the layout model Yves worked on for our EIS 2007 paper. They used the Cassowary constraint solver, which I also used for my MSc thesis on constraint-based layouts for UIML. Sebastian told me he got the idea from my demo at CADUI 2006. I had forgotten to add a certain constraint (the layout of the UI was thus underconstrained), which by coincidence had no effect on the user interface every time I tested it. Of course, when I showed the demo it did have an effect. This clearly illustrated that constraint solvers are sometimes unpredictable (see Past, present, and future of user interface software tools by Myers et al.). Sebastian’s solution to this problem was to hide the constraints from the designer and generate them automatically from a graphical layout model.

Juan Manuel Gonzalez Calleros — who I met at CADUI 2006, TAMODIA 2006 and a few other occasions — presented a poster and a paper at the workshop on haptics. He took a few pictures while Jan was presenting (thanks again Juan!). Here are Juan and Jan discussing UsiXML vs UIML:

Jan and Juan discussing UsiXML vs UIML

Overall, the comments on our work were positive, although of course one of the biggest problems is still the lack of support for multi-screen interfaces. As Jan is actively hacking on Gummy these days, I don’t think it will take very long for this to be included in the tool.

Gummy UI improvements

I am currently working together with Jan on improving the Gummy tool (website still under construction). We have come a long way since Jan wrote the first version for his Master’s thesis. I figured it might be interesting to share a few screenshots of different phases in the development:

Here’s the first version (June 2007):

Gummy screenshot of June 2007

A version with roughly the same UI but lots of architectural improvements (November 2007):

Gummy around the end of 2007

The current version with an improved UI (March 2008):

Gummy screenshot of March 2008

I think the most significant improvement is the new toolbox. In previous versions, some widgets were hard to tell apart; we added labels to each widget in the toolbox and improved the inline rendering of the widgets, so designers can now distinguish between them much more easily. Notice that these images are still not predefined icons, but real widgets rendered to a bitmap. In the current version, however, we chose to render them at their optimal size and then scale the images down.

Rendering widgets to bitmaps also forced us to migrate to Windows. We started our development on GNU/Linux using Mono but had to switch due to an annoying bug in Mono’s implementation of Control.DrawToBitmap. Future versions should be able to run on Mono again once this bug is fixed, though.

Update: Jan sent me a screenshot of an even older version of Gummy, dating back to December 19, 2006:

Gummy screenshot of December 19, 2006

Full paper on Gummy accepted at AVI 2008

Our hard work before the holidays has paid off: we just heard that our full paper submission for AVI 2008 has been accepted.

Gummy

Jan Meskens, Jo Vermeulen, Kris Luyten and Karin Coninx. Gummy for Multi-Platform User Interface Designs: Shape me, Multiply me, Fix me, Use me. To appear in Proceedings of AVI ’08, the working conference on Advanced visual interfaces, Napoli, Italy, May 28-30, 2008.

In this paper we introduce a multi-platform user interface design approach, and Gummy, a design tool to support that approach. This work originated from Jan Meskens’ Master’s thesis, in which he created a UIML GUI builder. While there are several tools for developing multi-platform user interfaces, these have a number of problems: (1) the resulting user interfaces often lack the aesthetic quality of manually designed interfaces; (2) the tools are not intuitive, since designers have to deal with abstractions and do not directly manipulate the user interface design; and (3) designers cannot accurately predict what the resulting user interface will look like. Our goal was to allow designers to reuse their skills with existing user interface design tools (such as GUI builders) as much as possible, and to maintain a high level of fidelity (unlike sketch-based design tools).

Gummy design process

We also had a short paper/poster about Gummy accepted to the CHI 2008 Work-in-Progress track. In this paper we explain how the tool can be used to involve domain experts in the user interface design process.

Gummy domain expert workspace

Kris Luyten, Jan Meskens, Jo Vermeulen and Karin Coninx. Meta-GUI-Builders: Generating Domain-specific Interface Builders for Multi-Device User Interface Creation. To appear in CHI ’08 extended abstracts on Human factors in computing systems, Florence, Italy, April 5-10, 2008.

We received lots of input on the prototypes and early drafts of the papers, so thanks to everyone at our lab who contributed in one way or another. Additional thanks go to Karel Robert for creating the Gummy logo (have a look at his portfolio).

More information about the papers can be found at my publications page.

Pluggable typedecoders for Uiml.net

I spent some time over the last few weeks adding support for type decoding plugins in Uiml.net. This is mainly useful when you want to interact with applications or web services that have their own types, which need to be converted to something the widget set understands. Suppose, for example, that a web service returns a set of Person objects that need to be represented in a list view. The renderer does not know how to transform a Person into a list view item, so you need to define a custom component that sits between the renderer and the web service and provides this conversion. However, since you don’t know which widget set is used, you have to do this for every possible widget set (e.g. System.Windows.Forms, Gtk#, System.Windows.Forms on the Compact Framework, etc.). Furthermore, it would be better to let the renderer manage this code.

So I created a type decoder plugin system and, while I was at it, also cleaned up the code. As a result, only one general TypeDecoder instance is created in the renderer, whereas we previously had one instance per backend. Each backend now has a container class that hosts its widget set-specific type decoders. This container class gets registered with the TypeDecoder, and is in fact also a plugin.

Instead of going into the implementation details, let’s have a look at an excerpt from the System.Windows.Forms container class (SWFTypeDecoders.cs):

using System;
using Uiml.Rendering;
using Uiml.Rendering.TypeDecoding;

public class SWFTypeDecoders
{
    // Converts a comma-separated "x,y" string into a System.Drawing.Point
    [TypeDecoderMethod]
    public static System.Drawing.Point DecodePoint(string val)
    {
        string[] coords = val.Split(new Char[] {','});
        return new System.Drawing.Point(Int32.Parse(coords[0]), Int32.Parse(coords[1]));
    }
}
The only thing we have to do to define a type decoder method is add the [TypeDecoderMethod] attribute and take the type we want to convert from as the method’s parameter; the return type is the type we convert to. In the above listing, the DecodePoint method converts a string to a System.Drawing.Point. The [TypeDecoderMethod] attribute declares that the corresponding method is a type decoder, so other auxiliary methods will not be registered and won’t pollute the type decoder registry.
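Following the same pattern, the Person scenario from earlier could be handled by a decoder like the hypothetical one below. The Person class and its fields are made up for illustration, and the attribute is redefined locally as a stand-in for the one provided by Uiml.Rendering.TypeDecoding, just to keep the sketch self-contained:

```csharp
using System;

// Stand-in for the [TypeDecoderMethod] attribute shipped with Uiml.net
[AttributeUsage(AttributeTargets.Method)]
public class TypeDecoderMethodAttribute : Attribute { }

// Hypothetical web-service type; not part of Uiml.net
public class Person
{
    public string FirstName;
    public string LastName;
}

public class PersonTypeDecoders
{
    // Converts a Person into the string shown as a list view item
    [TypeDecoderMethod]
    public static string DecodePerson(Person p)
    {
        return p.LastName + ", " + p.FirstName;
    }
}
```

The renderer would discover DecodePerson through its attribute and use it whenever a Person has to become a list view item, independently of the widget set in use.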

To test the implementation, I created a simple class that connects to del.icio.us and fetches all my tags, and I use it to show them in a Gtk# UIML GUI. To convert between the XML document returned by del.icio.us and the user interface, I wrote a custom type decoder and connected it to the renderer. I also have a short screencast showing how it works.

Pluggable typedecoders example

I have extended the Uiml.net type decoder to combine existing type decoding methods when direct conversion is not supported. In this example I created a type decoder that converts a System.Xml.XmlDocument to a Uiml.Constant, but the Gtk.TreeView widget requires a Gtk.TreeModel. The renderer will therefore look for a conversion from a Uiml.Constant to a Gtk.TreeModel and apply the type decoders in sequence. We could have converted directly to Gtk.TreeModel, but that is less flexible since it is widget set-specific. While the interface will remain the same, I will probably change the underlying implementation to a graph with types as vertices and type decoders as edges, to better support these indirect conversions.
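A minimal sketch of that graph idea, with all names hypothetical (the actual Uiml.net interfaces differ): types are vertices, decoders are edges, and a breadth-first search finds the shortest chain of conversions from the value’s type to the requested type.

```csharp
using System;
using System.Collections.Generic;

public class DecoderGraph
{
    // Edges: for each source type, the reachable target types and their decoders
    private readonly Dictionary<Type, List<(Type To, Func<object, object> Decode)>> edges =
        new Dictionary<Type, List<(Type To, Func<object, object> Decode)>>();

    public void AddDecoder(Type from, Type to, Func<object, object> decode)
    {
        if (!edges.ContainsKey(from))
            edges[from] = new List<(Type To, Func<object, object> Decode)>();
        edges[from].Add((to, decode));
    }

    // Breadth-first search over the type graph: finds the shortest chain of
    // decoders from the value's type to the target type and applies it.
    public object Decode(object value, Type target)
    {
        Type start = value.GetType();
        var queue = new Queue<(Type Node, List<Func<object, object>> Path)>();
        var visited = new HashSet<Type> { start };
        queue.Enqueue((start, new List<Func<object, object>>()));

        while (queue.Count > 0)
        {
            var (node, path) = queue.Dequeue();
            if (node == target)
            {
                // Apply the chain of decoders in sequence
                object result = value;
                foreach (var step in path)
                    result = step(result);
                return result;
            }
            if (!edges.ContainsKey(node))
                continue;
            foreach (var (to, decode) in edges[node])
            {
                if (visited.Add(to))
                {
                    var extended = new List<Func<object, object>>(path) { decode };
                    queue.Enqueue((to, extended));
                }
            }
        }
        throw new InvalidOperationException("No conversion path to " + target);
    }
}
```

For the XmlDocument example, such a search would find the chain XmlDocument → Constant → TreeModel and apply the two registered decoders one after the other.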

Last year’s Uiml.net theses

In my previous post I wrote about this year’s Master’s thesis students who will be working on Uiml.net. However, I hadn’t blogged yet about what last year’s students accomplished.

For his Bachelor’s thesis, Ingo Berben chose to improve the standards compliance of our renderer. He eventually concentrated on the behavior section, and more specifically on supporting conditions other than events. The movie below shows how this can be used for form validation: the renderer checks whether every field is filled in, and displays a message accordingly.

[youtube:http://www.youtube.com/watch?v=_klRnaicdVg]

Rob Van Roey worked on support for multimodal user interfaces for his Master’s thesis. He implemented a new X+V backend for Uiml.net, making it the first backend that renders to another XML document. The movie shows a multimodal user interface for controlling a smart home, in which Rob turns off the lights and turns on the alarm after saying the correct password.

[youtube:http://www.youtube.com/watch?v=d94WX16ac2s]

Finally, Jan Meskens created a UIML design tool on top of Uiml.net. The workspace of the tool is generated dynamically according to the loaded vocabulary. The movie shows the basics of working with this tool. Jan joined our ranks after his studies and will continue to improve the UIML designer.

[youtube:http://www.youtube.com/watch?v=p26-BtXxKaM]

Ingo’s code is already available in a separate Uiml.net branch, and will soon be merged into the mainline. Jan is also working on integrating his work. When I find some time, I will probably merge Rob’s code as well.