Identifying patterns or common elements of existing user interfaces can help us to (i) better understand existing interfaces, and (ii) break out of the mold and create a novel (and possibly useful) kind of interface. Below, I present a model which, despite being simplistic, can be used to describe a large class of existing software applications. I then consider implications of and problems with this model, and consider existing or imagined cases which do not fit the model.
The elements of the model are depicted above: the user wishes to interact with some content (or data), and the user interface (UI) sits between the two, acting as the surface of contact between the user and content. In general, the UI consists of one or more viewers and one or more controls.
The viewer renders the content and serves as the channel for output; it is through the viewer that the user experiences the content. Note that the output need not be visual -- it could be any information that is sensed by the user (e.g. audio, haptic, etc.).
The user drives input through the controls. Controls could be physical (e.g. buttons, dials) or virtual (e.g. widgets), and could be activated by touch, voice, etc.
Below, three distinct types of controls are identified.
Type 1. A control that operates on the content has a direct effect on the content. Controls that create, edit, or modify content fall into this category.
Type 2. A control that operates on the viewer changes the way the content is viewed, experienced, or presented. The change is not permanent, and indeed is typically transient. It may be argued that Type 2 controls change the behaviour of the UI (since the viewer is part of the UI), however this is only true in a weak sense.
Type 3. Finally, a control that operates on the controls changes the configuration or behaviour of the controls. Examples that fall into this category are controls that make other controls appear or disappear, or that switch between input modes or mappings. These are strong changes to the UI's behaviour -- they change the UI more deeply than Type 2 controls.
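The three types can be sketched in code. The following is a minimal, hypothetical illustration (none of these class or method names come from a real toolkit): a Type 1 control edits the content, a Type 2 control adjusts only the viewer's transient presentation state, and a Type 3 control reconfigures the controls themselves.

```python
class Content:
    """The data the user ultimately cares about."""
    def __init__(self, text=""):
        self.text = text

class Viewer:
    """Renders the content; its state is presentation, not content."""
    def __init__(self, content):
        self.content = content
        self.zoom = 1.0                  # transient presentation state

    def render(self):
        return f"[zoom {self.zoom:.1f}x] {self.content.text}"

class Controls:
    def __init__(self, content, viewer):
        self.content = content
        self.viewer = viewer
        self.editing_enabled = True      # configuration a Type 3 control can change

    def append_text(self, s):            # Type 1: operates on the content
        if self.editing_enabled:
            self.content.text += s

    def zoom(self, factor):              # Type 2: operates on the viewer
        self.viewer.zoom *= factor

    def toggle_editing(self):            # Type 3: operates on the controls
        self.editing_enabled = not self.editing_enabled

content = Content("hello")
viewer = Viewer(content)
controls = Controls(content, viewer)

controls.append_text(" world")           # content changes (Type 1)
controls.zoom(2.0)                       # only the presentation changes (Type 2)
controls.toggle_editing()                # the controls' own behaviour changes (Type 3)
controls.append_text("!")                # ignored: editing is now disabled
print(viewer.render())                   # prints: [zoom 2.0x] hello world
```

Note that undoing the Type 2 zoom leaves the content untouched, whereas the Type 1 edit persists in the content itself -- which is exactly the distinction drawn above.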
Identify the content, viewer, and different types of controls in the following cases:
Now, pick a few examples from the physical world, and see whether this model applies. Consider both electrical and non-electrical devices, for example:
Since Type 3 controls operate on controls, they could be called meta-controls, which leads us to the notion of a meta-interface: an interface used to control, change, or otherwise interact with another interface.
The notion of a meta-interface is perhaps clearest in a graphical interface builder such as Macromedia Flash, where the content is itself another interface. However, there are also more mundane examples: interfaces for selecting keyboard shortcuts, or even for changing the colours or "skin" of an application, are arguably meta-interfaces.
Another observation to be made is that the distinction between Type 3 controls and other controls is relative. Consider, for example, the text editor vi. When in insertion mode, the 'r' key inserts a letter r. However, hitting Escape switches vi to command mode, where the 'r' key is used to replace a character. One way to look at this situation is that the Escape key changes the behaviour of the interface (i.e. the rest of the keyboard), and is therefore a Type 3 control, or part of the meta-interface.
On the other hand, if we consider the Escape key to be part of the interface, then the behaviour of the interface as a whole never really changes. Under this second interpretation, there appears to be in fact no meta-interface at all.
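The two interpretations can be made concrete with a toy, much-simplified sketch of vi-style modal key dispatch (the class and its behaviour are illustrative, not an accurate model of vi). Read one way, the "ESC" branch is a Type 3 control that remaps every other key; read the other way, the dispatcher as a whole is the interface, and its behaviour never changes.

```python
INSERT, COMMAND = "insert", "command"

class ModalEditor:
    """A toy vi-like editor: keys mean different things in different modes."""
    def __init__(self, text=""):
        self.mode = COMMAND
        self.buffer = list(text)
        self.cursor = 0
        self.pending_replace = False     # command mode: 'r' awaits its argument

    def key(self, k):
        if k == "ESC":                   # remaps all other keys: a Type 3 control?
            self.mode = COMMAND
            self.pending_replace = False
        elif self.mode == INSERT:        # insert mode: any key inserts itself
            self.buffer.insert(self.cursor, k)
            self.cursor += 1
        elif self.pending_replace:       # command mode, after 'r': replace char
            self.buffer[self.cursor] = k
            self.pending_replace = False
        elif k == "i":                   # command mode: enter insert mode
            self.mode = INSERT
        elif k == "r":                   # command mode: 'r' begins a replace
            self.pending_replace = True

    def text(self):
        return "".join(self.buffer)

ed = ModalEditor("abc")
ed.key("i"); ed.key("r")    # insert mode: 'r' inserts the letter r
ed.key("ESC")
ed.key("r"); ed.key("z")    # command mode: 'r' replaces the char under the cursor
print(ed.text())            # prints: rzbc
```

The same 'r' keypress takes two different code paths depending on state set by ESC -- whether we call that "the interface changing" or "one fixed interface" is purely a matter of where we draw the boundary.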
A more extreme example is the Shift key: it arguably changes the behaviour of the alphabetic keys on the keyboard, but the change is so transient that it may seem a stretch to call this a "strong" or "deep" change to the interface. So, is the Shift key a Type 3 control, or is it just part of the other (Type 1) controls on the keyboard?
The distinction between the types of controls thus depends on where we draw the line between interface and meta-interface. However, I feel that the notions of Type 3 controls and meta-interfaces are still useful, despite being relative.
Can we think of cases where the distinction between interface and content is blurred?
Can you think of other examples where this model is bent, or perhaps does not apply at all?
Postscriptum (August 2003)

Since writing down these ideas, some related previous work has come to my attention. The "content-viewer-controls" breakdown described here seems to be the same as, or similar to, Krasner and Pope's Model-View-Controller paradigm [1] and Björk et al.'s breakdown into data (D), visualization (V), and interactivity (I) [2]. The distinction between Type 1 and Type 2 controls is also made by Björk et al. [2] (interactions that modify the data versus interactions that modify the visualization) and by Chi and Riedl [3] (value operations versus view operations). The notion of Type 3 controls, and of meta-interfaces, is touched on by Beaudouin-Lafon and Mackay [4].
[1] G. E. Krasner and S. T. Pope (1988). A Description of the Model-View-Controller User Interface Paradigm in the Smalltalk-80 System. Journal of Object Oriented Programming 1(3):26-49.
[2] Staffan Björk, Lars Erik Holmquist, and Johan Redström (1999). A Framework for Focus+Context Visualization. Proceedings of the IEEE InfoVis '99 Symposium on Information Visualization.
[3] Ed Chi and John T. Riedl (1998). An operator interaction framework for visualization systems. Proceedings of the IEEE Symposium on Information Visualization.
[4] Michel Beaudouin-Lafon and Wendy E. Mackay (2000). Reification, Polymorphism and Reuse: Three Principles for Designing Visual Interfaces. Proceedings of ACM AVI 2000 Advanced Visual Interfaces.
Copyright (C) Michael McGuffin
May 6, 2002
Updated August 2003