Hover Pad

Interacting with Autonomous and Self-Actuated Displays in Space

Handheld displays allow exploring volumetric data sets, yet users have to hold them in their hands at all times (a). Self-actuated and autonomous displays, on the other hand, jettison this requirement (b). Our prototype is a first realization of a self-actuated display that can autonomously move and hold its position (c).

With their mobility, handheld displays such as tablets lend themselves to spatial exploration of information spaces. They can provide a digital window into a much larger three-dimensional information space, which helps users explore and understand complex volumetric data sets. Information spaces are typically either centered around the user's body or anchored to larger displays in the environment. Previous research has assumed that people move the display manually (be it a tablet computer or a sheet of paper with projection) using their hands. While in motion, the display content changes continuously according to its position and orientation in space.
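The following sketch illustrates this window metaphor: a tracked display pose (a position plus two in-plane axes) selects a 2D slice of a volumetric data set, so the shown image changes as the display moves. This is not taken from the Hover Pad implementation; the function name, the normalized coordinates, and the nearest-neighbor sampling are assumptions for illustration.

```python
# Minimal sketch (not the authors' implementation): mapping a tracked
# display pose to a 2D slice of a volumetric data set, so that the shown
# image follows the display's position and orientation in space.
import numpy as np

def sample_slice(volume, position, right, up, size=(256, 256), extent=0.2):
    """Sample a 2D slice of `volume` on the plane spanned by `right` and `up`
    around `position` (all given in normalized volume coordinates, 0..1)."""
    h, w = size
    # Pixel grid in display-plane coordinates, centered on the display.
    u = np.linspace(-extent, extent, w)
    v = np.linspace(-extent, extent, h)
    uu, vv = np.meshgrid(u, v)                      # shape (h, w)
    # World-space sample point for every display pixel.
    pts = (position[None, None, :]
           + uu[..., None] * right[None, None, :]
           + vv[..., None] * up[None, None, :])     # shape (h, w, 3)
    # Nearest-neighbor lookup, clamped to the volume bounds.
    dims = np.array(volume.shape) - 1
    idx = np.clip(np.rint(pts * dims).astype(int), 0, dims)
    return volume[idx[..., 0], idx[..., 1], idx[..., 2]]

# Example: a synthetic 64^3 volume and a display held upright at the center.
volume = np.random.rand(64, 64, 64)
image = sample_slice(volume,
                     position=np.array([0.5, 0.5, 0.5]),
                     right=np.array([1.0, 0.0, 0.0]),
                     up=np.array([0.0, 1.0, 0.0]))
print(image.shape)  # (256, 256): one sampled value per display pixel
```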

Manually controlling the display's position and orientation empowers users to navigate to a desired location in the information space. This approach, however, has its shortcomings: (1) users have to hold the device continuously (occupying at least one hand), which may increase fatigue; (2) exact positioning is difficult due to natural hand tremor; and (3) users have to search for information within the information space, which can be time-consuming and error-prone (i.e., users may miss important aspects of the data while focusing on finding a specific item). In summary, handheld displays are tied to the user's physical input (here: moving the display in space) in order to change their content.

In this work, we set out to free handheld displays from the user's physical input constraints. That is, displays can autonomously move within the information space of a volumetric data set. Unlike previous systems, users do not have to hold the display in their hands; instead, the display can move autonomously and maintain its position and orientation. This autonomous actuation can further be combined with manual input by users, e.g., a user moving the display to a position where it then remains. To investigate this new class of displays, we built Hover Pad – a self-actuated display system mounted to a crane. Our setup allows for controlling five degrees of freedom: moving the display along its x-, y-, and z-axes; and changing both pitch (i.e., rotation about the display's horizontal axis) and yaw (i.e., rotation about the vertical axis). With its self-actuated nature, our setup offers three advantages over handheld displays that users position physically and manually: (1) the display can move autonomously in space without requiring a user's physical effort; (2) it allows for hands-free interaction as users do not have to hold the display continuously – thus reducing fatigue in arms and hands and freeing the hands for parallel tasks; and (3) it offers enhanced visual stability compared to manually holding a display still in a certain position and orientation (i.e., despite natural hand tremor).
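As a rough illustration of how such a 5-DoF setup can hold a pose, the sketch below shows a simple proportional control loop. It is not the actual Hover Pad controller; the Pose fields, the gain value, and the per-cycle velocity interface are assumptions.

```python
# Minimal sketch, assuming a velocity-controlled 5-DoF actuation layer
# (x, y, z, pitch, yaw); not the actual Hover Pad control code.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float = 0.0      # position along the crane's x-axis (m)
    y: float = 0.0      # position along the crane's y-axis (m)
    z: float = 0.0      # position along the crane's z-axis (m)
    pitch: float = 0.0  # rotation about the display's horizontal axis (rad)
    yaw: float = 0.0    # rotation about the vertical axis (rad)

AXES = ("x", "y", "z", "pitch", "yaw")

def hold_pose(current: Pose, target: Pose, gain: float = 0.8) -> Pose:
    """Return per-axis velocities that drive the display toward `target`.
    Called once per control cycle; a real controller would add velocity
    limits, deadbands, and safety checks."""
    return Pose(*(gain * (getattr(target, a) - getattr(current, a))
                  for a in AXES))

# Example control step: the display has drifted slightly, so the loop
# commands small corrective velocities on the affected axes.
current = Pose(x=0.02, y=-0.01, z=1.00, pitch=0.05, yaw=0.0)
target = Pose(z=1.00)
print(hold_pose(current, target))
```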

In this paper, we investigate designing interactions with such displays based on our prototype. We focus on how their movement can be controlled, either autonomously by the system or by users. We present interaction techniques that, we believe, benefit from autonomous and self-actuated displays. In summary, our work offers two main contributions: (1) a set of interaction techniques for controlling the display's position – either in a semi-autonomous fashion, where the display moves and orients itself on its own following a user's request, or in a manual fashion, where users explicitly control the display's motion; and (2) a set of example applications built with our Hover Pad prototype. These applications make use of the presented interaction techniques to demonstrate their utility in real-world scenarios. Further, we present a prototyping toolkit that allows for rapid prototyping of such displays, including a detailed description of how they can be constructed. This toolkit enables developers to make use of the presented control mechanisms in a simplified way. Note that our main contribution lies in the engineering domain: enabling the exploration of autonomous and self-actuated displays. We do not present a user study of exploring volumetric data using tablet computers, as this has been explored already.
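To give a flavor of what such a toolkit-level control layer might look like, the short, self-contained sketch below exposes the two control modes described above (semi-autonomous moves and manual adjustments). All names (DisplayController, goto, nudge) are illustrative and not the actual toolkit API.

```python
# Hypothetical toolkit facade exposing semi-autonomous and manual control;
# poses are tuples of (x, y, z, pitch, yaw).
class DisplayController:
    def __init__(self):
        # The actuation layer keeps the display at this target pose
        # until a new request arrives.
        self.target = (0.0, 0.0, 1.0, 0.0, 0.0)

    def goto(self, pose):
        """Semi-autonomous: the display travels to `pose` on its own,
        e.g., to the next point of interest in the data set."""
        self.target = tuple(pose)

    def nudge(self, delta):
        """Manual: the user explicitly shifts the display; it then holds
        the resulting pose without being held by hand."""
        self.target = tuple(t + d for t, d in zip(self.target, delta))

ctrl = DisplayController()
ctrl.goto((0.3, 0.2, 1.2, 0.0, 0.0))    # system-driven move to a target
ctrl.nudge((0.05, 0.0, 0.0, 0.0, 0.0))  # user-driven fine adjustment
print(ctrl.target)
```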

Publications

Hover Pad: interacting with autonomous and self-actuated displays in space

Seifert, J., Boring, S., Winkler, C., Schaub, F., Schwab, F., Herrdum, S., Maier, F., Mayer, D., and Rukzio, E.

In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST 2014), Honolulu, HI, USA, October 5–8. ACM Press, 9 pages.

Videos