WatchSense

On- and Above-Skin Input Sensing through a Wearable Depth Sensor

The technical capabilities of WatchSense enable more expressive interactions, such as purely mid-air input (top right), purely touch input (bottom left), and combinations of both.

We contribute a novel sensing approach to support on- and above-skin finger input for interaction on the move. WatchSense uses a depth sensor embedded in a wearable device to expand the input space to neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and occlusions. It can accurately detect fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch input by simultaneously supporting both. We demonstrate feasibility with a compact, wearable prototype attached to a user's forearm (simulating an integrated depth sensor). Our prototype – which runs in real-time on consumer mobile devices – enables a 3D input space on the back of the hand. We evaluated the accuracy and robustness of the approach in a user study. We also show how WatchSense increases the expressiveness of input by interweaving mid-air and multitouch for several interactive applications.

The basic assumption in our approach is that smartwatches will embed a depth sensor on their side, overseeing the back of the hand (BOH) and the space above it. We contribute to an emerging line of research exploring richer use of finger input sensed through a wearable device. In particular, we look at smartwatches, which have previously been supplemented by mid-air finger input. Recent papers propose using the palm or forearm for gesture or touch input, which enlarges the space in which gestures can be comfortably performed. However, previous work has focused on either touch or mid-air interaction. We address the combination of these two modalities, with the aim of increasing the efficiency and expressiveness of input. Recent advances in depth sensor miniaturization have led to the exploration of combined touch and mid-air interaction above smartphones. To our knowledge, no work has explored both touch and mid-air input on smaller, wearable form factors such as smartwatches.

Our second contribution is to address the technical challenges that arise from sensing fingers that touch the skin and/or hover above it near a smartwatch with an embedded depth sensor. Recent advances in real-time mid-air finger tracking cannot be employed directly, due to the oblique camera view and the resulting occlusions that are common with body-worn cameras. To address these challenges, we propose a novel algorithm that combines machine learning, image processing, and robust estimators.
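As a rough illustration of how these three ingredients can fit together (a sketch, not the paper's implementation), the snippet below segments the depth image by a depth band (image processing), labels foreground pixels with a learned per-pixel classifier such as a random forest (machine learning), and localizes each fingertip with per-class medians (a simple robust estimator). All function names, features, and thresholds here are illustrative assumptions.

```python
import numpy as np

def segment_hand(depth, near_mm=150.0, far_mm=450.0):
    """Image processing: keep pixels within a plausible depth band (mm)."""
    return (depth > near_mm) & (depth < far_mm)

def classify_pixels(depth, mask, model):
    """Machine learning: a per-pixel classifier (e.g., a random forest with a
    scikit-learn-style predict()) labels each foreground pixel as one of five
    fingertip classes (1-5) or background (0). Real features would be richer
    than the raw depth value used here."""
    features = depth[mask].reshape(-1, 1)
    labels = np.zeros(depth.shape, dtype=int)
    labels[mask] = model.predict(features)
    return labels

def robust_fingertip(depth, labels, finger_id, min_votes=10):
    """Robust estimation: the median over a fingertip's candidate pixels
    tolerates misclassified outliers that would skew a mean."""
    ys, xs = np.nonzero(labels == finger_id)
    if len(xs) < min_votes:
        return None  # too few votes: finger occluded or out of view
    u, v = np.median(xs), np.median(ys)
    z = np.median(depth[ys, xs])
    return np.array([u, v, z])  # image position (px) plus depth (mm)
```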

Our novel method jointly estimates 3D fingertip positions, detects finger identities, and detects fingertips touching the BOH. Unlike previous work, our approach also detects finger identities, which can further increase input expressiveness. Our prototype, which mimics the viewpoint of future embedded depth sensors, detects fingertips, their identities, and touch events in real time (> 250 Hz on a laptop and 40 Hz on a smartphone). Additionally, technical evaluations show that our approach is accurate and robust for users with varying hand dimensions.
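A common depth-camera strategy for touch detection, sketched below as an assumption rather than the paper's exact method, is to compare a fingertip's depth against a robust estimate of the skin surface directly beneath it: within a few millimeters counts as touch, anything farther as hover. The window size and threshold are invented for illustration.

```python
import numpy as np

def touch_state(fingertip, depth, surface_mask, window_px=15, threshold_mm=8.0):
    """Classify a tracked fingertip as 'touch' or 'hover' against the BOH.
    The surface depth near the fingertip is estimated with a median, which
    is robust to depth-sensor noise; both parameters are assumptions."""
    u, v, z = fingertip
    ys, xs = np.nonzero(surface_mask)  # pixels belonging to the BOH surface
    near = (np.abs(xs - u) < window_px) & (np.abs(ys - v) < window_px)
    if near.sum() < 5:
        return "hover"  # fingertip is not over the hand surface at all
    surface_z = np.median(depth[ys[near], xs[near]])
    return "touch" if abs(z - surface_z) < threshold_mm else "hover"
```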

Our approach enables simultaneous touch and mid-air input using multiple fingers on and above the BOH. A unified sensing approach that supports both touch and mid-air input, together with finger identity detection, is not only beneficial for users but also opens up more interaction design possibilities. We show through several applications that this novel input space (or volume) can be used for interaction on the move (e.g., with the smartwatch itself or with other nearby devices), complementing solutions based on touch or mid-air input alone. In summary, our paper contributes by: (1) exploring the interaction space of on- and above-skin input near wearable devices, particularly smartwatches; (2) addressing the technical challenges that make camera-based sensing of finger positions, their identities, and on-skin touch a hard problem; and (3) demonstrating the feasibility of our approach using a prototype, technical evaluations, and interactive applications.
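To make the combined vocabulary concrete, here is one hypothetical application-level mapping in which on-skin touch, hover height, and finger identity each carry meaning. The event structure and callbacks are invented for illustration and are not part of the WatchSense system.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FingerEvent:
    finger: str                           # identity, e.g. "index" or "middle"
    position: Tuple[float, float, float]  # 3D position on/above the BOH
    touching: bool                        # True: on-skin touch; False: mid-air

# Hypothetical application callbacks, stubbed out for illustration.
def select_at(xy):       print(f"select at {xy}")
def adjust_parameter(h): print(f"adjust parameter from hover height {h}")
def switch_tool():       print("switch tool")

def dispatch(event: FingerEvent):
    """One possible mapping: index touch selects, index hover adjusts a
    continuous parameter via height above the skin, and middle-finger touch
    switches tools. This interleaves touch, mid-air, and finger identity."""
    if event.touching and event.finger == "index":
        select_at(event.position[:2])
    elif not event.touching and event.finger == "index":
        adjust_parameter(event.position[2])
    elif event.touching and event.finger == "middle":
        switch_tool()

dispatch(FingerEvent("index", (120.0, 80.0, 35.0), touching=False))
```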

Publications

WatchSense: On- and Above-Skin Input Sensing through a Wearable Depth Sensor

Sridhar, S., Markussen, A., Oulasvirta, A., Theobalt, C., and Boring, S.

To appear in: ACM International Conference on Human Factors in Computing Systems (CHI 2017), Denver, CO, USA, May 6-11, 2017. ACM Press, 9 pages.

Videos