Pervasive Gesture Recognition Using Ambient Light

Funded by the National Science Foundation

Overview

As computing devices become smaller, smarter, and more ubiquitous, computing is increasingly embedded into our environment in various forms, such as intelligent thermostats, smart appliances, remotely controllable household equipment, and weather-based automated lawn irrigation systems. Consequently, we need new ways to seamlessly and effectively communicate and interact with such ubiquitous and always-available computing devices. A natural choice for such communication and interaction is human gestures, because gestures are an integral part of how humans communicate and interact with each other in their daily lives. This project aims at using ambient modalities, such as ambient light and RF signals, along with cheap commercial off-the-shelf sensors to develop gesture and activity recognition systems in particular, and human sensing systems in general. In developing these human-sensing systems, the project has two primary objectives: (1) environment independence, i.e., making the systems agnostic to the characteristics of the environment, such as lighting conditions and the placement of furniture, and (2) user independence, i.e., making the systems agnostic to the number of users in an environment and to their routine activities.

Publications

2019


ACM IMWUT '19 & UbiComp '19

Enhancing Indoor Inertial Odometry with WiFi
Raghav H. Venkatnarayan, Muhammad Shahzad
Download: [paper]

2018


ACM IMWUT '18 & UbiComp '18

Augmenting User Identification with WiFi Based Gesture Recognition
Shaohu Zhang, Muhammad Shahzad
Download: [paper]


MobiSys '18

Multi-User Activity Recognition Using WiFi
Raghav H. Venkatnarayan, Griffin Page, Muhammad Shahzad
Download: [paper]


ACM S3 '18

Recognizing Gestures With Ambient Light
Raghav H. Venkatnarayan, Muhammad Shahzad
Download: [paper]


ACM IMWUT '18 & UbiComp '18

Gesture Recognition Using Ambient Light
Raghav H. Venkatnarayan, Muhammad Shahzad
Download: [paper]

2017


MobiSys '17

Position and Orientation Agnostic Gesture Recognition Using WiFi
Aditya Virmani, Muhammad Shahzad
Acceptance rate: 18.1%
Download: [paper]

Activities

Gesture recognition using ambient light: There is growing interest in the research community in developing techniques for humans to communicate with the computing that is becoming embedded in our environments. Researchers are exploring various modalities, such as radio-frequency signals, to develop gesture recognition systems. We explore another modality, namely ambient light, and develop LiGest, an ambient-light-based gesture recognition system. The idea behind LiGest is that when a user performs different gestures, the user's shadows move in distinct patterns. LiGest captures these patterns using a grid of floor-based light sensors and then builds training models to recognize unknown shadow samples. We design a prototype of LiGest and evaluate it across multiple users, positions, orientations, and lighting conditions.
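
The Python sketch below illustrates the shadow-pattern matching idea under simplifying assumptions: each gesture sample is a T x N matrix of readings from a fixed grid of N light sensors, per-sensor baselines are removed to discount the ambient light level, and an unknown sample is labeled by nearest-neighbor matching under dynamic time warping. The function names and the matching method are illustrative and simpler than the models described in the paper.

    import numpy as np

    def normalize(sample: np.ndarray) -> np.ndarray:
        """Remove each sensor's baseline light level so that only
        shadow-induced changes remain (sample: T timesteps x N sensors)."""
        centered = sample - sample.mean(axis=0, keepdims=True)
        scale = np.abs(centered).max()
        return centered / scale if scale > 0 else centered

    def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
        """Dynamic time warping distance between two normalized traces,
        so gestures performed at different speeds can still be matched."""
        Ta, Tb = len(a), len(b)
        D = np.full((Ta + 1, Tb + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, Ta + 1):
            for j in range(1, Tb + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return float(D[Ta, Tb])

    def recognize(sample, training):
        """Label an unknown shadow sample with the gesture whose training
        samples it matches most closely (training: gesture -> list of samples)."""
        sample = normalize(np.asarray(sample, dtype=float))
        best_label, best_dist = None, np.inf
        for label, templates in training.items():
            for template in templates:
                d = dtw_distance(sample, normalize(np.asarray(template, dtype=float)))
                if d < best_dist:
                    best_label, best_dist = label, d
        return best_label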

Position and Orientation Agnostic Gesture Recognition: WiFi-based gesture recognition systems have recently proliferated due to the ubiquitous availability of WiFi in almost every modern building. The key limitation of existing WiFi-based gesture recognition systems is that they require the user to be in the same configuration (i.e., at the same position and in the same orientation) when performing gestures at runtime as when providing training samples, which significantly restricts their practical usability. We have developed a WiFi-based gesture recognition system, namely WiAG, which recognizes the user's gestures irrespective of their configuration. The key idea behind WiAG is that it first requests the user to provide training samples for all gestures in only one configuration and then automatically generates virtual samples for all gestures in all possible configurations by applying our novel translation function to the training samples. Next, for each configuration, it generates a classification model from the virtual samples corresponding to that configuration. To recognize gestures at runtime, as soon as the user performs a gesture, WiAG first automatically estimates the configuration of the user and then evaluates the gesture against the classification model corresponding to that estimated configuration.
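
The following Python sketch outlines this pipeline only; it is not the paper's implementation. The caller-supplied translate() stands in for WiAG's translation function (not reproduced here), estimate_configuration() stands in for its configuration-estimation step, and a nearest-neighbor comparison replaces the paper's classification models.

    import numpy as np

    def build_virtual_models(training, src_config, all_configs, translate):
        """Generate one virtual training set per target configuration, using a
        caller-supplied translate(sample, src_config, dst_config) function that
        stands in for WiAG's translation function."""
        models = {}
        for cfg in all_configs:
            models[cfg] = {
                gesture: [translate(s, src_config, cfg) for s in samples]
                for gesture, samples in training.items()
            }
        return models

    def recognize(sample, models, estimate_configuration):
        """At runtime: estimate the user's configuration, then evaluate the
        gesture sample against the virtual model for that configuration only."""
        cfg = estimate_configuration(sample)
        best_gesture, best_dist = None, np.inf
        for gesture, virtual_samples in models[cfg].items():
            for v in virtual_samples:
                d = np.linalg.norm(np.asarray(sample) - np.asarray(v))
                if d < best_dist:
                    best_gesture, best_dist = gesture, d
        return best_gesture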

Multi-User Gesture Recognition Using WiFi: WiFi-based gesture recognition has received significant attention over the past few years. However, the key limitation of prior WiFi-based gesture recognition systems is that they cannot recognize the gestures of multiple users performing them simultaneously. We address this limitation and propose WiMU, a WiFi-based Multi-User gesture recognition system. The key idea behind WiMU is that when it detects that users have performed gestures simultaneously, it first automatically determines the number of simultaneously performed gestures and then, using training samples collected from a single user, generates virtual samples for various plausible combinations of gestures. The key property of these virtual samples is that, for any given combination of gestures, they are identical to the real samples that would result from real users performing that combination. WiMU then compares the detected sample against these virtual samples and recognizes the simultaneously performed gestures.
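
A minimal Python sketch of the combination-search step is shown below, under assumptions that differ from the paper: the number of simultaneous gestures is supplied by a detector not shown here, combine() merges equal-length single-user samples by element-wise addition purely for illustration, and matching is nearest-neighbor.

    from itertools import combinations_with_replacement
    import numpy as np

    def combine(samples):
        """Illustrative stand-in for WiMU's virtual-sample generation: merge
        equal-length single-user samples into one virtual multi-user sample
        (element-wise addition is an assumption made only for this sketch)."""
        return np.sum([np.asarray(s, dtype=float) for s in samples], axis=0)

    def recognize_simultaneous(detected, training, num_gestures):
        """Enumerate plausible gesture combinations of the detected size, build
        a virtual sample for each, and return the best-matching combination."""
        detected = np.asarray(detected, dtype=float)
        best_combo, best_dist = None, np.inf
        for combo in combinations_with_replacement(sorted(training), num_gestures):
            # One representative training sample per gesture, for brevity.
            virtual = combine([training[g][0] for g in combo])
            d = np.linalg.norm(detected - virtual)
            if d < best_dist:
                best_combo, best_dist = combo, d
        return best_combo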

Enhancing Indoor Inertial Odometry with WiFi: Accurately measuring the distance traversed by a subject, commonly referred to as odometry, in indoor environments is of fundamental importance in many applications such as augmented and virtual reality tracking, indoor navigation, and robot route guidance. While in theory odometry can be performed using a simple accelerometer, in practice it is well known that distances measured using accelerometers suffer from large drift errors. To address this limitation, we propose WIO, a WiFi-assisted Inertial Odometry technique that uses WiFi signals as an auxiliary source of information to correct these drift errors. The key idea behind WIO is that, among the multiple reflections of a transmitted WiFi signal arriving at the receiver, it first isolates one reflection and then measures the change in the length of that reflection's path as the subject moves. From the extent by which that path length changes, together with the direction of the subject's motion relative to the path, WIO estimates the distance traversed by the subject using WiFi signals. WIO then uses this distance estimate to correct the drift errors.
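
A minimal Python sketch of the correction step follows. It assumes the WiFi-derived distance over a short window is supplied by a path-length tracker not shown here, and that the fusion is a simple complementary filter with a hypothetical weight; the paper's correction scheme is more involved.

    import numpy as np

    def inertial_distance(accel, dt):
        """Double-integrate forward acceleration over a window; small bias
        errors accumulate quadratically, which is the drift that WIO corrects."""
        velocity = np.cumsum(np.asarray(accel, dtype=float)) * dt
        return float(np.sum(velocity) * dt)

    def corrected_distance(accel, dt, wifi_estimate, alpha=0.8):
        """Blend the drift-prone inertial estimate with the WiFi-derived
        estimate (alpha is a hypothetical weight favoring the WiFi estimate)."""
        return alpha * wifi_estimate + (1.0 - alpha) * inertial_distance(accel, dt)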

Sponsors

Organization: National Science Foundation
Program: Networking Technology and Systems (NeTS)
Sponsor's website for this project