CMU-HCII-06-100
Human-Computer Interaction Institute
School of Computer Science, Carnegie Mellon University



Constructing and Evaluating Sensor-Based
Statistical Models of Human Interruptibility

James Anthony Fogarty

January 2006

Ph.D. Thesis

CMU-HCII-06-100.pdf


Keywords: Human interruptibility, interruptions, office workers, sensor-based statistical models, Wizard of Oz sensor development, Subtle, AmIBusy Prompter, Whistle


While people can typically make a rapid assessment of another person's interruptibility, current systems generally have no way to consider whether an interruption is appropriate. Systems therefore tend to interrupt at inappropriate times or unduly demand attention. Sensor-based statistical models of human interruptibility are one approach to addressing this problem. In a series of studies, we examine the feasibility and robustness of sensor-based statistical models of human interruptibility, creating models that perform better than human observers. We then present a tool to enable non-expert development of applications that use sensor-based statistical models of human situations.

Our first study collects audio and video recordings in the normal work environments of several office workers. We measure their interruptibility by collecting self-reports via experience sampling. We then use a Wizard of Oz method to examine the recordings and simulate many potential sensors. By building statistical models from these simulated sensors, we are able to evaluate potential sensors without actually implementing them. In our second study, human observers view the recordings and estimate the interruptibility of the office workers. Statistical models based on our simulated sensors perform better than these human observers. Our third study examines the robustness of this result by implementing and deploying real sensors with a more diverse set of office workers. While different sensors are more predictive for different types of office workers, even a general model performs better than the human observers. Because these first three studies are dominated by social engagement, our fourth study explicitly examines task engagement. We show that low-level programming environment events can be used to model when a programmer will choose to defer an interruption.
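To make the modeling step concrete, here is a minimal Python sketch of the idea, assuming scikit-learn and a handful of invented binary sensors and labels; the sensor names, the BernoulliNB classifier, and the toy data are illustrative assumptions, not the thesis's actual pipeline.

    # Hypothetical sketch: simulated sensor features paired with
    # experience-sampling self-reports of interruptibility.
    import numpy as np
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.model_selection import cross_val_score

    # Each row is one experience-sampling probe; each column is one simulated
    # binary sensor coded by the Wizard of Oz process.
    # Columns: talk detected, phone off-hook, door open (1 = present).
    X = np.array([
        [1, 0, 1],
        [1, 1, 0],
        [0, 0, 1],
        [0, 0, 0],
        [1, 1, 1],
        [0, 1, 0],
    ])
    # Self-reported interruptibility collapsed to a binary label:
    # 1 = "highly non-interruptible", 0 = otherwise.
    y = np.array([1, 1, 0, 0, 1, 0])

    model = BernoulliNB()
    scores = cross_val_score(model, X, y, cv=3)
    print("cross-validated accuracy:", scores.mean())

A real analysis would use many more probes, a richer sensor vocabulary, and feature selection, but the structure of the data and of the evaluation is the same.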

We then develop Subtle, a tool to enable further research into how human-computer interaction can best benefit from sensor-based statistical models of human situations. With an extensible sensing library, fully automated iterative feature generation, and support for model deployment, Subtle enables non-expert development of applications that use sensor-based statistical models of human situations. Subtle allows human-computer interaction researchers to focus on compelling applications and datasets, rather than on the difficulties of collecting appropriate sensor data and learning statistical models. Finally, we present a summary of contributions and plans for future work.
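As a rough illustration of the kind of workflow such a tool supports, the sketch below (which does not use Subtle's actual API; the SensorFeed class, the feature names, and the stand-in model are hypothetical) shows an application registering a sensor feed, having summary features generated automatically over a recent window, and querying a deployed model for an interruptibility estimate.

    # Hypothetical illustration, not Subtle's real API.
    import time
    from collections import deque

    class SensorFeed:
        """Timestamped readings from one sensor (e.g., microphone level)."""
        def __init__(self, name, window_seconds=60):
            self.name = name
            self.window_seconds = window_seconds
            self.readings = deque()

        def report(self, value, timestamp=None):
            # Record a reading and drop anything older than the window.
            t = timestamp if timestamp is not None else time.time()
            self.readings.append((t, value))
            cutoff = t - self.window_seconds
            while self.readings and self.readings[0][0] < cutoff:
                self.readings.popleft()

        def features(self):
            # Automatically generated summary features over the recent window.
            values = [v for _, v in self.readings] or [0.0]
            return {
                self.name + "_mean": sum(values) / len(values),
                self.name + "_max": max(values),
            }

    def toy_model(features):
        # Stand-in for a learned model deployed by the toolkit.
        return 0.2 if features.get("mic_mean", 0.0) > 0.5 else 0.8

    def estimate_interruptibility(feeds, model):
        # Combine features from all feeds and ask the model for a score.
        feature_vector = {}
        for feed in feeds:
            feature_vector.update(feed.features())
        return model(feature_vector)

    mic = SensorFeed("mic")
    mic.report(0.9)
    mic.report(0.7)
    print(estimate_interruptibility([mic], toy_model))

The point of a toolkit like Subtle is to absorb the sensing, windowing, and learning steps that this toy code only gestures at, so that an application developer mostly writes the last few lines.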

167 pages

