Computer Science Department
School of Computer Science, Carnegie Mellon University


The RADAR Test Methodology:
Evaluating a Multi-Task Machine Learning System with Humans in the Loop

Aaron Steinfeld, Rachael Bennett, Kyle Cunningham, Matt Lahut,
Pablo-Alejandro Quinones, Django Wexler, Daniel P. Siewiorek

Paul Cohen*, Julie Fitzgerald**, Othar Hansson***,
Jordan Hayes***, Mike Pool+, Mark Drummond++

May 2006

Also appears as Human-Computer Interaction Institute
Technical Report CMU-HCII-06-102.


Keywords: Machine Learning, human-computer interaction, artificial intelligence, multi-agent systems, evaluation, human subject experiments

The RADAR project involves a collection of machine learning research thrusts that are integrated into a cognitive personal assistant. Progress is examined with a test developed to measure the impact of learning when the system is used by a human user. Three conditions (conventional tools, Radar without learning, and Radar with learning) are evaluated in a large-scale, between-subjects study. This paper describes the activities of the RADAR Test with a focus on test design, test harness development, experiment execution, and analysis. Results for the 1.1 version of Radar illustrate the measurement and diagnostic capability of the test. General lessons on such efforts are also discussed.

24 pages

*University of Southern California
**JSF Consulting
+Formerly with IET, Inc.
++SRI International
