CMU-CS-06-125
Computer Science Department
School of Computer Science, Carnegie Mellon University




The RADAR Test Methodology:
Evaluating a Multi-Task Machine Learning System with Humans in the Loop

Aaron Steinfeld, Rachael Bennett, Kyle Cunningham, Matt Lahut,
Pablo-Alejandro Quinones, Django Wexler, Daniel P. Siewiorek

Paul Cohen*, Julie Fitzgerald**, Othar Hansson***,
Jordan Hayes***, Mike Pool+, Mark Drummond++

May 2006

Also appears as Human-Computer Interaction Institute
Technical Report CMU-HCII-06-102.



Keywords: machine learning, human-computer interaction, artificial intelligence, multi-agent systems, evaluation, human subject experiments

The RADAR project involves a collection of machine learning research thrusts that are integrated into a cognitive personal assistant. Progress is examined with a test developed to measure the impact of learning when the system is used by a human user. Three conditions (conventional tools, Radar without learning, and Radar with learning) are evaluated in a large-scale, between-subjects study. This paper describes the activities of the RADAR Test, with a focus on test design, test harness development, experiment execution, and analysis. Results for the 1.1 version of Radar illustrate the measurement and diagnostic capabilities of the test. General lessons from such efforts are also discussed.

24 pages

*University of Southern California
**JSF Consulting
***Thinkbank
+Formerly with IET, Inc.
++SRI International

