CMU-HCII-06-102
Human-Computer Interaction Institute
School of Computer Science, Carnegie Mellon University
The RADAR Test Methodology:
Evaluating a Multi-Task Machine Learning System with Humans in the Loop
Aaron Steinfeld, Rachael Bennett, Kyle Cunningham, Matt Lahut
Pablo-Alejandro Quinones, Django Wexler, Daniel P. Siewiorek
Paul Cohen*, Julie Fitzgerald**, Othar Hansson***, Jordan Hayes***,
Mike Pool+, Mark Drummond++
May 2006
Also appears as Computer Science Department
Technical Report CMU-CS-06-125
Keywords: Machine learning, human-computer interaction, artificial
intelligence, multi-agent systems, evaluation, human subject experiments
The RADAR project involves a collection of machine learning research thrusts
that are integrated into a cognitive personal assistant. Progress is
examined with a test developed to measure the impact of learning when used
by a human user. Three conditions (conventional tools, Radar without
learning, and Radar with learning) are evaluated in a alarge-sclae,
between-subjects study. This paper describes the activities of the RADAR
Test with a focus on test design, test harness development, experiment
execution, and analysis. Results for the 1.1 version of Radar illustrate
the measurement and diagnostic capability of the test. General lessons
on such efforts are also discussed.
24 pages
*University of Southern California
**JSF Consulting
***Thinkbank
+Formerly with IET, Inc.
++SRI International