Computer Science Department
School of Computer Science, Carnegie Mellon University
Learning Factors Analysis Learns to Read
James M. Leszczenski
Learning Factors Analysis (LFA) has been proposed as a generic framework for evaluating and comparing cognitive models of learning [Cen et al., 2006]. By performing a heuristic search over a space of cognitive models, a researcher may evaluate different representations of a set of skills. This search, however, is computationally intractable for large datasets. We introduce a scalable application of this framework in the context of transfer in reading and demonstrate it on Reading Tutor data. Taking a word-level model of learning as a baseline, we apply LFA to determine whether a representation that permits transfer at the level of word roots better reflects actual student learning data. In addition, we demonstrate an approximation to LFA that allows it to scale tractably to large datasets. We find that a word root-based model of learning yields an improved model fit, suggesting that students make use of this information in their representation of words. We present evidence, based on both model fit and learning-rate relationships, that low-proficiency students tend to exhibit a lesser degree of transfer through the word root representation than higher-proficiency students. Additionally, we provide insight into developing metrics designed to classify in advance whether particular operations within LFA will exhibit transfer.
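To make the search described above concrete, the following is a minimal, self-contained sketch of the LFA idea in the reading setting: each candidate cognitive model assigns words to skills, each candidate is scored by fitting a simple learning-curve model (here a plain logistic regression over skill indicators and prior-opportunity counts, scored by BIC), and a greedy search merges words that share a root whenever the score improves. The specific function names, the pure-Python logistic fit, the greedy (rather than best-first) search, and the toy data are illustrative assumptions, not the paper's actual implementation.

```python
import math
import random

def sigmoid(z):
    z = max(min(z, 30.0), -30.0)
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=300):
    """Plain gradient-descent logistic regression: a toy stand-in for the
    statistical fit LFA performs for each candidate cognitive model."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(epochs):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            err = sigmoid(sum(a * b for a, b in zip(w, xi))) - yi
            for j in range(d):
                grad[j] += err * xi[j]
        w = [wj - lr * gj / n for wj, gj in zip(w, grad)]
    return w

def bic(X, y, w):
    """Bayesian Information Criterion; lower is a better model fit."""
    ll = 0.0
    for xi, yi in zip(X, y):
        p = sigmoid(sum(a * b for a, b in zip(w, xi)))
        p = min(max(p, 1e-9), 1.0 - 1e-9)
        ll += yi * math.log(p) + (1 - yi) * math.log(1.0 - p)
    return -2.0 * ll + len(w) * math.log(len(X))

def encode(trials, skill_of):
    """One row per trial: a skill indicator plus a scaled count of prior
    opportunities on that skill (the learning-curve term)."""
    skills = sorted({skill_of[w] for w, _ in trials})
    idx = {s: i for i, s in enumerate(skills)}
    X, y, seen = [], [], {}
    for word, correct in trials:
        s = skill_of[word]
        row = [0.0] * (len(skills) + 1)
        row[idx[s]] = 1.0
        row[-1] = seen.get(s, 0) / 10.0
        X.append(row)
        y.append(correct)
        seen[s] = seen.get(s, 0) + 1
    return X, y

def lfa_merge_search(trials, word_model, root_of):
    """Greedy sketch of the LFA search: starting from the word-level model,
    merge all words sharing a root into one skill whenever BIC improves."""
    best = dict(word_model)
    X, y = encode(trials, best)
    best_bic = bic(X, y, fit_logistic(X, y))
    for root in sorted(set(root_of.values())):
        cand = {w: (root if root_of[w] == root else s) for w, s in best.items()}
        Xc, yc = encode(trials, cand)
        b = bic(Xc, yc, fit_logistic(Xc, yc))
        if b < best_bic:
            best, best_bic = cand, b
    return best, best_bic

# Toy data: correctness improves with practice on the shared *root*,
# so a root-based skill model should fit at least as well.
random.seed(0)
words = ["jump", "jumps", "jumped", "walk", "walks"]
root_of = {w: ("jump" if w.startswith("jump") else "walk") for w in words}
seen, trials = {}, []
for _ in range(200):
    w = random.choice(words)
    r = root_of[w]
    p = sigmoid(-1.0 + 0.4 * seen.get(r, 0))
    trials.append((w, 1 if random.random() < p else 0))
    seen[r] = seen.get(r, 0) + 1

word_model = {w: w for w in words}          # baseline: one skill per word
model, score = lfa_merge_search(trials, word_model, root_of)
```

By construction the search only accepts merges that lower BIC, so `score` is never worse than the BIC of the word-level baseline; on data with root-level transfer, the merged (root-based) representation is the kind of model the paper finds to fit better.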