CMU-CS-97-203
Computer Science Department
School of Computer Science, Carnegie Mellon University
Multitask Learning 
Rich Caruana 
September 1997  
Ph.D. Thesis 
CMU-CS-97-203.ps
CMU-CS-97-203.ps.gz
CMU-CS-97-203.pdf
Keywords: Machine learning, neural networks, k-nearest neighbor,
multitask learning, inductive bias, medical decision making, pneumonia,
ALVINN, autonomous vehicle navigation, pattern recognition, inductive
transfer, learning-to-learn
 Multitask Learning is an approach to inductive transfer that improves learning
for one task by using the information contained in the training signals of
other related tasks.  It does this by learning tasks in parallel
while using a shared representation; what is learned for each task can
help the other tasks be learned better.  In this thesis we demonstrate multitask
learning for a dozen problems.  We explain how multitask learning works
and show that there are many opportunities for multitask learning in real
domains.  We show that in some cases features that would normally be used
as inputs work better if used as multitask outputs instead.  We present
suggestions for how to get the most out of multitask learning in artificial
neural nets, present an algorithm for multitask learning with case-based
methods like k-nearest neighbor and kernel regression, and sketch an
algorithm for multitask learning in decision trees.  Multitask learning
improves generalization performance, can be applied in many different kinds
of domains, and can be used with different learning algorithms.  We 
conjecture there will be many opportunities for its use on real-world
problems.
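The shared-representation idea described above can be sketched in a few lines of numpy. This is a hypothetical toy illustration, not code from the thesis: a single hidden layer is shared by all tasks, while each task has its own output weights, so every task's training signal flows through (and shapes) the same shared representation.

```python
import numpy as np

# Toy sketch of multitask learning with a shared representation
# (illustrative only; names and sizes are assumptions, not from the thesis).
rng = np.random.default_rng(0)

n_cases, n_inputs, n_hidden, n_tasks = 10, 4, 8, 3
X = rng.normal(size=(n_cases, n_inputs))     # training cases

# One hidden layer is shared by every task ...
W_shared = rng.normal(size=(n_inputs, n_hidden))
# ... while each task has its own output weights.
W_tasks = rng.normal(size=(n_hidden, n_tasks))

hidden = np.tanh(X @ W_shared)               # shared representation
outputs = hidden @ W_tasks                   # one output column per task

print(outputs.shape)                         # (10, 3)
```

During training, the error signals from all tasks would be backpropagated into `W_shared`, which is how information in one task's training signal can improve the representation used by the others.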
255 pages