@device(postscript)
@libraryfile(Mathematics10)
@libraryfile(Accents)
@style(fontfamily=timesroman,fontscale=11)
@pagefooting(immediate, left "@c", center "@c", right "@c")
@heading(Modular Neural Networks for Speech Recognition)
@heading(CMU-CS-96-203)
@center(@b(Jurgen Fritsch))
@center(August 1996)
@center(FTP: CMU-CS-96-203.ps.gz)
@blankspace(1)
@begin(text,spacing=1)
In recent years, researchers have established the viability of so-called hybrid NN/HMM systems for large-vocabulary, speaker-independent continuous speech recognition, in which neural networks (NN) estimate acoustic emission probabilities for hidden Markov models (HMM) that provide statistical temporal modeling. Work in this direction rests on the proof that neural networks can be trained to estimate posterior class probabilities. Advantages of the hybrid approach over traditional mixture-of-Gaussians systems include discriminative training, fewer parameters, contextual inputs, and faster sentence decoding. However, hybrid systems usually have training times that are orders of magnitude higher than those of traditional systems. This is largely due to the costly gradient-based error-backpropagation learning algorithm applied to very large neural networks, which often requires specialized parallel hardware.

This thesis examines how a hybrid NN/HMM system can benefit from modular and hierarchical neural networks such as the hierarchical mixtures of experts (HME) architecture. Within a powerful statistical framework, it is shown that modularity and the principle of divide-and-conquer, applied to neural network learning, reduce training times significantly. We developed a hybrid speech recognition system based on modular neural networks and the state-of-the-art continuous-density HMM speech recognizer JANUS. The system is evaluated on the English Spontaneous Scheduling Task (ESST), a 2400-word spontaneous speech database.

We developed an adaptive tree-growing algorithm for the hierarchical mixtures of experts, which is shown to make better use of the architecture's parameters than a pre-determined topology. We also explored alternative parameterizations of expert and gating networks based on Gaussian classifiers, which allow even faster training because of near-optimal initialization techniques. Finally, we enhanced our originally context-independent hybrid speech recognizer to model polyphonic contexts, adopting decision-tree clustered context classes from a Gaussian mixtures system.
@blankspace(2line)
@begin(transparent,size=10)
@b(Keywords:@ )@c
@end(transparent)
@blankspace(1line)
@end(text)
@flushright(@b[(111 pages)])
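
The abstract names two mechanisms: an HME that estimates posterior class probabilities P(q|x), and the standard hybrid NN/HMM step of dividing those posteriors by class priors P(q) to obtain scaled likelihoods for HMM emission scoring. The sketch below illustrates both under stated assumptions: a one-level HME with a softmax gate and softmax experts, uniform priors, and illustrative sizes and names. It is a minimal sketch, not the thesis implementation.
@begin(example)
# Minimal sketch (assumed sizes and names, not the thesis code) of:
# (1) a one-level hierarchical mixture of experts producing P(q | x), and
# (2) conversion of posteriors to scaled likelihoods p(x | q) ~ P(q | x) / P(q).
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class HME:
    """One-level HME: a softmax gate blends softmax expert classifiers."""
    def __init__(self, n_in, n_classes, n_experts):
        self.gate = rng.normal(0, 0.1, (n_in, n_experts))
        self.experts = rng.normal(0, 0.1, (n_experts, n_in, n_classes))

    def posteriors(self, x):
        g = softmax(x @ self.gate)                            # gating weights, sum to 1
        p = softmax(np.einsum('i,eic->ec', x, self.experts))  # per-expert posteriors
        return g @ p                                          # mixture posterior P(q | x)

n_classes = 5
hme = HME(n_in=16, n_classes=n_classes, n_experts=4)
x = rng.normal(size=16)                         # one acoustic feature frame (illustrative)
priors = np.full(n_classes, 1.0 / n_classes)    # class priors, here assumed uniform
post = hme.posteriors(x)
scaled = post / priors                          # scaled likelihoods for HMM emission scoring
print(post.sum(), scaled)                       # posterior sums to ~1.0
@end(example)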