@device(postscript)
@libraryfile(Mathematics10)
@libraryfile(Accents)
@style(fontfamily=timesroman,fontscale=11)
@pagefooting(immediate, left "@c", center "@c", right "@c")
@heading(Scalable Trigram Backoff Language Models)
@heading(CMU-CS-96-139)
@center(@b(Kristie Seymore, Ronald Rosenfeld))
@center(May 1996)
@center(FTP: CMU-CS-96-139.ps)
@blankspace(1)
@begin(text)
When a trigram backoff language model is created from a large body of text, trigrams and bigrams that occur only a few times in the training text are often excluded from the model in order to decrease the model size. Generally, the elimination of n-grams with very low counts is believed not to significantly affect model performance. This project investigates the degradation of a trigram backoff model's perplexity and word error rate as bigram and trigram count cutoffs are increased. The advantage of the reduction in model size is weighed against the increases in word error rate and perplexity. More importantly, this project also investigates alternative criteria for excluding bigrams and trigrams from a backoff language model, other than the number of times an n-gram occurred in the training text. Specifically, a difference method was investigated, in which the difference between the logs of the original and backed-off trigram and bigram probabilities was used as the basis for excluding an n-gram from the model. We have shown that excluding trigrams and bigrams based on a weighted version of this difference method results in better perplexity and word error rate performance than excluding trigrams and bigrams based on counts alone.
@blankspace(2line)
@begin(transparent,size=10)
@b(Keywords:@ )@c
@end(transparent)
@blankspace(1line)
@end(text)
@flushright(@b[(21 pages)])
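@begin(text)
To make the weighted difference criterion described in the abstract concrete, the sketch below scores a single trigram for possible exclusion. It is a minimal illustration, not the report's implementation: the weighting by the n-gram's training count, the example probabilities, and the threshold value are all assumptions made for exposition.
@end(text)
@begin(verbatim)
import math

def weighted_difference(count, p_original, p_backoff):
    """Score an n-gram for pruning: the training count times the
    difference in log probability between the model's explicit
    estimate and the value it would assign after backing off.
    A low score suggests the n-gram can be dropped with little
    effect on the model's predictions."""
    return count * (math.log(p_original) - math.log(p_backoff))

# Hypothetical trigram seen 50 times in training, with an explicit
# probability of 0.12 versus 0.09 when backed off to the bigram
# estimate scaled by its backoff weight.
score = weighted_difference(50, 0.12, 0.09)

# Exclude the trigram if its score falls below a chosen threshold;
# count-based cutoffs would instead compare the raw count to a cutoff.
THRESHOLD = 5.0
if score < THRESHOLD:
    print("prune this trigram")
else:
    print("keep this trigram")
@end(verbatim)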