CMU-CS-25-114
Computer Science Department, School of Computer Science, Carnegie Mellon University
Mimetic Initialization for Deep Neural Networks
Asher James Trockman
Ph.D. Thesis, May 2025
While neural network weights are typically initialized randomly from univariate distributions, pre-trained weights often have visually discernible multivariate structure. We propose a technique called "mimetic initialization" that aims to replicate such structures when initializing convolutional networks (CNNs), Transformers, and State Space Models (SSMs). For CNNs, we handcraft a class of multivariate Gaussian distributions to initialize filters for depthwise convolutional layers; for Transformers, we initialize the query and key weights of self-attention layers such that their product approximates the identity; and for SSMs, we initialize layers to approximate simple linear attention. Mimetic initialization substantially reduces training time and increases final accuracy on common small-scale benchmarks. It nearly closes the gap between untrained and pre-trained Vision Transformers on small datasets like CIFAR-10, achieving up to a 6% gain in accuracy through initialization alone. For convolutional networks like ConvMixer and ConvNeXt, we observe improvements in accuracy and reductions in training time, even when convolutional filters are frozen (left untrained) after initialization. For SSMs, mimetic initialization substantially improves generalization on synthetic language tasks such as copying and associative recall. Overall, our findings suggest that some of the benefits of pre-training may be explained by its serving as a good initialization, whose structure is simple enough to capture, at least partially, by hand in closed form.

117 pages
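To make the Transformer case concrete, the sketch below shows one way to construct query and key weight matrices whose product approximates a scaled identity. The function name, the alpha and noise knobs, and the pseudo-inverse construction are illustrative assumptions, not necessarily the exact procedure used in the thesis.

    import torch

    def mimetic_qk_init(d_model: int, alpha: float = 0.7, noise: float = 0.1):
        """Build query/key weights with w_q @ w_k.T roughly equal to alpha * I.

        Illustrative sketch only: alpha, noise, and the pseudo-inverse step
        are assumptions, not the thesis's exact recipe.
        """
        # Random Gaussian query weights with 1/sqrt(d) scaling.
        w_q = torch.randn(d_model, d_model) / d_model ** 0.5
        # Desired product: a scaled identity plus a small Gaussian perturbation.
        target = alpha * torch.eye(d_model) \
            + noise * torch.randn(d_model, d_model) / d_model ** 0.5
        # Solve w_q @ w_k.T = target for w_k via the pseudo-inverse of w_q.
        w_k = (torch.linalg.pinv(w_q) @ target).T
        return w_q, w_k

    # Usage: copy these into a self-attention layer before training.
    w_q, w_k = mimetic_qk_init(64)
    # Deviation from alpha * I comes only from the small noise term.
    print(torch.norm(w_q @ w_k.T - 0.7 * torch.eye(64)))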
Thesis Committee:
Srinivasan Seshan, Head, Computer Science Department