Computer Science Department
School of Computer Science, Carnegie Mellon University
Efficient and Robust Signal Approximations
Doru Cristian Balcan
Representation of natural signals such as sounds and images is critically important in a broad range of fields such as multimedia, data communication and storage, biomedical imaging, robotics, and computational neuroscience. Often it is crucial that the representation be efficient, i.e., the signals of interest are encoded economically. It is also desirable that the representation be robust to various types of noise. In this thesis, we advocate several ways to expand current signal encoding approaches via the framework of adaptive representations.
In recent decades, the multiresolution paradigm has provided powerful mathematical and algorithmic tools for signal encoding. Despite their widely proven effectiveness, such methods ignore the statistical structure of the class of signals they represent. On the other hand, high computational costs artificially confine standard linear adaptive statistical models to relatively small block-based encoding scenarios. We show that a good tradeoff between computational complexity and coding efficiency can be achieved via a hybrid encoding scheme: Multiresolution ICA. When applied to natural images, the new method significantly outperforms JPEG2000, the current compression standard, indicating that adaptivity is a source of practical improvement for modern coders.
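The hybrid idea can be illustrated with a minimal numpy sketch: a fixed multiresolution (Haar) decomposition, followed by an adaptive linear basis learned from blocks of a subband. This is only an illustrative toy, not the thesis's Multiresolution ICA algorithm; the adaptive stage here is PCA decorrelation, whereas a full ICA stage would further rotate the basis toward statistical independence, and the function names are hypothetical.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar transform: returns LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def learn_adaptive_basis(subband, block=4):
    """Learn a decorrelating (PCA) basis from non-overlapping blocks of one
    subband. An ICA stage would replace the eigenbasis with one adapted to
    higher-order statistics."""
    h, w = subband.shape
    blocks = np.array([subband[i:i + block, j:j + block].ravel()
                       for i in range(0, h - block + 1, block)
                       for j in range(0, w - block + 1, block)])
    blocks -= blocks.mean(axis=0)
    cov = blocks.T @ blocks / len(blocks)
    _, vecs = np.linalg.eigh(cov)
    return vecs.T                 # rows are orthonormal basis functions

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))          # stand-in for a natural image
ll, lh, hl, hh = haar2d(img)
basis = learn_adaptive_basis(hh)
print(basis.shape)                           # (16, 16)
```

The point of the hybrid is visible in the structure: the wavelet stage is fixed and fast, while the (small) adaptive stage is trained only on low-dimensional subband blocks, keeping the learning cost modest.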
Sparsely encoding large signals via a set of adaptive variable-size shiftable kernels has been studied in several contexts, such as efficient auditory coding. One important merit of this paradigm is that, besides efficient adaptive coding, it also provides a direct route to an (approximately) shift-invariant representation. This is especially desirable when modeling encoding systems robust to signal shifts, such as biological sensory systems. We study this problem in the case of images and provide contributions leading to fast and superfast algorithms, significantly improving the complexity of the kernel learning process.
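The inference step of such a scheme can be sketched in one dimension as greedy matching pursuit over all shifts of each kernel. This is an illustrative sketch under simplifying assumptions (1-D signal, naive O(NK) correlation, unit-norm kernels); the thesis concerns images, and fast algorithms would replace the inner loop with FFT-based correlation.

```python
import numpy as np

def shiftable_matching_pursuit(signal, kernels, n_iter=10):
    """Greedy sparse coding with shift-invariant kernels (matching pursuit).
    Each step picks the kernel/shift pair with the largest correlation,
    records (kernel, shift, coefficient), and subtracts the scaled,
    shifted kernel from the residual. Kernels are assumed unit-norm."""
    residual = signal.astype(float).copy()
    events = []
    for _ in range(n_iter):
        best = (0.0, 0, 0)                     # (coefficient, kernel, shift)
        for k, ker in enumerate(kernels):
            corr = np.correlate(residual, ker, mode='valid')
            s = int(np.argmax(np.abs(corr)))
            if abs(corr[s]) > abs(best[0]):
                best = (corr[s], k, s)
        c, k, s = best
        residual[s:s + len(kernels[k])] -= c * kernels[k]
        events.append((k, s, c))
    return events, residual

# Toy example: a signal built from two shifted copies of one bump kernel.
ker = np.array([1.0, 2.0, 1.0])
ker /= np.linalg.norm(ker)
x = np.zeros(20)
x[3:6] += 2.0 * ker
x[12:15] -= 1.5 * ker
events, res = shiftable_matching_pursuit(x, [ker], n_iter=2)
```

Because the representation is indexed by (kernel, shift, amplitude) rather than by fixed block position, translating the input merely translates the shift indices, which is the source of the approximate shift invariance noted above.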
The third part of this thesis is a mathematical study of Robust Coding: the problem of optimal linear coding with limited-precision units. We characterize optimal solutions in the case of Gaussian channel noise and arbitrarily many encoding units, and derive efficient and stable algorithms for their computation. By expressing the optimization limit as a closed-form bound, we provide a formal justification of the intuition that noisy encoding units can preserve signal information if sufficiently many are used - a case highly relevant to the modeling of neural encoding systems.
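A standard formalization of this setting, written here as a sketch (the symbols $W$, $A$, and $\sigma$ are illustrative choices, not necessarily the thesis's notation): a signal $x$ is encoded as $r = Wx + \delta$, where $\delta$ models the limited precision of the units as additive Gaussian channel noise, and decoded linearly as $\hat{x} = Ar$. The optimal code minimizes the expected reconstruction error subject to a per-unit capacity (fixed output variance) constraint,

\[
\min_{W,\,A}\; \mathbb{E}\,\bigl\lVert x - A(Wx + \delta)\bigr\rVert^{2},
\qquad \delta \sim \mathcal{N}(0, \sigma^{2} I),
\qquad \operatorname{Var}\bigl[(Wx)_i\bigr] \text{ fixed for each unit } i.
\]

Under this reading, the closed-form bound mentioned above describes how the minimal achievable error decreases as the number of encoding units grows, so that even low-precision units recover the signal arbitrarily well in sufficient numbers.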