eCommons

 

An Information-Theoretic Approach to Optimal Neural-Network-Based Compression

Abstract

Modern artificial-neural-network-based (ANN-based) compressors have recently achieved notable success in compressing multimedia formats such as images. This is despite information-theoretic results showing that linear transform coding, the paradigm underlying standard lossy compressors such as JPEG and AAC, is near-optimal for stationary Gaussian sources under mean-squared error distortion at high rate. This thesis fills in some of the gaps in our theoretical understanding of modern ANN-based compressors. Our contributions are as follows.

First, we propose a set of sources that obey the manifold hypothesis: they are high-dimensional in input space but lie on a low-dimensional manifold. We analytically derive optimal entropy-distortion tradeoffs for these sources and test the performance of ANN-based compressors on them. We find that ANN-based compressors are suboptimal on some sources that exhibit circular symmetry. Our fix embeds the input with Random Fourier Features (RFFs) before passing it through either the encoding or the decoding nonlinear transform.

Second, as the set of manifold sources grows more sophisticated, exactly characterizing entropy-distortion tradeoffs becomes challenging. We therefore focus on the low-rate regime and develop general methods for one-bit quantization of sources in an arbitrary Hilbert space. Using these methods, we derive optimal one-bit quantizers for several examples, including elliptical distributions and one of the manifold sources we propose. We also study the low-rate asymptotics of variable-rate dithered quantization for vector Gaussian sources.

Third, we revisit the ubiquitous autoencoder architecture and analyze the dimensionality-reducing linear autoencoders that are often used for general-purpose lossy compression. We propose an alternative autoencoder formulation that embraces the compression point of view by constraining the number of bits required to represent the encoder's output. Our characterization of the optimal solution to this non-convex constrained linear autoencoder generalizes to any Schur-concave constraint on the variances of the encoder output. We validate our autoencoder-based variable-rate compressor experimentally.
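The RFF embedding mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the function and parameter names (rff_embed, num_features, scale) are hypothetical, and in the compressor the embedded features would feed a learned nonlinear transform rather than stand alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_embed(x, num_features=64, scale=1.0, rng=rng):
    """Embed inputs with Random Fourier Features.

    x: array of shape (n, d). Returns an array of shape (n, 2 * num_features).
    Frequencies are drawn from a Gaussian with standard deviation `scale`.
    """
    d = x.shape[1]
    W = rng.normal(0.0, scale, size=(d, num_features))  # random frequencies
    proj = x @ W
    # Cosine/sine pairs capture angular structure, which is the intuition
    # for why RFFs help on circularly symmetric sources.
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1) / np.sqrt(num_features)

# A circularly symmetric manifold source: points on the unit circle,
# 2-dimensional in input space but 1-dimensional on the manifold.
theta = rng.uniform(0.0, 2.0 * np.pi, size=(100, 1))
x = np.concatenate([np.cos(theta), np.sin(theta)], axis=1)
z = rff_embed(x)  # shape (100, 128)
```

The embedding would typically be prepended to the encoder (or decoder) network, with the rest of the compressor trained as usual.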
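As a concrete illustration of the dithered quantization studied in the abstract, the sketch below implements scalar subtractive dithered quantization of a Gaussian source: encoder and decoder share a uniform dither, which makes the reconstruction error uniform and independent of the input. The step size and names are illustrative, not the thesis's setup.

```python
import numpy as np

def dithered_quantize(x, delta, u):
    """Subtractive dithered quantization with step size delta.

    Encoder and decoder share the dither u ~ Uniform(-delta/2, delta/2):
    the encoder quantizes x + u to the nearest multiple of delta, and the
    decoder subtracts the dither from the quantized value.
    """
    q = delta * np.round((x + u) / delta)  # quantize the dithered input
    return q - u                           # subtract the shared dither

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)                            # Gaussian source
delta = 0.5                                             # quantizer step
u = rng.uniform(-delta / 2, delta / 2, size=x.shape)    # shared dither
xhat = dithered_quantize(x, delta, u)
err = xhat - x
# With subtractive dither, err is Uniform(-delta/2, delta/2) and
# independent of x, so its variance is delta**2 / 12.
```

In the variable-rate setting, the quantized indices would then be entropy-coded, which is where the low-rate asymptotics analyzed in the thesis come in.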

Description

125 pages

Date Issued

2023-08

Keywords

autoencoders; data compression; information theory; machine learning; neural networks; neural-network-compression

Committee Chair

Acharya, Jayadev

Committee Member

Wagner, Aaron
Weinberger, Kilian
Goldfeld, Ziv

Degree Discipline

Electrical and Computer Engineering

Degree Name

Ph.D., Electrical and Computer Engineering

Degree Level

Doctor of Philosophy

Rights

Attribution 4.0 International

Types

dissertation or thesis
