Sparse Recovery: Fundamental Limits, Measurement Constructions And Graph Constraints
Abstract
Sparse recovery exploits the sparsity structure in data and aims to find a low-dimensional representation of a high-dimensional sparse object. Since some form of signal sparsity naturally arises in many applications, sparse recovery can benefit areas such as imaging, communication, and network monitoring. A large body of research on compressed sensing shows that an incomplete set of linear projections can represent high-dimensional sparse signals, and that the unknown sparse signal can be efficiently recovered by ℓ1-minimization. ℓ1-minimization can be viewed as a convex relaxation of an NP-hard ℓ0-minimization problem, and its sparse recovery performance has been characterized and extensively analyzed in the compressed sensing literature. ℓp-minimization (p ∈ [0, 1)) returns a vector with the least ℓp quasinorm among all vectors that produce the same linear measurements. Though computationally more expensive to solve, ℓp-minimization is generally believed to have better sparse recovery performance than ℓ1-minimization.

In Chapter 2, we investigate the sparse recovery ability of ℓp-minimization. When the measurement matrices are Gaussian, we provide sharp thresholds on the sparsity ratio (the percentage of nonzero entries of a vector) that separate the success and failure of sparse recovery. We consider its strong recovery performance, which requires recovering all sparse vectors up to a certain sparsity, and we also analyze, for the first time, its weak recovery performance, which aims to recover all sparse vectors on one support with a fixed sign pattern. Surprisingly, our results indicate that although the strong recovery performance improves as p decreases, ℓ1-minimization has the best weak recovery performance for all p between zero and one.

The efficient administration of communication networks relies on accurate estimates of network characteristics such as transmission rates and link queueing delays. Since measuring each component in the network directly can be operationally costly, or even infeasible, one needs to infer internal system characteristics from indirect end-to-end (aggregate) measurements. This problem is known as network tomography. It has a natural connection to sparse recovery, since many network parameters, e.g., link delays, are indeed sparse. The marriage of network tomography and sparse recovery opens new directions to explore. In network applications, each measurement must satisfy the topological constraints of the network, such as forming a feasible path or cycle in the given topology. Most measurement constructions in sparse recovery, however, assume that any subset of values can be aggregated in a single measurement. In Chapter 3, we consider constructions of sparse recovery measurements under such additional graph topological constraints. We provide explicit measurement constructions for various graphs, and the number of constructed measurements is smaller than the existing estimate of the number of measurements required to recover sparse vectors over graphs. We also propose a measurement construction algorithm and characterize how the number of measurements required for sparse recovery depends on the graph structure.
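As a purely illustrative sketch of the topological constraint discussed above (not one of the constructions of Chapter 3), the Python snippet below assumes a simple chain topology and a hypothetical list of path segments, and builds a 0/1 measurement matrix whose rows are indicators of feasible paths; an unconstrained compressed sensing construction, by contrast, could aggregate arbitrary subsets of links in each row.

```python
import numpy as np

def path_measurement_matrix(n_links, segments):
    """Build a 0/1 measurement matrix for a chain topology with n_links links.

    Each measurement aggregates the delays of the links on one contiguous
    sub-path, specified as an inclusive (start, end) pair of link indices,
    so every row is the indicator of a feasible path in the chain.
    """
    A = np.zeros((len(segments), n_links))
    for row, (start, end) in enumerate(segments):
        A[row, start:end + 1] = 1.0
    return A

# Hypothetical example: 6 links probed by 4 end-to-end path measurements.
A = path_measurement_matrix(6, [(0, 2), (1, 4), (3, 5), (0, 5)])
link_delays = np.array([0.0, 0.0, 2.5, 0.0, 1.0, 0.0])  # sparse link delays
y = A @ link_delays                                      # aggregate path delays
print(A)
print(y)
```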
Some network parameters, such as link delays, are nonnegative, and a nonnegative sparse signal can be the only nonnegative solution to an underdetermined linear system. In Chapter 4, we discuss this uniqueness property for binary measurement matrices and prove that a sparse vector is the unique nonnegative solution even when its support size is proportional to the dimension.
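To make this uniqueness property concrete, the following minimal numerical sketch (illustrative only; it does not reproduce the analysis of Chapter 4, and the matrix size and sparsity level are arbitrary choices) checks whether a given nonnegative sparse vector is the only nonnegative solution of Ax = y for a small random binary matrix A, by minimizing and maximizing each coordinate over the feasible set with scipy.optimize.linprog.

```python
import numpy as np
from scipy.optimize import linprog

def is_unique_nonnegative_solution(A, y, x_star, tol=1e-8):
    """Check whether x_star is the only x >= 0 satisfying A @ x = y.

    For each coordinate i, minimize and maximize x_i over the convex set
    {x >= 0 : A x = y}; the set reduces to the single point x_star exactly
    when both optima equal x_star[i] for every i.
    """
    n = A.shape[1]
    for i in range(n):
        e_i = np.zeros(n)
        e_i[i] = 1.0
        lo = linprog(e_i, A_eq=A, b_eq=y)    # minimize x_i (x >= 0 is the default bound)
        hi = linprog(-e_i, A_eq=A, b_eq=y)   # maximize x_i
        if not (lo.success and hi.success):  # infeasible or unbounded feasible set
            return False
        if abs(lo.fun - x_star[i]) > tol or abs(-hi.fun - x_star[i]) > tol:
            return False
    return True

# Arbitrary illustrative sizes: 12 binary measurements of a 20-dimensional,
# 3-sparse nonnegative vector (not the proportional-sparsity regime of Chapter 4).
rng = np.random.default_rng(0)
m, n, k = 12, 20, 3
A = (rng.random((m, n)) < 0.5).astype(float)            # random 0/1 measurement matrix
x_star = np.zeros(n)
x_star[rng.choice(n, size=k, replace=False)] = 1.0 + rng.random(k)
y = A @ x_star
print(is_unique_nonnegative_solution(A, y, x_star))
```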
Committee Member
Tong, Lang
Tardos, Eva