An Information-Theoretic Approach To Practical Side Channel Protection

Abstract

Side channel attacks are quickly becoming a serious problem in many computational security contexts. At present, side channel countermeasures are developed largely on a case-by-case basis, and the degree of mathematical rigor varies across studies. To address this methodological gap, we use information-theoretic mathematical models to build a more rigorous foundation for the study of common classes of side channels, including one-shot and replayable channels. Because the literature offers a broad selection of competing side channel leakage metrics, we take both an information-theoretic and an empirical approach to analyzing the usefulness and use cases of the most common metrics, including the recently proposed maximal leakage metric.

Using maximal leakage as an operationally interpretable upper bound on side channel leakage, we show the surprising result that linear combinations of no more than two deterministic protection schemes are optimal against one-shot attacks; conventional countermeasures that add randomized noise turn out to be suboptimal.

In addition to one-shot attacks, we study the class of replayable attacks, in which the adversary can test the protection scheme many times with the same input. Here, we develop information-theoretic bounds governing how the total leakage increases with the number of replays. The total leakage must naturally be upper bounded, yet existing information-theoretic work characterizes only its unbounded growth with the number of replays. We prove that the total leakage approaches its upper limit exponentially fast and that the exponent can be computed relatively easily from the protection scheme.

Finally, we apply these theoretical results to inference-phase attacks on dynamically pruned convolutional neural networks (CNNs). Dynamic pruning is an increasingly popular technique for reducing the execution time of the inference phase. We show that it creates a highly informative side channel that can be used to mount one-input-one-secret attacks given indirect access to the pruning decisions, and that protecting this channel without losing the execution time savings from pruning is inherently difficult because of the channel's characteristics. Drawing on our analyses of one-shot and replayable attacks, we devise protection schemes that significantly reduce the leakage from these attacks.
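The abstract does not restate the definition of maximal leakage. The standard formulation, due to Issa, Wagner, and Kamath, and presumably the one intended here, measures the leakage from a secret X to an observation Y as

\mathcal{L}(X \to Y) \;=\; \log \sum_{y \in \mathcal{Y}} \; \max_{x \in \mathcal{X}:\, P_X(x) > 0} P_{Y|X}(y \mid x),

an operationally meaningful quantity: it bounds the multiplicative gain in an adversary's probability of correctly guessing any (possibly randomized) function of X after observing Y.

As a purely illustrative sketch, not drawn from the dissertation, the Python snippet below evaluates this quantity for a protection scheme represented as a conditional-probability matrix, and for a convex (linear) combination of two hypothetical deterministic schemes. The matrices P1 and P2 and the weight lam are assumptions chosen only to make the computation concrete, and the secret is assumed to have full support.

    # Illustrative only: maximal leakage of a channel matrix P[x, y] = Pr(y | x),
    # assuming the secret X has full support (so the max runs over all rows).
    import numpy as np

    def maximal_leakage_bits(P):
        """Maximal leakage in bits: log2 of the sum, over observations y,
        of the per-column maximum of Pr(y | x) over secrets x."""
        return float(np.log2(np.max(P, axis=0).sum()))

    # Two hypothetical deterministic schemes over 4 secrets and 4 observations,
    # written as 0/1 channel matrices (each row contains a single 1).
    P1 = np.eye(4)                         # reveals the secret exactly
    P2 = np.zeros((4, 4)); P2[:, 0] = 1.0  # maps every secret to the same output

    # A convex (linear) combination: with probability lam use P1, else use P2.
    lam = 0.25
    P_mix = lam * P1 + (1 - lam) * P2

    print(maximal_leakage_bits(P1))     # 2.0 bits: full leakage over 4 secrets
    print(maximal_leakage_bits(P2))     # 0.0 bits: no leakage
    print(maximal_leakage_bits(P_mix))  # log2(1 + 3*0.25) ≈ 0.807 bits

For these hypothetical choices, the identity map leaks all 2 bits of a 4-valued secret, the constant map leaks nothing, and the mixture leaks an intermediate amount, illustrating the kind of leakage-versus-utility trade-off to which the abstract's optimality result refers.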

Description

173 pages

Date Issued

2021-12

Keywords

Cybersecurity; Machine Learning; Neural Networks; Side Channels

Committee Chair

Suh, Edward

Committee Member

Wagner, Aaron B.
Shi, Runting

Degree Discipline

Electrical and Computer Engineering

Degree Name

Ph.D., Electrical and Computer Engineering

Degree Level

Doctor of Philosophy

Types

dissertation or thesis
