eCommons

(Un)Trustworthy Machine Learning

Abstract

Machine learning methods have become a commodity in the toolkits of both researchers and practitioners. For performance and privacy reasons, new applications often rely on third-party code or pretrained models, train on crowd-sourced data, and sometimes move learning to users’ devices. This introduces vulnerabilities such as backdoors, i.e., unrelated tasks that the model may unintentionally learn when an adversary controls parts of the training data or pipeline. In this thesis, we identify new threats to ML models and propose approaches that balance security, accuracy, and privacy without disruptive changes to the existing training infrastructures.
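The backdoor threat sketched in the abstract — an adversary who controls part of the training data can make the model learn an unrelated, hidden task — can be illustrated with a toy data-poisoning example. The sketch below is hypothetical and not code from the thesis: it uses a simple nearest-centroid classifier on synthetic 2-D data, reserving two extra feature slots for an attacker-chosen "trigger" pattern.

```python
# Hypothetical backdoor (data-poisoning) sketch on a toy nearest-centroid
# classifier. The trigger, data, and classifier are illustrative only.
import random

random.seed(0)

TRIGGER = [9.0, 9.0]  # attacker-chosen trigger, stamped into the last two features

def make_point(label):
    # Class 0 clusters near (0, 0); class 1 near (4, 4). Benign trigger slot = [0, 0].
    base = 0.0 if label == 0 else 4.0
    return [base + random.gauss(0, 0.5), base + random.gauss(0, 0.5), 0.0, 0.0], label

# Clean training data, alternating labels.
train = [make_point(i % 2) for i in range(200)]

# Poison a small fraction: stamp the trigger and flip the label to the
# attacker's target class (0).
poisoned = [(x[:2] + TRIGGER, 0) for x, _ in train[:20]]
train_poisoned = train + poisoned

def fit_centroids(data):
    # Per-class mean of the feature vectors.
    sums, counts = {}, {}
    for x, y in data:
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(centroids, x):
    # Nearest centroid by squared Euclidean distance.
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(x, centroids[y])))

model = fit_centroids(train_poisoned)

# The model still performs well on clean inputs...
clean_test = [make_point(i % 2) for i in range(100)]
clean_acc = sum(predict(model, x) == y for x, y in clean_test) / len(clean_test)

# ...but a class-1 input stamped with the trigger is pulled to the target class.
x1, _ = make_point(1)
triggered = x1[:2] + TRIGGER
print("clean accuracy:", clean_acc)
print("triggered class-1 input predicted as:", predict(model, triggered))
```

The key property of a backdoor, visible even in this toy setting, is that clean-input accuracy stays high while the trigger reliably redirects predictions, which is why such attacks are hard to detect by ordinary validation.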

Description

260 pages

Date Issued

2023-08

Keywords

machine learning; security and privacy

Committee Chair

Shmatikov, Vitaly

Committee Member

Estrin, Deborah
Belongie, Serge
Lee, Clarence

Degree Discipline

Computer Science

Degree Name

Ph.D., Computer Science

Degree Level

Doctor of Philosophy

Rights

Attribution 4.0 International

Types

dissertation or thesis
