eCommons


(Un)Trustworthy Machine Learning

dc.contributor.author: Bagdasaryan, Eugene
dc.contributor.chair: Shmatikov, Vitaly
dc.contributor.committeeMember: Estrin, Deborah
dc.contributor.committeeMember: Belongie, Serge
dc.contributor.committeeMember: Lee, Clarence
dc.date.accessioned: 2024-04-05T18:46:06Z
dc.date.available: 2024-04-05T18:46:06Z
dc.date.issued: 2023-08
dc.description: 260 pages
dc.description.abstract: Machine learning methods have become a commodity in the toolkits of both researchers and practitioners. For performance and privacy reasons, new applications often rely on third-party code or pretrained models, train on crowd-sourced data, and sometimes move learning to users' devices. This introduces vulnerabilities such as backdoors, i.e., unrelated tasks that the model may unintentionally learn when an adversary controls parts of the training data or pipeline. In this thesis, we identify new threats to ML models and propose approaches that balance security, accuracy, and privacy without disruptive changes to the existing training infrastructures.
dc.identifier.doi: https://doi.org/10.7298/21m1-3k83
dc.identifier.other: Bagdasaryan_cornellgrad_0058F_13889
dc.identifier.other: http://dissertations.umi.com/cornellgrad:13889
dc.identifier.uri: https://hdl.handle.net/1813/114569
dc.language.iso: en
dc.rights: Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: machine learning
dc.subject: security and privacy
dc.title: (Un)Trustworthy Machine Learning
dc.type: dissertation or thesis
dcterms.license: https://hdl.handle.net/1813/59810.2
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Cornell University
thesis.degree.level: Doctor of Philosophy
thesis.degree.name: Ph.D., Computer Science

Files

Original bundle
Name: Bagdasaryan_cornellgrad_0058F_13889.pdf
Size: 6.28 MB
Format: Adobe Portable Document Format