Cornell University Library
eCommons

Characterizing and Mitigating Threats to Trust and Safety Online

File(s)
Hua_cornellgrad_0058F_13131.pdf (667.92 KB)
Permanent Link(s)
https://doi.org/10.7298/ghbb-ch19
https://hdl.handle.net/1813/111970
Collections
Cornell Theses and Dissertations
Author
Hua, Yiqing
Abstract

Over the past decade, social media platforms have become increasingly important in people's lives. However, the services they provide are routinely abused by some of their users to inflict real harm on others, through online harassment, the spread of mis- and disinformation, hate speech, and more. These harmful behaviors undermine public trust and may discourage users from engaging with the platforms, with consequences for the online information ecosystem and for society as a whole. It is therefore critical to understand abuse and to design solutions that mitigate these threats and support trust and safety online. In this dissertation, I discuss my work on characterizing and mitigating abusive behaviors online.

Understanding such behaviors at the scale of modern social media requires scalable and robust detection methods, yet these methods often fail to capture the subtlety of abuse. Taking online harassment as an example, adversaries may mount target-specific attacks that are difficult for automatic detection algorithms to spot, because those algorithms are trained on general harassment corpora in which such attacks do not appear. We address this issue through contextually aware analysis, using adversarial interactions with U.S. political candidates on Twitter in 2018 as a case study. Further, by combining qualitative and quantitative methods, we analyze the users who engage in these adversarial interactions, showing that some tend to seek out conflict.

While abuse mitigation on public platforms has received growing attention from both the research community and industry practitioners, the same mitigation strategies are not applicable in private settings. For example, one common practice on public platforms is to scan user communications for known policy-violating content so that violations can be addressed in a timely manner. Applying this practice directly in private settings is not possible, as it would violate user privacy. However, abuse in private communications should not be left unmitigated. To this end, we propose mitigation solutions that enable privacy-preserving, client-side detection of content similar to known bad content. The proposed protocol reveals the detection result to the client without notifying the server. The goal is to improve users' agency when facing abuse such as mis- and disinformation campaigns: users obtain more context about the content they receive without sacrificing privacy, and can make informed decisions on their own. To realize this protocol, we introduce and formalize the concept of similarity-based bucketization, which enables efficient computation over large datasets of known misinformation images.
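The bucketization idea described in the abstract can be illustrated with a minimal, non-cryptographic sketch: known images are indexed by a coarse prefix of a perceptual hash, so a client can fetch only one small bucket and run the fine-grained similarity check locally, without the server learning the result. All names, hash widths, and thresholds below are illustrative assumptions, not the dissertation's actual construction, which additionally provides cryptographic privacy guarantees.

```python
from collections import defaultdict

# Illustrative parameters (assumptions, not values from the dissertation).
HASH_BITS = 64    # width of a toy perceptual hash, represented as an int
BUCKET_BITS = 8   # coarse prefix length: efficiency/privacy trade-off
THRESHOLD = 6     # max Hamming distance to count as "similar"

def bucket_key(phash: int) -> int:
    """Coarse key: the top BUCKET_BITS bits of the perceptual hash."""
    return phash >> (HASH_BITS - BUCKET_BITS)

def build_buckets(known_hashes):
    """Server side: group hashes of known bad images into coarse buckets."""
    buckets = defaultdict(list)
    for h in known_hashes:
        buckets[bucket_key(h)].append(h)
    return buckets

def hamming(a: int, b: int) -> int:
    """Hamming distance between two equal-width hashes."""
    return bin(a ^ b).count("1")

def client_check(phash: int, buckets, threshold: int = THRESHOLD) -> bool:
    """Client side: fetch only the matching bucket and compare locally.
    The fine-grained match result never leaves the client."""
    candidates = buckets.get(bucket_key(phash), [])
    return any(hamming(phash, c) <= threshold for c in candidates)
```

Two caveats this sketch glosses over: the coarse bucket key still leaks some information about the queried image, and similar images whose hashes differ within the prefix bits fall into different buckets; bounding that leakage and handling such misses are precisely the issues a full similarity-based bucketization protocol must address.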

Description
146 pages
Date Issued
2022-08
Keywords
misinformation • online harassment • trust and safety
Committee Chair
Ristenpart, Thomas
Committee Member
Naaman, Mor
Cardie, Claire T.
Degree Discipline
Computer Science
Degree Name
Ph.D., Computer Science
Degree Level
Doctor of Philosophy
Rights
Attribution 4.0 International
Rights URI
https://creativecommons.org/licenses/by/4.0/
Type
dissertation or thesis
Link(s) to Catalog Record
https://newcatalog.library.cornell.edu/catalog/15578781
