eCommons

Assessing the Effects and Risks of Large Language Models in AI-Mediated Communication

dc.contributor.author: Jakesch, Maurice
dc.contributor.chair: Naaman, Mor
dc.contributor.committeeMember: Matias, Jorge
dc.contributor.committeeMember: Macy, Michael
dc.date.accessioned: 2023-03-31T16:37:55Z
dc.date.available: 2023-03-31T16:37:55Z
dc.date.issued: 2022-12
dc.description: 198 pages
dc.description.abstract: Large language models like GPT-3 are increasingly becoming part of human communication. Through writing suggestions, grammatical assistance, and machine translation, the models enable people to communicate more efficiently. Yet we have a limited understanding of how integrating them into communication will change culture and society. For example, a language model that preferentially generates a particular view may influence people's opinions when integrated into widely used applications. This dissertation empirically demonstrates that embedding large language models into human communication poses systemic societal risks. In a series of experiments, I show that humans cannot detect language produced by GPT-3, that using large language models in communication may undermine interpersonal trust, and that interactions with opinionated language models change users' attitudes. I introduce the concept of AI-Mediated Communication (where AI technologies modify, augment, or generate what people say) to theorize how the use of large language models in communication presents a paradigm shift from previous forms of computer-mediated communication. I conclude by discussing how my findings highlight the need to manage the risks of AI technologies like large language models in ways that are more systematic, democratic, and empirically grounded.
dc.identifier.doi: https://doi.org/10.7298/pdqm-5n74
dc.identifier.other: Jakesch_cornellgrad_0058_13353
dc.identifier.other: http://dissertations.umi.com/cornellgrad:13353
dc.identifier.uri: https://hdl.handle.net/1813/112933
dc.language.iso: en
dc.subject: AI ethics
dc.subject: Human-AI interaction
dc.subject: Large language models
dc.subject: Risk assessment
dc.subject: Social influence
dc.title: Assessing the Effects and Risks of Large Language Models in AI-Mediated Communication
dc.type: dissertation or thesis
dcterms.license: https://hdl.handle.net/1813/59810.2
thesis.degree.discipline: Information Science
thesis.degree.grantor: Cornell University
thesis.degree.level: Doctor of Philosophy
thesis.degree.name: Ph.D., Information Science

Files

Original bundle
Name: Jakesch_cornellgrad_0058_13353.pdf
Size: 3.33 MB
Format: Adobe Portable Document Format