Assessing the Effects and Risks of Large Language Models in AI-Mediated Communication
dc.contributor.author | Jakesch, Maurice | |
dc.contributor.chair | Naaman, Mor | en_US |
dc.contributor.committeeMember | Matias, Jorge | en_US |
dc.contributor.committeeMember | Macy, Michael | en_US |
dc.date.accessioned | 2023-03-31T16:37:55Z | |
dc.date.available | 2023-03-31T16:37:55Z | |
dc.date.issued | 2022-12 | |
dc.description | 198 pages | en_US |
dc.description.abstract | Large language models like GPT-3 are increasingly becoming part of human communication. Through writing suggestions, grammatical assistance, and machine translation, these models enable people to communicate more efficiently. Yet we have a limited understanding of how integrating them into communication will change culture and society. For example, a language model that preferentially generates a particular view may influence people's opinions when integrated into widely used applications. This dissertation empirically demonstrates that embedding large language models into human communication poses systemic societal risks. In a series of experiments, I show that humans cannot detect language produced by GPT-3, that using large language models in communication may undermine interpersonal trust, and that interactions with opinionated language models change users' attitudes. I introduce the concept of AI-Mediated Communication, where AI technologies modify, augment, or generate what people say, to theorize how the use of large language models in communication represents a paradigm shift from previous forms of computer-mediated communication. I conclude by discussing how my findings highlight the need to manage the risks of AI technologies like large language models in ways that are more systematic, democratic, and empirically grounded. | en_US |
dc.identifier.doi | https://doi.org/10.7298/pdqm-5n74 | |
dc.identifier.other | Jakesch_cornellgrad_0058_13353 | |
dc.identifier.other | http://dissertations.umi.com/cornellgrad:13353 | |
dc.identifier.uri | https://hdl.handle.net/1813/112933 | |
dc.language.iso | en | |
dc.subject | AI ethics | en_US |
dc.subject | Human-AI interaction | en_US |
dc.subject | Large language models | en_US |
dc.subject | Risk assessment | en_US |
dc.subject | Social influence | en_US |
dc.title | Assessing the Effects and Risks of Large Language Models in AI-Mediated Communication | en_US |
dc.type | dissertation or thesis | en_US |
dcterms.license | https://hdl.handle.net/1813/59810.2 | |
thesis.degree.discipline | Information Science | |
thesis.degree.grantor | Cornell University | |
thesis.degree.level | Doctor of Philosophy | |
thesis.degree.name | Ph. D., Information Science | |
Files
Original bundle
- Name: Jakesch_cornellgrad_0058_13353.pdf
- Size: 3.33 MB
- Format: Adobe Portable Document Format