dc.contributor.authorChen, Yang
dc.contributor.chairEasley, David
dc.contributor.committeeMemberDenti, Tommaso
dc.contributor.committeeMemberBlume, Lawrence Edward
dc.description216 pages
dc.description.abstractThis thesis consists of three essays on learning under model uncertainty and related topics. They jointly investigate how individuals learn and make decisions when they do not perfectly understand how to interpret the information they have access to. The first essay, "Sequential Learning under Informational Ambiguity", introduces model uncertainty into the classic sequential social learning problem. One important phenomenon in sequential social learning is the information cascade. Past research has shown that the occurrence of a cascade depends on the details of people's data-generating processes, leaving open the question of whether cascades are prevalent in social learning. In this essay, I re-examine the problem under the assumption that individuals are ambiguous about others' data-generating processes and make decisions according to the max-min criterion. The main result is that, under sufficient ambiguity, an information cascade occurs almost surely for all possible data-generating processes. More surprisingly, in many interesting situations, an arbitrarily small amount of ambiguity suffices to generate this result. This suggests that, relative to the presence of ambiguity, the standard literature has focused on a knife-edge case. The key contribution of this essay is to provide an alternative foundation for information cascades by interpreting them as a consequence of model uncertainty rather than of the details of information structures. The second essay, "Biased Learning under Ambiguous Information", proposes and characterizes a novel updating rule under model uncertainty. In this essay, an agent receives a sequence of signals, but he is ambiguous about the signal-generating process and perceives a set of feasible models for it. The agent is endowed with some biased states that he wishes to justify.
After receiving a signal, the agent updates his belief according to the model that maximally supports the bias. This biased updating rule can accommodate interesting phenomena that are inconsistent with the Bayesian framework. For instance, the agent can exhibit the "good-news effect"; that is, he processes good news and bad news asymmetrically. This essay provides a complete characterization of limit beliefs under the biased updating rule. Using the characterization, the essay describes several effects of ambiguity on learning. First, ambiguity can lead to incomplete learning and polarization. Second, ambiguity can lead to overconfidence, which can persist even asymptotically. The third essay, "Naïve Social Learning with Heterogeneous Model Perceptions", studies an economy in which individuals are connected through a social network, observe a sequence of signals, and repeatedly communicate beliefs with their neighbours through a naïve rule. Previous research shows that if all individuals understand the data-generating process correctly, then complete learning can be achieved. This essay re-examines the problem under the assumption that some individuals may misinterpret their information. The formal results provide a characterization of limit beliefs. Using these results, I find that instead of achieving the wisdom of the crowds, society can suffer from group irrationality: even for some seemingly innocuous misperceptions, correct learning may not be achieved; moreover, individuals may end up forming a belief that is inconsistent with everyone's information.
dc.subjectModel uncertainty
dc.typedissertation or thesis
thesis.degree.namePh. D., Economics

