
## A Learning Theoretic Approach To Algorithmic Game Theory

dc.contributor.author | Piliouras, Georgios | en_US |

dc.date.accessioned | 2010-10-20T20:06:52Z | |

dc.date.available | 2015-10-20T06:57:23Z | |

dc.date.issued | 2010-10-20 | |

dc.identifier.other | bibid: 7061526 | |

dc.identifier.uri | https://hdl.handle.net/1813/17719 | |

dc.description.abstract | Algorithmic game theory attempts to mathematically capture behavior in strategic situations, in which an individual's success depends on the choices of others. Research in this area tends to focus on one of the following challenges: 1) Price of Anarchy: Characterize the inefficiency of equilibria vs. the global optimum. 2) Computational Complexity of Equilibria: Characterize the hardness of computing equilibria. 3) Algorithmic Mechanism Design: Design games with desirable properties that are efficiently implementable. Many strategic interactions are by nature recurrent (e.g. financial markets), with the agents participating in them repeatedly. These agents learn over time to adapt to their environment, as defined by the game and the dynamic behavior of the other agents [96]. We show that incorporating the assumption of adaptive agents can lead to exciting new insights into long-standing questions in all areas of algorithmic game theory. A Learning Theoretic Refinement of the Price of Anarchy: It is well understood that competitive environments can result in losses to social welfare due to the lack of a central coordinator who could enforce the globally optimal solution. Defined as the ratio of the cost of the worst (Nash) equilibrium to the cost of the optimum, the price of anarchy (and its variants) is a commonly used metric for these inefficiencies. However, such worst-case analysis can be misleading, as worst-case equilibria may never arise in practice. Can natural learning algorithms beat the price of anarchy? We show that in the class of atomic congestion games, natural learning algorithms can indeed learn to navigate away from such worst-case equilibria. In some cases the implied performance bounds are exponentially better than the previously known worst-case guarantees. Furthermore, these positive performance results remain robust even when we restrict the availability of accurate, up-to-date information on which agents can base their decisions. 
Learning Inspired Equilibria and Computation: The plausibility of the Nash equilibrium as a universal solution concept has come under attack due to recent negative complexity results (e.g. for normal form games with a constant number of players [27, 24]). To make matters worse, there exist games of constant overall size for which Nash equilibria are unstable under most reasonable dynamics [88]. Here we study learning-inspired solution concepts such as strong CURB (Closed Under Rational Behavior) and CUBR (Closed Under weakly Better Replies) sets. A strong CURB (CUBR) set is a Cartesian product of pure strategy sets such that each player's component contains all best (weakly better) responses to any joint probability distribution of its opponents over the set. A strong CURB (CUBR) set is minimal if it contains no proper subset that is also a strong CURB (CUBR) set. Such minimal sets exist in all finite games and are asymptotically stable under a wide range of natural learning dynamics. Furthermore, we prove that all minimal strong CURB (as well as CUBR) sets of any normal form game with a constant number of players can be computed in polynomial time. Mechanism Evolution: The goal of mechanism design is to design games that are uniquely solvable by reasonably self-interested players and that lead to socially optimal outcomes. Such guarantees are of critical importance for economic interactions that are key revenue producers, such as ad auctions. However, in many cases of social and economic interaction, centralized design is impractical and sometimes even undesirable (e.g. the Internet). Nevertheless, such mechanisms seem to work fairly well in practice. How is that possible? Most mechanisms that we participate in are not the product of "intelligent design". Instead, mechanisms evolve. All games (mechanisms) that we participate in are in essence social contracts. 
The rules of these interactions change over time as a result of the actions of the same individuals who participate in them. These changes are not random. On the contrary, the agents involved try to bring about changes to the structure of the game so as to improve their experience from participating in it. We formalize these intuitions and analyze their implications in the context of oligopolistic markets. Specifically, we allow players one extra dimension in the pursuit of their strategic goals: the possibility of forming coalitions with other agents. We focus on the class of linear and symmetric Cournot games and study the nature of stable coalition structures. We show that even the worst stable coalition structure can lead to a significant increase in market prices and profits for the firms. | en_US |
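The price-of-anarchy notion from the abstract can be made concrete with a toy atomic congestion game. The two-edge instance, cost functions, and player count below are hypothetical choices for illustration, not taken from the thesis: we enumerate all pure strategy profiles, find the pure Nash equilibria by checking unilateral deviations, and take the ratio of the worst equilibrium's social cost to the optimum.

```python
from itertools import product

# Hypothetical atomic congestion game: 2 players each pick one of 2
# parallel edges; an edge's cost depends on how many players use it.
edge_cost = {
    0: lambda load: load,  # edge 0: cost equals its load
    1: lambda load: 2,     # edge 1: constant cost 2
}

players = range(2)
edges = list(edge_cost)

def player_cost(profile, i):
    # Cost paid by player i: its edge's cost at that edge's current load.
    load = profile.count(profile[i])
    return edge_cost[profile[i]](load)

def social_cost(profile):
    return sum(player_cost(profile, i) for i in players)

def is_nash(profile):
    # Pure Nash: no player can strictly lower its cost by switching edges.
    for i in players:
        for e in edges:
            dev = profile[:i] + (e,) + profile[i + 1:]
            if player_cost(dev, i) < player_cost(profile, i):
                return False
    return True

profiles = list(product(edges, repeat=2))
equilibria = [p for p in profiles if is_nash(p)]
opt = min(social_cost(p) for p in profiles)
worst_ne = max(social_cost(p) for p in equilibria)
print(f"price of anarchy = {worst_ne}/{opt} = {worst_ne / opt:.3f}")
# → price of anarchy = 4/3 = 1.333
```

Here the profile where both players share edge 0 is a (weak) Nash equilibrium of social cost 4, while the optimum splits the players for cost 3, giving a price of anarchy of 4/3; the thesis's learning-theoretic refinement asks which of these equilibria natural learning dynamics actually reach.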
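For the Cournot result, a back-of-the-envelope sketch shows why coalition formation raises prices. The demand and cost parameters below are hypothetical; the thesis treats the general linear symmetric case. In a symmetric linear Cournot game with inverse demand P(Q) = a - Q and constant marginal cost c, a coalition of firms behaves like a single firm, so partitioning n firms into k coalitions yields the k-player Cournot equilibrium price (a + kc)/(k + 1), which rises as k falls.

```python
def cournot_price(a, c, k):
    # Symmetric linear Cournot with k independent players (firms or
    # coalitions): inverse demand P(Q) = a - Q, marginal cost c.
    # Each player's Nash output is (a - c) / (k + 1), so total output is
    # Q = k (a - c) / (k + 1) and the market price is (a + k c) / (k + 1).
    return (a + k * c) / (k + 1)

a, c, n = 10.0, 1.0, 9      # hypothetical demand intercept, cost, firm count
baseline = cournot_price(a, c, n)   # all 9 firms acting independently
merged = cournot_price(a, c, 3)     # firms grouped into 3 coalitions
print(baseline, merged)
# → 1.9 3.25
```

The price jump from 1.9 to 3.25 under this toy parameterization illustrates the qualitative claim: even without full monopolization, stable coalition structures can substantially raise market prices and firm profits.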

dc.language.iso | en_US | en_US |

dc.title | A Learning Theoretic Approach To Algorithmic Game Theory | en_US |

dc.type | dissertation or thesis | en_US |