Cornell University Library

eCommons


Accurate and Efficient Representation Learning on Large-Scale Graphs

File(s)
Deng_cornellgrad_0058F_14185.pdf (5.57 MB)
Permanent Link(s)
https://doi.org/10.7298/p5jx-e308
https://hdl.handle.net/1813/115909
Collections
Cornell Theses and Dissertations
Author
Deng, Chenhui
Abstract

Recent years have witnessed a surge of interest in representation learning on graph-structured data, which are pervasive in diverse real-world domains spanning fundamental physical interactions, biological systems, social networks, and logic circuits. However, there is a strong tension between the accuracy and efficiency of existing graph representation learning (GRL) models. First, many real-world graphs (e.g., social networks and logic circuits) are notably large, comprising millions or even billions of nodes and edges, which poses a substantial challenge to the efficiency of model training and deployment. In addition, graphs across domains exhibit complex and diverse structures, and they are exposed to adversarial attacks that perturb graph topology by adding or removing nodes and edges. These properties necessitate sophisticated model architectures to produce high-quality graph representations, which often come at nontrivial computational and memory cost. In this dissertation, we present research solutions covering scalable, hardware-friendly, expressive, and robust GRL models for accurate and efficient representation learning in large-scale graph applications.

First, we propose GraphZoom, a multi-level approach for fast and accurate GRL. GraphZoom leverages lightweight algorithms that iteratively reduce the graph size for GRL and progressively refine the resulting representations, with theoretical guarantees grounded in spectral graph theory. This yields a 40.8x training speedup over prior art with comparable or even better accuracy. Second, we develop HOGA, which adopts a hop-wise graph attention scheme for scalable and generalizable GRL on large-scale circuits. HOGA not only outperforms prior GRL models on challenging circuit problems, but is also amenable to distributed training because it mitigates the communication overhead caused by graph dependencies; this makes HOGA applicable to industrial-scale circuit applications. Third, we introduce Polynormer, an expressive graph transformer model with linear complexity. We theoretically demonstrate the superior expressivity of Polynormer through the lens of polynomial functions. Our extensive experiments indicate that Polynormer not only achieves state-of-the-art results across a wide variety of mainstream graph datasets, but also outperforms a popular baseline model by 31.8% on an industrial-scale dataset provided by Google for predicting AI model runtime on TPUs. Finally, we propose GARNET, an efficient graph topology learning approach for accurate GRL under graph adversarial attacks. We theoretically show that GARNET can effectively recover critical structures from an adversarial graph with nearly linear complexity, leading to accuracy improvements of up to 10.23% over previous robust GRL models on a broad range of adversarial graphs.
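The multi-level coarsen, embed, and refine pipeline that the abstract attributes to GraphZoom can be illustrated with a minimal sketch. This is not the dissertation's implementation: the heavy-edge matching, the stand-in coarse embedding, and the neighbor-averaging refinement below are simplified placeholders chosen only to show the overall flow.

```python
# Hedged sketch of a multi-level GRL pipeline (coarsen -> embed -> refine).
# All three steps are simplified stand-ins, not GraphZoom's actual algorithms.
import numpy as np

def coarsen(adj):
    """Greedy heavy-edge matching: merge matched node pairs into supernodes.
    Returns the coarse adjacency matrix and a fine->coarse node mapping."""
    n = adj.shape[0]
    mapping = -np.ones(n, dtype=int)
    nxt = 0
    for u in range(n):
        if mapping[u] != -1:
            continue
        mapping[u] = nxt
        # match u with its heaviest still-unmatched neighbor, if any
        nbrs = [v for v in np.nonzero(adj[u])[0] if mapping[v] == -1 and v != u]
        if nbrs:
            mapping[max(nbrs, key=lambda v: adj[u, v])] = nxt
        nxt += 1
    coarse = np.zeros((nxt, nxt))
    for u in range(n):
        for v in range(n):
            if adj[u, v]:
                coarse[mapping[u], mapping[v]] += adj[u, v]
    np.fill_diagonal(coarse, 0)
    return coarse, mapping

def refine(coarse_emb, mapping, adj, steps=2):
    """Lift coarse embeddings back to the fine graph, then smooth them
    by repeated neighbor averaging (a crude refinement placeholder)."""
    emb = coarse_emb[mapping]                      # copy supernode embedding
    deg = adj.sum(axis=1, keepdims=True) + 1e-9
    for _ in range(steps):
        emb = 0.5 * emb + 0.5 * (adj @ emb) / deg  # propagate over edges
    return emb

# Toy graph: two triangles joined by a single edge.
adj = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[u, v] = adj[v, u] = 1.0

coarse, mapping = coarsen(adj)
rng = np.random.default_rng(0)
# Stand-in for running any GRL model on the much smaller coarse graph.
coarse_emb = rng.standard_normal((coarse.shape[0], 4))
emb = refine(coarse_emb, mapping, adj)
print(emb.shape)  # one 4-dimensional embedding per original node
```

Because the expensive embedding step runs only on the coarsened graph, the cost of the pipeline is dominated by the cheap coarsening and refinement passes, which is the source of the speedup the abstract reports.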

Description
202 pages
Date Issued
2024-05
Keywords
Electronic Design Automation
Graph Representation Learning
Spectral Graph Theory
Committee Chair
Zhang, Zhiru
Committee Member
Delimitrou, Christina
De Sa, Christopher
Degree Discipline
Electrical and Computer Engineering
Degree Name
Ph.D., Electrical and Computer Engineering
Degree Level
Doctor of Philosophy
Rights
Attribution 4.0 International
Rights URI
https://creativecommons.org/licenses/by/4.0/
Type
dissertation or thesis
Link(s) to Catalog Record
https://newcatalog.library.cornell.edu/catalog/16575613
