eCommons


DEEP UNSUPERVISED MODELS LEVERAGING LEARNING AND REASONING

dc.contributor.author: Bai, Yiwei
dc.contributor.chair: Gomes, Carla
dc.contributor.committeeMember: Selman, Bart
dc.contributor.committeeMember: Kuleshov, Volodymyr
dc.date.accessioned: 2024-01-31T21:18:30Z
dc.date.available: 2024-01-31T21:18:30Z
dc.date.issued: 2023-05
dc.description.abstract: Deep learning has achieved tremendous success in many fields when well-labeled, large-scale datasets are available. However, for many challenging tasks, data are scarce or labels are difficult to acquire. We propose several approaches to tackle such tasks, in two settings: 1) Deep unsupervised models for pattern demixing and for the Traveling Salesman Problem (TSP). 1.1) We propose a general unsupervised framework, deep reasoning networks (DRNets), for demixing and inferring crystal structure. DRNets seamlessly integrate reasoning about prior scientific knowledge into neural-network optimization through an interpretable latent space; as a result, DRNets require only modest amounts of (unlabeled) data. DRNets reach super-human performance on crystal-structure phase mapping, a core, long-standing challenge in materials science, solving the previously unsolved Bi–Cu–V oxide phase diagram and aiding in the discovery of solar-fuels materials. DRNets are a general framework, which we illustrate by also demixing two completely overlapping handwritten 9x9 Sudokus. Extending this early work on DRNets for unsupervised pattern demixing, we boost the performance of the state-of-the-art DRNets framework with curriculum learning with restarts (CLR-DRNets) on a visual Sudoku task and a visual Sudoku demixing task. Furthermore, we consider combinatorial optimization problems, where well-labeled datasets are hard to acquire. 1.2) We use zero-training-overhead portfolios (ZTop) to boost the performance of deep models for combinatorial optimization problems by leveraging the randomness of the optimization process. 1.3) We combine unsupervised learning with local search to solve large-scale Traveling Salesman Problems. Beyond deep unsupervised models, we also study an unsupervised optimization model for the dam portfolio selection problem.
2) We have developed methods to harness AI to reduce the adverse impacts of hydropower dam proliferation in the Amazon River basin on people and nature (e.g., river fragmentation, which affects fish migration; transportation; sediment production; increased greenhouse-gas emissions; displacement of indigenous populations). More specifically, we propose methods to efficiently approximate high-dimensional Pareto frontiers for tree-structured networks using expansion and compression methods.
dc.identifier.doi: https://doi.org/10.7298/3y80-bn53
dc.identifier.other: Bai_cornellgrad_0058F_13530
dc.identifier.other: http://dissertations.umi.com/cornellgrad:13530
dc.identifier.uri: https://hdl.handle.net/1813/113985
dc.language.iso: en
dc.rights: Attribution-NonCommercial-ShareAlike 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-sa/4.0/
dc.subject: Computational Sustainability
dc.subject: Deep Unsupervised Learning
dc.subject: Prior Knowledge
dc.subject: Reasoning
dc.title: DEEP UNSUPERVISED MODELS LEVERAGING LEARNING AND REASONING
dc.type: dissertation or thesis
dcterms.license: https://hdl.handle.net/1813/59810.2
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Cornell University
thesis.degree.level: Doctor of Philosophy
thesis.degree.name: Ph.D., Computer Science

Files

Original bundle
Name: Bai_cornellgrad_0058F_13530.pdf
Size: 12.44 MB
Format: Adobe Portable Document Format