DEEP UNSUPERVISED MODELS LEVERAGING LEARNING AND REASONING
dc.contributor.author | Bai, Yiwei | |
dc.contributor.chair | Gomes, Carla | en_US |
dc.contributor.committeeMember | Selman, Bart | en_US |
dc.contributor.committeeMember | Kuleshov, Volodymyr | en_US |
dc.date.accessioned | 2024-01-31T21:18:30Z | |
dc.date.available | 2024-01-31T21:18:30Z | |
dc.date.issued | 2023-05 | |
dc.description.abstract | Deep learning has achieved tremendous success in many fields when well-labeled, large-scale datasets are available. However, for many challenging tasks, data are scarce or labels are difficult to acquire. We propose different approaches to tackle such tasks and consider two settings: 1) Deep unsupervised models for pattern demixing and for the Traveling Salesman Problem (TSP). 1.1) We propose a general unsupervised framework, deep reasoning networks (DRNets), for demixing signals and inferring crystal structure. DRNets seamlessly integrate reasoning about prior scientific knowledge into neural network optimization through an interpretable latent space (a brief illustrative sketch of this loss composition follows this record). As a result, DRNets require only modest amounts of unlabeled data. DRNets reach super-human performance on crystal-structure phase mapping, a core, long-standing challenge in materials science, solving the previously unsolved Bi–Cu–V oxide phase diagram and aiding the discovery of solar-fuels materials. DRNets are a general framework, which we illustrate by also demixing two completely overlapping 9x9 handwritten Sudokus. We further boost the performance of DRNets with curriculum learning with restarts (CLR-DRNets) on a visual Sudoku task and a visual Sudoku demixing task, extending our earlier work on DRNets for unsupervised pattern demixing. We also consider combinatorial optimization problems, where well-labeled datasets are hard to acquire. 1.2) We use zero training overhead portfolios (ZTop) to boost the performance of deep models for combinatorial optimization by leveraging the randomness of the optimization process. 1.3) We combine unsupervised learning with local search to solve large-scale Traveling Salesman Problems. Beyond deep unsupervised models, we also study an unsupervised optimization model for the dam portfolio selection problem. 2) We develop methods that harness AI to reduce the adverse impacts of hydropower dam proliferation in the Amazon River basin on people and nature (e.g., river fragmentation, which affects fish migration; transportation; sediment production; increased greenhouse gas emissions; and displacement of indigenous populations). More specifically, we propose methods to efficiently approximate high-dimensional Pareto frontiers for tree-structured networks using expansion and compression techniques. | en_US |
dc.identifier.doi | https://doi.org/10.7298/3y80-bn53 | |
dc.identifier.other | Bai_cornellgrad_0058F_13530 | |
dc.identifier.other | http://dissertations.umi.com/cornellgrad:13530 | |
dc.identifier.uri | https://hdl.handle.net/1813/113985 | |
dc.language.iso | en | |
dc.rights | Attribution-NonCommercial-ShareAlike 4.0 International | * |
dc.rights.uri | https://creativecommons.org/licenses/by-nc-sa/4.0/ | * |
dc.subject | Computational Sustainability | en_US |
dc.subject | Deep Unsupervised Learning | en_US |
dc.subject | Prior Knowledge | en_US |
dc.subject | Reasoning | en_US |
dc.title | DEEP UNSUPERVISED MODELS LEVERAGING LEARNING AND REASONING | en_US |
dc.type | dissertation or thesis | en_US |
dcterms.license | https://hdl.handle.net/1813/59810.2 | |
thesis.degree.discipline | Computer Science | |
thesis.degree.grantor | Cornell University | |
thesis.degree.level | Doctor of Philosophy | |
thesis.degree.name | Ph. D., Computer Science |
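The abstract above describes DRNets only at a high level: an unsupervised model whose interpretable latent space is constrained by prior scientific knowledge, trained by minimizing a reconstruction objective together with differentiable constraint penalties rather than supervised labels. The sketch below illustrates that general loss composition only; it is not the thesis implementation, and the ToyDemixer architecture, the non-negativity penalty standing in for domain rules, and all hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only (not the DRNets code from the thesis): unsupervised
# training that combines a data-reconstruction loss with a differentiable
# penalty encoding assumed prior knowledge about the latent components.
import torch
import torch.nn as nn

class ToyDemixer(nn.Module):
    """Encodes a mixed signal into k latent components whose sum should reconstruct it."""
    def __init__(self, dim: int = 64, k: int = 2):
        super().__init__()
        self.k, self.dim = k, dim
        self.encoder = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, k * dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Latent space: k per-component signals, shape (batch, k, dim).
        return self.encoder(x).view(-1, self.k, self.dim)

def prior_knowledge_penalty(components: torch.Tensor) -> torch.Tensor:
    # Hypothetical constraint standing in for domain rules (e.g., physical
    # signals must be non-negative): penalize negative entries.
    return torch.relu(-components).mean()

model = ToyDemixer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
mixed_batch = torch.rand(32, 64)  # stand-in for unlabeled mixed measurements

# Unsupervised loop: no labels, only reconstruction + prior-knowledge terms.
for step in range(100):
    components = model(mixed_batch)
    reconstruction = components.sum(dim=1)  # components must add up to the input
    loss = nn.functional.mse_loss(reconstruction, mixed_batch) \
           + 0.1 * prior_knowledge_penalty(components)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The relative weight on the penalty term (0.1 here) is also an assumption; such a weight trades off how strictly the prior knowledge is enforced against reconstruction fidelity.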
Files
Original bundle
- Name: Bai_cornellgrad_0058F_13530.pdf
- Size: 12.44 MB
- Format: Adobe Portable Document Format