Author: Wang, Ke Alexander
Date accessioned: 2020-08-10
Date available: 2020-08-10
Date issued: 2020-05
Identifier: Wang_cornell_0058O_10901
ProQuest: http://dissertations.umi.com/cornell:10901
Handle: https://hdl.handle.net/1813/70285
Extent: 85 pages

Abstract: Intelligent systems that interact with the physical world must model the underlying dynamics accurately in order to make informed decisions and actions. This requires dynamics models that are scalable enough to learn from large amounts of data, robust enough to be used with noisy or scarce data, and flexible enough to capture the true dynamics of arbitrary systems. Gaussian processes and neural networks each have desirable properties for this task, but neither meets all of these criteria: Gaussian processes do not scale well computationally to large datasets, and current neural networks do not generalize well to complex physical systems. In this thesis, we present two methods that address these shortcomings. First, we present a practical method that scales exact Gaussian process inference to over a million data points using GPU parallelism, a hundred times more than previous methods; our method also outperforms other scalable Gaussian processes while maintaining similar or faster training times. Second, we present a method that eases the burden of learning physical systems for neural networks by representing constraints explicitly and using coordinate systems that simplify the functions to be learned. The resulting models are a hundred times more accurate than competing baselines while being a hundred times more data-efficient.

Language: en
Rights: Attribution 4.0 International (CC BY 4.0)
Keywords: exact inference; Gaussian process; Hamiltonian; Lagrangian; neural networks; physics priors
Title: Large Scale Exact Gaussian Processes Inference and Euclidean Constrained Neural Networks with Physics Priors
Type: dissertation or thesis
DOI: https://doi.org/10.7298/gf31-q335
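The first contribution described in the abstract scales exact Gaussian process inference by replacing direct matrix decompositions with iterative, GPU-parallelizable matrix-vector products. As a minimal sketch of the underlying idea only — solving (K + σ²I)α = y via conjugate gradients using nothing but matrix-vector products — here is a small NumPy example; the RBF kernel, toy data, and hyperparameters are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    # Squared-exponential (RBF) kernel matrix between two sets of points
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def conjugate_gradients(A_mv, b, tol=1e-8, max_iter=1000):
    # Solve A x = b for symmetric positive-definite A, using only
    # the matrix-vector product A_mv(v) -- the operation that
    # parallelizes well on GPUs.
    x = np.zeros_like(b)
    r = b - A_mv(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A_mv(p)
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy 1-D regression problem (hypothetical data)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

noise = 0.1**2
K = rbf_kernel(X, X)
# Exact GP solve of (K + noise * I) alpha = y, matrix-free
alpha = conjugate_gradients(lambda v: K @ v + noise * v, y)

X_test = np.linspace(-3, 3, 50)[:, None]
mean = rbf_kernel(X_test, X) @ alpha  # exact posterior mean
```

Because conjugate gradients touches the kernel matrix only through products K @ v, the same computation can be batched and distributed across GPUs, which is what makes million-point exact inference feasible.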
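The second contribution argues that representing constraints explicitly in Cartesian (Euclidean) coordinates simplifies what a learned dynamics model must capture. A minimal sketch of that intuition, assuming a planar pendulum: in Cartesian coordinates the unconstrained force is just constant gravity (an easy function to learn), and the rod is the explicit constraint |q| = 1, here enforced by projection. The integrator and all parameters below are illustrative assumptions, not the thesis's method:

```python
import numpy as np

g, dt, steps = 9.8, 1e-3, 2000

# Reference trajectory: pendulum in the generalized coordinate theta,
# where the force term is the more complicated -g*sin(theta)
theta, omega = 1.0, 0.0
for _ in range(steps):
    omega += -g * np.sin(theta) * dt  # semi-implicit Euler
    theta += omega * dt

# Same system in Cartesian coordinates with the constraint |q| = 1
q = np.array([np.sin(1.0), -np.cos(1.0)])  # matching initial state
v = np.zeros(2)
for _ in range(steps):
    v += np.array([0.0, -g]) * dt  # unconstrained force: just gravity
    q += v * dt
    q /= np.linalg.norm(q)         # project position onto the circle
    v -= (v @ q) * q               # remove the radial velocity component

# Recover the angle from the Cartesian state for comparison
theta_cart = np.arctan2(q[0], -q[1])
```

The two trajectories agree, but in the Cartesian formulation the dynamics a model would need to learn reduce to a constant force plus a known constraint, rather than a nonlinear function of the coordinates.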