3D CONTENT CREATION FROM IMAGES
Abstract
We immerse ourselves in a wonderful world that is three-dimensional. However, when we visually capture the world with ubiquitous cameras, dimensionality is reduced from 3D to 2D. The loss of this one dimension restricts the perspectives from which a viewer can parse and explore the captured world. It is therefore of particular interest to recover the lost dimension from 2D images in order to facilitate applications such as 3D cartography, virtual/augmented reality, and robotic navigation. The problem of reconstructing 3D scenes from 2D images has traditionally been approached with photogrammetry pipelines, which typically consist of modularized components that are non-differentiable in nature. Modern advances have been driven largely by building alternative differentiable neural rendering pipelines that exploit the power of machine learning. In this dissertation, we first present a method that seamlessly retargets photogrammetry pipelines developed for ground-level photos to satellite imagery. We then show a neural rendering approach that faithfully reconstructs large-scale object-centric scenes in 3D. The 3D content created with these approaches, however, is fundamentally limited in that it bakes in capture-time lighting and disallows 3D composition under new virtual illumination. We tackle this limitation by considering two capture lighting setups: one with natural environmental lighting, the other with a collocated flashlight. Finally, we show an attempt at creating high-quality artistic 3D content in addition to photorealistic content.
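
To make the photogrammetry/neural-rendering contrast concrete, the sketch below is a minimal, illustrative example of what "differentiable rendering" means; it is not code from the dissertation, and names such as TinyField and render_ray are hypothetical. A small PyTorch radiance field is volume-rendered along a single ray, and a photometric loss is backpropagated through the entire rendering step to fit an observed pixel color.

    import torch
    import torch.nn as nn

    # Toy radiance field: maps a 3D point to (RGB color, volume density).
    class TinyField(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3, hidden), nn.ReLU(),
                nn.Linear(hidden, 4),
            )

        def forward(self, x):
            out = self.net(x)
            rgb = torch.sigmoid(out[..., :3])    # colors constrained to [0, 1]
            sigma = torch.relu(out[..., 3])      # non-negative density
            return rgb, sigma

    def render_ray(field, origin, direction, n_samples=32, near=0.0, far=1.0):
        """Differentiable volume rendering of one ray via alpha compositing."""
        t = torch.linspace(near, far, n_samples)
        pts = origin + t[:, None] * direction    # sample points along the ray
        rgb, sigma = field(pts)
        delta = (far - near) / n_samples         # uniform step size
        alpha = 1.0 - torch.exp(-sigma * delta)  # per-sample opacity
        trans = torch.cumprod(
            torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
        )                                        # accumulated transmittance
        weights = alpha * trans
        return (weights[:, None] * rgb).sum(dim=0)  # composited pixel color

    # Fit the field so that one ray renders to a target pixel color.
    field = TinyField()
    opt = torch.optim.Adam(field.parameters(), lr=1e-3)
    origin = torch.zeros(3)
    direction = torch.tensor([0.0, 0.0, 1.0])
    target = torch.tensor([0.8, 0.2, 0.1])       # hypothetical observed pixel

    for step in range(200):
        opt.zero_grad()
        pred = render_ray(field, origin, direction)
        loss = ((pred - target) ** 2).mean()     # photometric loss
        loss.backward()                          # gradients flow through rendering
        opt.step()

In a classical photogrammetry pipeline, by contrast, stages such as feature matching and triangulation do not pass gradients in this way, which is what the abstract means by modularized, non-differentiable components.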
Committee Member
Belongie, Serge