Author: Khungurn, Pramook
Dates: 2017-07-07; 2017-07-07; 2017-05-30
Identifier: Khungurn_cornellgrad_0058F_10233
Identifier: http://dissertations.umi.com/cornellgrad:10233
bibid: 9948819
Handle: https://hdl.handle.net/1813/51596

Abstract:

Fibers are ubiquitous in our visual world. Hair is an important part of our appearance, and we wear and use clothes made from various types of fibers. Computer graphics models that can accurately simulate light scattering in these materials have applications in the production of media such as movies and video games. They can also significantly lower the cost of textile design by allowing designers to design fabrics entirely in silico, render realistic images for feedback, and then fabricate final products that look exactly as designed. Recent research has shown that renderings of the highest quality---those showing realistic reflectance and complex geometric details---can be obtained by modeling individual fibers. However, this approach raises many open problems. For hair, the effect of fiber cross sections on light scattering behavior has never been carefully studied. For textiles, several competing approaches for fiber-level modeling exist, and it has been unclear which is the best. Furthermore, there has been no general procedure for matching textile models to real fabric appearance, and rendering such models requires considerable computing resources.

In this dissertation, we present solutions to these open problems. Our first contribution is a light scattering model for human hair fibers that more accurately takes into account how light interacts with their elliptical cross sections. The model has been validated by a novel measurement device that captures light scattered from a single hair fiber much more efficiently than previous methods.

Our second contribution is a general and powerful optimization framework for estimating parameters of a large class of appearance models from observations of real materials, which greatly simplifies development and testing of such models. We used the framework to systematically identify best practices in fabric modeling, including how to represent geometry and which light scattering model to use for textile fibers.

Our third contribution is a fast, precomputation-based, GPU-friendly algorithm for approximately rendering fiber-level textile models under environment illumination. Using only a single commodity GPU, our implementation can render high-resolution, supersampled images of micron-resolution fabrics with multiple scattering in tens of seconds, compared to tens of core-hours required by CPU-based algorithms. Our algorithm makes fiber-level models practical for applications that require quick feedback, such as interactive textile design. We expect these contributions will make realistic physically-based virtual prototyping a reality.

Language: en-US
License: Attribution-ShareAlike 4.0 International
Subjects: Computer science; Computer Graphics; appearance modeling; light transport; physical simulation; reflectance; rendering
Title: Modeling and Rendering Appearance of Hair and Textile Fibers
Type: dissertation or thesis
DOI: https://doi.org/10.7298/X4V40SBD