Spatial computing of sound fields in virtual environments
Over the past decades, computer graphics researchers have put enormous effort into rendering realistic visual scenes by simulating light transport. Toward the high-level goal of creating realistic immersive experiences in virtual worlds, physically plausible sound is an equally critical piece and remains underexplored. Human perception of sound is spatially varying and scene dependent: for example, whether a point sound source is occluded with respect to the listener changes the perceived sound. Simulating sound propagation is the key to reproducing such effects, but it differs from the light transport simulation used in visual rendering because diffraction effects are far more important for sound. The grand goal of this thesis is to use physical simulation to provide auditory cues that respect the influence of the virtual environment. We address this problem by precomputing an expensive simulation of sound wave propagation through a voxelized 3D scene and encoding perceptually important acoustic parameters per voxel from the simulation data, which enables efficient real-time sound rendering at run-time. All of the proposed methods are immediately practical, with potential applications in AR/VR and gaming.

Our first contribution is a framework that simulates ambient sound propagation in a preprocessing stage and reconstructs ambient sound efficiently at render time. By modelling the spatio-temporally incoherent ambient sound source appropriately in the numerical simulation, a streaming encoder compactly captures loudness and directivity per listener position as spherical harmonics coefficients. The encoded coefficients are then coupled with head-related transfer function (HRTF) data to render physically plausible binaural ambient sound at run-time.

We then observe that in most ambient sound scenarios, the perceived sound texture varies in space. For example, near a water stream, crisp water-bubble sounds are audible with transient details, whereas in the far field the sound becomes closer to randomized colored noise. A more compelling example is a babbling crowd, in which individual speech is recognizable next to a person but not far away from the crowd. The intuition is that perceived ambient sound is a random collection of similar micro sound events, and that variation in the events' temporal density and amplitude distribution produces different sound textures. Our second contribution is a simple ambient sound texture representation in terms of an event density function (EDF). By modelling micro sound events directly in the precomputed simulation phase, the EDF is encoded compactly. At run-time, sound is rendered by real-time granular synthesis, yielding a spatially varying sound texture that enhances the experience of ambient sound in a virtual environment.
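To illustrate the first contribution, the sketch below shows how a handful of spherical harmonics coefficients can compactly encode a direction-dependent loudness field at a listener position. This is a minimal, hypothetical example: the function names, the order-1 truncation, and the crude per-ear decode are all assumptions for illustration, not the thesis's actual encoder, which couples the coefficients with measured HRTF data.

```python
import math

# Hypothetical sketch (not the thesis's actual encoder/decoder):
# an order-1 real spherical-harmonic loudness field, coefficients
# ordered [Y_0^0, Y_1^-1, Y_1^0, Y_1^1], decoded toward a direction.

def eval_sh1(x, y, z):
    """Real spherical harmonics up to order 1 for a unit direction."""
    k0 = 0.5 * math.sqrt(1.0 / math.pi)
    k1 = math.sqrt(3.0 / (4.0 * math.pi))
    return [k0, k1 * y, k1 * z, k1 * x]

def directional_loudness(coeffs, direction):
    """Reconstruct loudness toward `direction` from 4 SH coefficients."""
    basis = eval_sh1(*direction)
    return sum(c * b for c, b in zip(coeffs, basis))

def binaural_gains(coeffs):
    """Crude binaural decode: sample the field toward each ear axis."""
    left = directional_loudness(coeffs, (-1.0, 0.0, 0.0))
    right = directional_loudness(coeffs, (1.0, 0.0, 0.0))
    return max(left, 0.0), max(right, 0.0)
```

In practice the decoded directional field would be filtered through HRTFs rather than collapsed to two scalar gains; the sketch only shows why a few coefficients per voxel suffice to capture loudness and directivity.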
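The second contribution can likewise be sketched in a few lines. Under the stated assumptions (a toy inverse-square event density, Hann-windowed noise bursts standing in for micro sound events — the thesis instead encodes the EDF from simulation data), granular synthesis driven by an event density function reproduces the described effect: dense overlapping grains near the source, sparse distinct events far away.

```python
import math
import random

SR = 16000  # sample rate in Hz (illustrative choice)

def edf(distance):
    """Toy event density function: events per second vs. listener distance."""
    return 200.0 / (1.0 + distance * distance)

def grain(n, rng):
    """A short Hann-windowed noise burst standing in for one micro event."""
    return [math.sin(math.pi * i / n) ** 2 * rng.uniform(-1.0, 1.0)
            for i in range(n)]

def render(distance, seconds, grain_len=64, seed=0):
    """Poisson-sample micro events at the EDF rate and mix their grains."""
    rng = random.Random(seed)
    out = [0.0] * int(seconds * SR)
    rate = edf(distance)
    t = 0.0
    while True:
        t += rng.expovariate(rate)  # inter-event gap of a Poisson process
        start = int(t * SR)
        if start >= len(out):
            break
        for i, s in enumerate(grain(grain_len, rng)):
            if start + i < len(out):
                out[start + i] += s
    return out
```

Calling `render(0.5, 1.0)` yields a dense, noise-like texture, while `render(20.0, 1.0)` yields only occasional isolated grains, mirroring the water-stream and crowd examples above.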
Computer engineering; Simulation; Computational acoustics; Signal processing; Computer science; Computer graphics; Sound
Marschner, Stephen Robert
Molnar, Alyosha Christopher; Damle, Anil Sanjiv
Electrical and Computer Engineering
Ph.D., Electrical and Computer Engineering
Doctor of Philosophy
dissertation or thesis