Visual Equivalence: A New Standard of Image Fidelity for Computer Graphics

Determining the visual fidelity of an image is a fundamental problem in computer graphics. When is an image good enough, i.e., when does it convey a convincing representation of a scene? Most graphics algorithms either aim to compute a physically accurate solution matching the real world, or they leave judgments of fidelity entirely up to the end user. The former is often computationally intractable, and the latter is ad hoc since it cannot be generalized or predicted.

In this dissertation, we chart a new course between these two approaches. We propose visual equivalence, a new standard of image fidelity that focuses on what is visually important to the observer: the appearance of the scene, consisting of impressions of shapes, materials, and lighting. Under visual equivalence, an image with noticeable, pixel-by-pixel differences from a perfect reference can still be a high-fidelity representation of the same scene, provided it conveys the same impression of scene appearance. This appearance-preserving standard is, to our knowledge, the first approach to image fidelity that permits judgments of this kind.
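A toy sketch can illustrate the distinction this standard draws (this is not the dissertation's metric; the `visually_equivalent` predicate below is a hypothetical stand-in for an observer's appearance judgment, here approximated by coarse luminance statistics). Two "images" that differ noticeably pixel by pixel can still be judged equivalent at the appearance level:

```python
def rmse(a, b):
    """Pixel-by-pixel root-mean-square error between two grayscale rows."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def visually_equivalent(a, b, tol=0.05):
    """Hypothetical stand-in for an appearance-level judgment: compares
    coarse statistics (mean luminance and contrast) rather than pixels."""
    mean = lambda img: sum(img) / len(img)
    contrast = lambda img: max(img) - min(img)
    return (abs(mean(a) - mean(b)) < tol
            and abs(contrast(a) - contrast(b)) < tol)

reference = [0.2, 0.4, 0.6, 0.8, 0.6, 0.4]
approx    = [0.4, 0.2, 0.8, 0.6, 0.4, 0.6]  # same values, locally shuffled

print(rmse(reference, approx))               # noticeably nonzero pixel error
print(visually_equivalent(reference, approx))  # yet equivalent under the toy predicate
```

Under a pixel-wise metric the approximation fails; under the (toy) appearance-level predicate it passes, which is the kind of judgment visual equivalence is designed to permit.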

We present an end-to-end psychophysical and algorithmic investigation of visual equivalence and its impact on scene modeling and rendering in computer graphics. For natural illumination, we measure the degree to which representations of lighting can be approximated or manipulated without affecting object appearance, and demonstrate how the resulting metrics can motivate new algorithms that improve rendering speed and compression. For complex aggregate geometry, we investigate how different combinations of object shapes and colors affect appearance, and derive thresholds that can be used to reduce scene complexity. Finally, for texture, we describe how texture synthesis can be characterized in terms of visual equivalence, and present an efficient synthesis algorithm for a range of constrained synthesis applications, including the synthesis of visually equivalent detail to enhance low-resolution images.

This research takes some important first steps into a large new space in perceptually based rendering and modeling, which can address the challenges of future complex scenes in computer graphics.

Keywords: computer graphics; perception; visual equivalence; rendering; computer science; appearance
Type: dissertation or thesis