
dc.contributor.author     Marwil, Earl S.    en_US
dc.date.accessioned       2007-04-23T18:21:12Z
dc.date.available         2007-04-23T18:21:12Z
dc.date.issued            1978-02    en_US
dc.identifier.citation    http://techreports.library.cornell.edu:8081/Dienst/UI/1.0/Display/cul.cs/TR78-335    en_US
dc.identifier.uri         https://hdl.handle.net/1813/7455
dc.description.abstract   The basic problem considered here is to solve sparse systems of nonlinear equations. A system is considered sparse when fewer than ten percent of the Jacobian's entries are nonzero. Algorithms are presented and their convergence properties are analyzed. Schubert's method for solving sparse nonlinear equations is a modification of Broyden's method, a well-known quasi-Newton method. The modification preserves, in the sequence of approximate Jacobians, the zero-nonzero structure defined by the sparse Jacobian. The algorithm is shown to satisfy the Bounded Deterioration Theorem of Broyden, Dennis, and Moré, yielding a simple local convergence result. A more detailed study of the error bound in the Frobenius norm shows that the method is superlinearly convergent. Schubert's method satisfies a minimum-norm property, and a Kantorovich analysis is also presented. A symmetric Schubert update is derived using an iterative projection technique of Dennis and Powell. The update is expressed in closed form at the expense of an additional sparse linear system to be solved at each iteration of the algorithm. These sparse linear systems all have the same structure, so they can be handled with the preprocessing performed when the first such system is solved. Again the Frobenius norm is used to estimate the error in the approximate Jacobian. The algorithm is locally and linearly convergent. The update is applicable to symmetric nonlinear systems, particularly those arising from unconstrained minimization problems with a continuous second derivative. Another method of solving sparse nonlinear equations, using a matrix factorization, is presented. Some theory of sparse linear equations is needed to maintain sparsity in the triangular factors of a matrix. An approximate Jacobian is factored, and one of the triangular factors is updated with Schubert's updating formula. After a finite number of iterations, a new approximate Jacobian is needed. Algorithms are suggested with local and superlinear convergence properties. The convergence depends on the local continuity of the matrix factorization. Finally, another application of Schubert's method is proposed for problems in which the Jacobian of the nonlinear system can be written as the direct sum of two transformations. One of these is assumed to be easily computed, and the other is assumed to be sparse. An algorithm is sketched, and local and superlinear convergence properties are conjectured.    en_US
dc.format.extent          2249925 bytes
dc.format.extent          854295 bytes
dc.format.mimetype        application/pdf
dc.format.mimetype        application/postscript
dc.language.iso           en_US    en_US
dc.publisher              Cornell University    en_US
dc.subject                computer science    en_US
dc.subject                technical report    en_US
dc.title                  Exploiting Sparsity in Newton-Like Methods    en_US
dc.type                   technical report    en_US
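
The abstract above centers on Schubert's sparse modification of Broyden's quasi-Newton update, in which each row of the approximate Jacobian is corrected only over its structurally nonzero columns. The following is a minimal illustrative sketch of that row-wise update; the function name, the boolean `pattern` mask, and the dense NumPy storage are assumptions made for clarity here, not the report's own implementation.

```python
import numpy as np

def schubert_update(B, s, y, pattern):
    """One Schubert (sparse Broyden) update of an approximate Jacobian B.

    B       : (n, n) current approximate Jacobian
    s       : step taken, x_new - x_old
    y       : residual difference, F(x_new) - F(x_old)
    pattern : (n, n) boolean mask of the Jacobian's zero-nonzero structure

    Each row is corrected only in its structurally nonzero columns, so the
    sparsity pattern of B is preserved, unlike in the dense Broyden update.
    """
    B_new = B.copy()
    for i in range(B.shape[0]):
        s_i = np.where(pattern[i], s, 0.0)   # step restricted to row i's pattern
        denom = s_i @ s_i
        if denom > 0.0:                      # rows with no overlap are left unchanged
            B_new[i] += ((y[i] - B[i] @ s) / denom) * s_i
    return B_new
```

In a quasi-Newton iteration, this update would take the place of recomputing the Jacobian: solve B_k d = -F(x_k), set x_{k+1} = x_k + d, and update B_k from the step s = d and the residual difference y = F(x_{k+1}) - F(x_k).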

