Exploiting Sparsity in Newton-Like Methods
Marwil, Earl S.
The basic problem considered here is to solve sparse systems of nonlinear equations. A system is considered sparse when the Jacobian has fewer than ten percent nonzero entries. Algorithms are presented and their convergence properties are analyzed.

Schubert's method for solving sparse nonlinear equations is a modification of Broyden's method, a well-known quasi-Newton method. The modification preserves, in the sequence of approximate Jacobians, the zero-nonzero structure defined by the sparse Jacobian. The algorithm is shown to satisfy the Bounded Deterioration Theorem of Broyden, Dennis, and Moré, which yields a simple local convergence result. A more detailed study of the error bound in the Frobenius norm shows that the method is superlinearly convergent. Schubert's method satisfies a minimum-norm property, and a Kantorovich analysis is also presented.

A symmetric Schubert update is derived using an iterative projection technique of Dennis and Powell. The update is expressed in closed form at the expense of an additional sparse linear system to be solved at each iteration of the algorithm. These sparse linear systems all have the same structure, so the pre-processing performed in solving the first one can be reused for the rest. Again the Frobenius norm is used to estimate the error in the approximate Jacobian. The algorithm is locally and linearly convergent. The update is applicable to symmetric nonlinear systems, particularly those arising from unconstrained minimization problems with a continuous second derivative.

Another method of solving sparse nonlinear equations, based on a matrix factorization, is presented. Some theory of sparse linear equations is needed to maintain sparsity in the triangular factors of a matrix. An approximate Jacobian is factored, and one of the triangular factors is updated with Schubert's updating formula. After a finite number of iterations, a new approximate Jacobian is needed.
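The core idea of Schubert's modification, as described above, is to apply Broyden's rank-one secant update row by row, restricting each row's correction to that row's known sparsity pattern. The following is a minimal sketch of that update, assuming a dense NumPy array for the approximate Jacobian and a boolean mask for the sparsity pattern; the function name and representation are illustrative, not taken from the thesis:

```python
import numpy as np

def schubert_update(B, s, y, pattern):
    """Sparsity-preserving variant of Broyden's secant update.

    B       : current approximate Jacobian (dense array, for clarity)
    s       : step x_{k+1} - x_k
    y       : F(x_{k+1}) - F(x_k)
    pattern : boolean array; True where the true Jacobian may be nonzero

    Each row i is corrected only in its nonzero positions, so the
    zero-nonzero structure of the Jacobian is preserved, while the
    secant condition B_new @ s == y still holds row by row.
    """
    B_new = B.copy()
    r = y - B @ s                      # residual of the secant condition
    for i in range(B.shape[0]):
        si = np.where(pattern[i], s, 0.0)   # step restricted to row i's pattern
        denom = si @ si
        if denom > 0.0:
            B_new[i] += (r[i] / denom) * si
    return B_new
```

Because each restricted step `si` satisfies `si @ s == si @ si`, the updated row reproduces `y[i]` exactly whenever its pattern meets the step, while entries outside the pattern stay identically zero.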
Algorithms with local and superlinear convergence properties are suggested; their convergence depends on the local continuity of the matrix factorization. Finally, another application of Schubert's method is proposed for problems in which the Jacobian of the nonlinear system can be written as the direct sum of two transformations, one assumed to be easily computed and the other assumed to be sparse. An algorithm is sketched, and local and superlinear convergence properties are conjectured.
computer science; technical report