dc.contributor.author     Schnabel, Robert B.     en_US
dc.date.accessioned       2007-04-23T16:40:50Z
dc.date.available         2007-04-23T16:40:50Z
dc.date.issued            1976-08     en_US
dc.identifier.citation    http://techreports.library.cornell.edu:8081/Dienst/UI/1.0/Display/cul.cs/TR76-288     en_US
dc.identifier.uri         https://hdl.handle.net/1813/6285
dc.description.abstract   Davidon's new quasi-Newton optimization algorithm selects the new inverse Hessian approximation H at each step to be the "optimally conditioned" member of a certain one-parameter class of rank two updates to the previous inverse Hessian approximation. In this paper, we show that virtually the same goals of conditioning can be achieved while restricting H to the convex class of updates. We therefore suggest that Davidon's algorithm, and other algorithms using optimal conditioning, restrict the choice of H to members of the convex class. (A sketch of the update class in question follows the record below.)     en_US
dc.format.extent          1249743 bytes
dc.format.extent          551832 bytes
dc.format.mimetype        application/pdf
dc.format.mimetype        application/postscript
dc.language.iso           en_US     en_US
dc.publisher              Cornell University     en_US
dc.subject                computer science     en_US
dc.subject                technical report     en_US
dc.title                  Optimal Conditioning in the Convex Class of Rank Two Updates     en_US
dc.type                   technical report     en_US
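
The record above does not reproduce the report's formulas. As a rough sketch only, assuming the one-parameter class of rank two updates mentioned in the abstract is the standard Broyden family for the inverse Hessian approximation (the symbols s, y, v, and \phi below are our notation, not necessarily the report's): with s the step taken and y the corresponding change in gradient, the updated approximation \bar{H} can be written as

\[
\bar{H} \;=\; H \;-\; \frac{H y y^{T} H}{y^{T} H y} \;+\; \frac{s s^{T}}{y^{T} s} \;+\; \phi\,\bigl(y^{T} H y\bigr)\, v v^{T},
\qquad
v \;=\; \frac{s}{y^{T} s} \;-\; \frac{H y}{y^{T} H y},
\]

where \phi = 0 gives the DFP update and \phi = 1 gives the BFGS update. Under this reading, the convex class referred to in the abstract corresponds to restricting \phi to [0, 1], whereas an "optimally conditioned" choice of the parameter need not fall in that interval.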

