Optimization and Parallelization of a Commodity Trade Model for the SP1, Using Parallel Programming Tools
Bergmark, Donna; Pottle, Marcia
We compare two different approaches to parallelization of Fortran programs. The first approach is to optimize the serial code so that it runs as fast as possible on a single processor and then parallelize it; the second is to parallelize the code first and then optimize the parallel version. In this paper a variety of parallel programming tools is used to obtain an optimal, parallel version of an economic policy modelling application for the IBM SP1. We apply a new technique called Data Access Normalization; we use an extended ParaScope as our parallel programming environment, FORGE 90 as our parallelizer, and KAP as our optimizer. We make a number of observations about the effectiveness of these tools. Both strategies obtain a working parallel program, but use different tools to get there. In this case, both KAP and Data Access Normalization lead to the same critical transformation: inverting four of the twelve loop nests in the original program. The next most important optimization is parallel I/O, one of the few transformations that had to be done by hand. Speedups are obtained on the SP1 (using MPLp communication over the High Speed Switch).
theory center; multiprocessors; program transformations; parallel programming tools; data access normalization; ParaScope; Lambda Toolkit; Fortran; HPF; FORGE; SP1; SPMD; KAP; parallel I/O; PED; LAMBDA; data parallel; loop distribution; loop fusion; trace analyzers
Showing items related by title, author, creator and subject.
Fischer, Charles N. (Cornell University, 1975-04) Non-canonical generalizations of several bottom-up parsing methods, including Simple Precedence, LR(k), SLR(k), and LALR(k), are considered. It is seen that these methods can readily be made to generate many concurrent ...
Li, Wei (Cornell University, 1994-11)A common feature of many scalable parallel machines is non-uniform memory access (NUMA) --- data access to local memory is much faster than to non-local memories. In addition, when a number of remote accesses must be made, ...
Gilbert, John R.; Hafsteinsson, Hjalmtyr (Cornell University, 1987-12) We describe a parallel algorithm for finding the Cholesky factorization of a sparse symmetric positive definite matrix A. The algorithm runs in $O(h \log n)$ time with $m$ processors, where $h$ is the height of A's ...