Title: Efficient Message Passing Interface (MPI) for Parallel Computing on Clusters of Workstations
Authors: Bruck, Jehoshua; Dolev, Danny; Ho, Ching-Tien; Rosu, Marcel-Catalin; Strong, Ray
Date issued: 1995-02
Date deposited: 2007-04-23
Identifier: http://techreports.library.cornell.edu:8081/Dienst/UI/1.0/Display/cul.cs/TR95-1474
Handle: https://hdl.handle.net/1813/6083
Subject: computer science; technical report
Type: technical report
Language: en-US

Abstract:
Parallel computing on clusters of workstations and personal computers has great potential, since it leverages existing hardware and software. Parallel programming environments offer the user a convenient way to express parallel computation and communication; recently, the Message Passing Interface (MPI) was proposed as an industry standard for writing "portable" message-passing parallel programs. The communication part of MPI consists of the usual point-to-point communication as well as collective communication. However, existing implementations of programming environments for clusters are built on top of a point-to-point communication layer (send and receive) over local area networks (LANs) and, as a result, suffer from poor performance in the collective communication part. In this paper, we present an efficient design and implementation of the collective communication part of MPI, optimized for clusters of workstations. Our system consists of two main components: the MPI-CCL layer, which provides the collective communication functionality of MPI, and a User-Level Reliable Transport Protocol (URTP), which interfaces with the LAN data-link layer and exploits the fact that the LAN is a broadcast medium. Our system is integrated with the operating system via an efficient kernel extension mechanism that we developed; the kernel extension significantly improves performance because it can handle part of the communication overhead without involving user space. We have implemented our system on a collection of IBM RS/6000 workstations connected via a 10 Mbit Ethernet LAN. Our performance measurements are taken from real scientific applications that run in parallel by means of MPI. The hypothesis behind our design is that the system's performance will be bounded by interactions between the kernel and user space rather than by the bandwidth delivered by the LAN data-link layer. Our results indicate that our MPI broadcast (on top of Ethernet) is about twice as fast as a recently published software implementation of broadcast on top of ATM.
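
For readers unfamiliar with MPI collective communication, the minimal sketch below shows the standard MPI_Bcast call that the broadcast measurements above concern. It uses only the standard MPI-1 API, not the MPI-CCL/URTP implementation described in the report; the file and program names are illustrative. Compile with an MPI wrapper (e.g., mpicc bcast_demo.c -o bcast_demo) and run under a launcher such as mpirun -np 4 ./bcast_demo.

    /* bcast_demo.c: one collective MPI_Bcast call replaces N-1
     * point-to-point sends. An implementation on a broadcast medium
     * such as Ethernet can map this to a single physical broadcast,
     * which is the optimization the report exploits. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, data = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0)
            data = 42;  /* only the root fills the buffer */

        /* Every process in the communicator calls MPI_Bcast; on return,
         * all of them hold the root's value. */
        MPI_Bcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD);

        printf("rank %d received %d\n", rank, data);

        MPI_Finalize();
        return 0;
    }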