Author: Wyllie, James C.
Date issued: 1979-08
Date deposited: 2007-04-23
Identifier: http://techreports.library.cornell.edu:8081/Dienst/UI/1.0/Display/cul.cs/TR79-387
Identifier: https://hdl.handle.net/1813/7502

Abstract: Recent advances in microelectronics have brought the construction of computers containing thousands (or more) of processing elements closer to feasibility. This thesis addresses the question of effective utilization of such processing power. We study the computational complexity of synchronous parallel computations using a model of computation based on random access machines operating in parallel and sharing a common memory, the P-RAM. Two main areas within the field of parallel computational complexity are investigated. First, we explore the power of the P-RAM model viewed as an abstract computing device. Later, we study techniques for developing efficient algorithms for parallel computers. We give concise characterizations of the power of deterministic and nondeterministic P-RAMs in terms of the more widely known space and time complexity classes for multi-tape Turing machines. Roughly speaking, time-bounded deterministic P-RAMs are equivalent in power to (can accept the same sets as) space-bounded Turing machines, where the time and space bounds differ by at most a polynomial; in the context of comparing models of computation, we consider such polynomial differences in resources to be insignificant. Adding nondeterminism to the time-bounded P-RAM changes its power to that of a nondeterministic Turing machine with an exponentially higher running time.

The later sections of the thesis examine algorithm design techniques for parallel computers. We first develop efficient procedures for some common operations on linked lists and arrays. Given this background, we introduce three techniques that permit the design of parallel algorithms that are efficient in terms of both their time and processor requirements. We illustrate the use of these techniques by presenting time- and processor-efficient algorithms for three problems, in each case improving upon the best previously known parallel resource bounds. We show how to compute minimum string edit distances using the technique of pairwise function composition. We describe an algorithm for the off-line MIN problem that organizes its computation in the form of a complete binary tree. Finally, we present an algorithm for undirected graph connectivity that relies on redundancy in its representation of the input graph.

Format: application/pdf (5133249 bytes); application/postscript (1584610 bytes)
Language: en-US
Subject: computer science; technical report
Title: The Complexity of Parallel Computations
Type: technical report
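
Illustrative note (not taken from the report itself): among the common linked-list operations the abstract refers to, list ranking by synchronous pointer jumping is the standard P-RAM example. The Python below is a minimal sequential simulation of that idea under stated assumptions; the function name list_rank, the successor-array encoding, and the self-looping tail convention are choices made for this sketch, not details from the thesis.

    import math

    def list_rank(next_ptr):
        """Distance from each node to the tail of a linked list, computed by
        simulating synchronous pointer jumping (one P-RAM processor per node,
        O(log n) rounds).  next_ptr[i] is node i's successor; the tail node
        points to itself."""
        n = len(next_ptr)
        # The tail has rank 0; every other node starts one link from somewhere.
        rank = [0 if next_ptr[i] == i else 1 for i in range(n)]
        nxt = list(next_ptr)
        rounds = math.ceil(math.log2(n)) if n > 1 else 0
        for _ in range(rounds):
            # Both new arrays read only the previous round's values, mimicking
            # one synchronous P-RAM step in which all processors act at once.
            rank, nxt = ([rank[i] + rank[nxt[i]] for i in range(n)],
                         [nxt[nxt[i]] for i in range(n)])
        return rank

    # Example: the list 0 -> 2 -> 1 -> 3 (tail), encoded by successor pointers.
    # list_rank([2, 3, 1, 3]) returns [3, 1, 2, 0].

Each round doubles the number of links a pointer spans, so roughly log2(n) synchronous rounds suffice; on an actual P-RAM the inner updates would run in parallel, one processor per list node.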