dc.contributor.author Panconesi, Alessandro en_US
dc.date.accessioned 2007-04-23T16:30:00Z
dc.date.available 2007-04-23T16:30:00Z
dc.date.issued 1993-06 en_US
dc.identifier.citation http://techreports.library.cornell.edu:8081/Dienst/UI/1.0/Display/cul.cs/TR93-1356 en_US
dc.identifier.uri https://hdl.handle.net/1813/6126
dc.description.abstract The topic of this thesis is the issue of locality in distributed computing. A first set of results in this thesis concerns the $\Delta$-vertex coloring problem. They are all based on the following result. Let $G$ be a graph with maximum degree $\Delta \geq 3$ that is not a complete graph, and suppose that all of $G$ except one vertex $v$ is $\Delta$-colored. Then it is possible to extend this $\Delta$-coloring to all of $G$ by recoloring a path originating from $v$ of length at most $O(\log_{\Delta} n)$. This property allows us to develop several efficient algorithms for $\Delta$-coloring, in particular, but not exclusively, in the distributed and PRAM models of computation. It also yields the well-known theorem of Brooks as a corollary. Another set of results concerns network decomposition, a basic notion in distributed graph algorithms. We improve the bounds for computing a network decomposition distributively and deterministically. Our algorithm computes an $(n^{\epsilon(n)}, n^{\epsilon(n)})$-decomposition in $O(n^{\epsilon(n)})$ time, where $\epsilon(n) = O(1/\sqrt{\log n})$. We also show that the class of graphs $\cal G$ whose maximum degree is $O(n^{\delta(n)})$, where $\delta(n) = O(1/\log \log n)$, is complete for the task of computing a $(\log n, \log n)$-decomposition in time polylogarithmic in $n$. Completeness is to be understood in the following sense: if we have an algorithm $\cal A$ that computes a $(\log n, \log n)$-decomposition in time polylogarithmic in $n$ for graphs in $\cal G$, then we can compute a $(\log n, \log n)$-decomposition in time polylogarithmic in $n$ for all graphs. A last set of results concerns the edge coloring problem. We give a randomized distributed algorithm that computes an edge coloring of a given network $G$ using at most $1.6\Delta + \log^{2+\delta} n$ colors, for any $\delta > 0$. The running time is $O(\log n)$ and the failure probability is at most $\epsilon$, for any fixed $\epsilon > 0$. The algorithm is quite simple but requires an interesting probabilistic analysis. At the core of the analysis is an extension of the Chernoff-Hoeffding bounds, which are fundamental tools for estimating the tail probabilities of sums of Bernoulli-like trials. en_US
dc.format.extent 7467530 bytes
dc.format.extent 1394941 bytes
dc.format.mimetype application/pdf
dc.format.mimetype application/postscript
dc.language.iso en_US en_US
dc.publisher Cornell University en_US
dc.subject computer science en_US
dc.subject technical report en_US
dc.title Locality in Distributed Computing en_US
dc.type technical report en_US