Cornell University Library

eCommons


Issues in Ordered Multicast Performance: A Simulation Study

File(s)
95-1531.ps (11.43 MB)
95-1531.pdf (2.6 MB)
Permanent Link(s)
https://hdl.handle.net/1813/7188
Collections
Computer Science Technical Reports
Author
Kalantar, Michael
Abstract

Process groups are an increasingly popular tool for programming distributed systems. Such groups consist of collections of processes that work together to provide reliability, fault tolerance, task distribution, or some other abstraction. A combination of group actions, reliable failure detection, and ordered message delivery is used to provide group functionality. Causal ordering, one form of message ordering, is fundamental in process group systems. Its cost is therefore an important determinant of overall performance.

To gain more insight into the behavior of causal order protocols, two simulations of process group systems were developed. The first, a detailed simulation of all system processes, groups, and interconnection networks, was used to study small systems. Using this model, a number of parameters were identified as having the greatest impact on ordered multicast performance. These parameters were used as indices into a set of precomputed probability distribution tables used by the second simulation. The second simulation focused only on key processes (packet originators and processes in multiple overlapping groups) and used the precomputed probability tables to reduce simulation overhead.

Simulations using this model showed that the delays imposed by the ordering protocols cause packets to become "convoyed" together. The consequences are larger delays and greater system and network burstiness. We speculate that this tendency for systems to become more bursty, or less uniform, is a general principle: any system that delays actions on a "microscopic" level exhibits burstiness at a "macroscopic" level, and the more often such delays occur, the greater the degree of burstiness. Such a principle would imply that systems with this characteristic are limited in scale by the size of the largest burst that can be handled. It also suggests using protocols that minimize the number and length of delays and that optimize the handling of bursts. In the context of process group systems, burstiness can be limited by minimizing the number of groups through which a packet is filtered and by protocol designs that minimize the number of delays used to control out-of-order message arrival.
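The causal ordering the abstract describes is commonly implemented with vector clocks: a receiver holds back ("delays") any message whose timestamp shows a causal predecessor has not yet been delivered. The sketch below is an illustrative assumption about that mechanism, not the report's simulator; all names (`Process`, `deliverable`, `pending`) are hypothetical.

```python
# Minimal sketch of causal message ordering via vector clocks.
# The buffering in receive() is exactly the kind of per-message delay
# the report argues leads to "convoying" at larger scales.

class Process:
    def __init__(self, pid, n):
        self.pid = pid
        self.clock = [0] * n   # vector clock: one entry per process
        self.pending = []      # messages held back to preserve causal order

    def send(self):
        # Timestamp a new multicast with a snapshot of the sender's clock.
        self.clock[self.pid] += 1
        return (self.pid, list(self.clock))

    def deliverable(self, sender, stamp):
        # Deliverable iff this is the next message from `sender` and we have
        # already delivered everything the sender had seen from other processes.
        return (stamp[sender] == self.clock[sender] + 1 and
                all(stamp[k] <= self.clock[k]
                    for k in range(len(stamp)) if k != sender))

    def receive(self, msg):
        # Buffer the message, then repeatedly flush any newly deliverable ones.
        self.pending.append(msg)
        delivered = []
        progress = True
        while progress:
            progress = False
            for m in list(self.pending):
                sender, stamp = m
                if self.deliverable(sender, stamp):
                    self.clock[sender] = stamp[sender]
                    self.pending.remove(m)
                    delivered.append(m)
                    progress = True
        return delivered
```

For example, if a process receives a sender's second message before its first, the second is buffered (a delay) and both are delivered together, in order, once the first arrives; it is this bunching of delayed deliveries that, repeated at scale, produces the bursty behavior the report studies.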

Date Issued
1995-08
Publisher
Cornell University
Keywords
computer science • technical report
Previously Published as
http://techreports.library.cornell.edu:8081/Dienst/UI/1.0/Display/cul.cs/TR95-1531
Type
technical report
