Computing and Information Science Technical Reports


This is a collection of technical reports from Cornell's Computing and Information Science (CIS) Department, covering 2003 to the present. These reports are part of the NCSTRL collection of Computer Science Technical Reports.

For reports prior to 2003, see the Computer Science Technical Reports Collection.


Recent Submissions

Now showing 1 - 10 of 307
  • Item
    Abstraction-Safe Effect Handlers via Tunneling: Technical Report
    Zhang, Yizhou (2018-11-09)
    Algebraic effect handlers offer a unified approach to expressing control-flow transfer idioms such as exceptions, iteration, and async/await. Unfortunately, previous attempts to make these handlers type-safe have failed to support the fundamental principle of modular reasoning for higher-order abstractions. We demonstrate that abstraction-safe algebraic effect handlers are possible by giving them a new semantics. The key insight is that code should only handle effects it is aware of. In our approach, the type system guarantees all effects are handled, but it is impossible for higher-order, effect-polymorphic code to accidentally handle effects raised by functions passed in; such effects tunnel through higher-order code that is polymorphic to them, reaching handlers in the calling context. By contrast, the possibility of accidental handling threatens previous designs for algebraic effect handlers. We prove that our design is not only type-safe, but also abstraction-safe. Using a logical-relations model that we prove sound with respect to contextual equivalence, we derive previously unattainable program equivalence results. Our mechanism offers a viable approach for future language designs aiming for effect handlers with strong abstraction guarantees.
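    A minimal sketch of the "accidental handling" hazard this item rules out, using Python exceptions as a stand-in for algebraic effects (illustrative only; the paper's semantics and type system are not shown):

```python
class Yield(Exception):
    """Stands in for an algebraic effect raised by client code."""
    def __init__(self, value):
        self.value = value

def with_default(f, default):
    """A higher-order combinator that intends to handle only its OWN
    Yield effect. Because `except` matches dynamically, it also
    intercepts a Yield raised by the callback f -- exactly the
    accidental handling that abstraction-safe designs forbid."""
    try:
        return f()
    except Yield:
        return default

def client():
    # The client raises Yield expecting its own enclosing handler to
    # see it, but with_default intercepts ("steals") it instead.
    raise Yield(42)

print(with_default(client, default=0))  # prints 0: the effect was stolen
```

    Under tunneling semantics, the client's effect would instead pass through `with_default` untouched and reach the handler in scope at the client's definition site.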
  • Item
    A Reactive Approach for Use-Based Privacy
    Birrell, Eleanor; Schneider, Fred B. (2017-11-24)
    Use-based privacy views privacy in terms of authorized uses, a philosophy well-suited to the data collection and data analysis applications that arise in networked information systems. This work takes a first step toward investigating the technical feasibility of use-based privacy. We identify requirements for a feasible, expressive, use-based privacy regime. A user study with 100 Amazon Mechanical Turk workers gives evidence for the validity of these requirements. The approach is instantiated through avenance policies, whose expressiveness is assessed by formulating the HIPAA and Facebook privacy policies.
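    A toy sketch of the use-based, reactive idea described above (the class name and policy vocabulary here are hypothetical, not the paper's formalism):

```python
class AvenancePolicy:
    """A policy is a set of authorized uses that can react as uses occur."""
    def __init__(self, authorized_uses):
        self.authorized = set(authorized_uses)

    def permits(self, use):
        return use in self.authorized

    def react(self, use):
        """Reactive step: an observed use may change future
        authorizations, e.g., once data is shared with a third party
        it may no longer be used for advertising."""
        if use == "share-with-third-party":
            self.authorized.discard("advertising")

policy = AvenancePolicy({"billing", "advertising", "share-with-third-party"})
if policy.permits("share-with-third-party"):
    policy.react("share-with-third-party")
print(policy.permits("advertising"))  # False: the policy reacted to the use
```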
  • Item
    Videos of demo of self-driving robot with map verification
    Liu, Jed; Corbett-Davies, Joseph; Ferraiuolo, Andrew; Campbell, Mark; Myers, Andrew C.; Suh, G. Edward (2017-10-06)
    Three videos show a self-driving Segway system following an oval track. Delivering an incorrect map to the system causes it to drive off the track. The third video shows the system running with software that performs map verification and falls back to a safe planner when verification fails. The software is implemented in Jif, which ensures that untrusted maps are not used without verification.
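    A toy control-loop sketch of the fallback behavior shown in the third video (the verification check and planner functions are hypothetical placeholders):

```python
def verify_map(m):
    # Placeholder check; the real system verifies the map, e.g.,
    # against trusted inputs, before any planner may use it.
    return m.get("verified", False)

def plan_with_map(m):
    return "trajectory planned using the map"

def safe_plan():
    return "conservative trajectory from local sensing only"

def plan(m):
    # Untrusted maps are never used without verification.
    return plan_with_map(m) if verify_map(m) else safe_plan()

print(plan({"verified": True}))
print(plan({"verified": False}))  # falls back to the safe planner
```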
  • Item
    Pushing Bytes: Cloud-Scale Data Replication with RDMC
    Behrens, Jonathan; Jha, Sagar; Tremel, Edward; Birman, Ken (2017-06)
    Data center infrastructures frequently replicate objects to create backups or to copy executables and input files to compute nodes. This task occurs under time pressure: data is at risk of loss until replicated for fault-tolerance, and in the case of parallel processing systems like Spark, useful computation cannot start until the nodes all have a copy of the executable images. Cloud elasticity creates a similar need to rapidly copy executables and their inputs. To address these needs, we introduce RDMC: a fast, reliable data replication protocol that implements multicast as a pattern of RDMA unicast operations, which maximizes concurrency while minimizing unnecessary transfers. RDMC can be used in any setting that has RDMA or a software RDMA emulation. Our focus is on the use of replication as an element of the data center infrastructure. We evaluate overheads for the hardware-supported case using microbenchmarks and heavy-load experiments, and also describe preliminary experiments using a technique that offloads the entire data transfer pattern into the NICs, further reducing latency while freeing server resources for other tasks.
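    One classic way to express a multicast as a pattern of unicast sends, as the item above describes, is a binomial tree; a simplified sketch follows (RDMC's actual pattern is a pipelined variant, so this is illustrative only):

```python
def binomial_schedule(n):
    """Return a list of rounds; each round is a list of (sender, receiver)
    pairs with disjoint senders, so all transfers within a round can run
    concurrently. All n nodes hold the data after ceil(log2 n) rounds."""
    rounds = []
    have = 1  # nodes 0..have-1 already hold the data
    while have < n:
        rounds.append([(s, s + have) for s in range(have) if s + have < n])
        have *= 2
    return rounds

for r, sends in enumerate(binomial_schedule(8)):
    print(f"round {r}: {sends}")
# round 0: [(0, 1)]
# round 1: [(0, 2), (1, 3)]
# round 2: [(0, 4), (1, 5), (2, 6), (3, 7)]
```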
  • Item
    Shoal: A Lossless Network for High-density and Disaggregated Racks
    Shrivastav, Vishal; Valadarsky, Asaf; Ballani, Hitesh; Costa, Paolo; Lee, Ki Suh; Wang, Han; Agarwal, Rachit; Weatherspoon, Hakim (2017-04-30)
    Rack-scale computers comprise hundreds of micro-servers connected to internal storage and memory through an internal network. However, their density and disaggregated nature pose a problem for existing packet-switched networks: they are too costly, draw too much power, and their latency is too high for converged traffic (comprising IP, storage, and memory traffic). We propose Shoal, a rack-scale network that tightly integrates a circuit-switched physical fabric with the nodes’ network stack to efficiently support converged traffic. Shoal’s fabric comprises circuit switches with no buffers, no arbitration, and no packet inspection mechanism. Micro-servers transmit according to a static schedule such that there is no in-network contention. Shoal’s congestion control leverages the physical fabric to achieve fairness, losslessness, and bounded worst-case throughput and queuing. We use an FPGA-based prototype and simulations to show that Shoal’s mechanisms are practical and achieve low latency within the rack at low cost and power.
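    A sketch of a static, contention-free schedule of the kind the item above relies on (illustrative; not Shoal's exact schedule): in time slot t, node i is circuit-connected to node (i + t) mod n, so each slot is a permutation and no two senders ever target one receiver.

```python
def static_schedule(n):
    """All n*(n-1) directed pairs are covered in n-1 slots; within each
    slot every node sends exactly once and receives exactly once."""
    return [[(i, (i + t) % n) for i in range(n)] for t in range(1, n)]

for t, slot in enumerate(static_schedule(4), start=1):
    receivers = [dst for _, dst in slot]
    assert len(set(receivers)) == len(receivers)  # contention-free slot
    print(f"slot {t}: {slot}")
```

    With no in-network contention, the bufferless fabric never needs to drop or queue packets inside the switches, which is what makes losslessness achievable.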
  • Item
    Full-Processor Timing Channel Protection with Applications to Secure Hardware Compartments
    Ferraiuolo, Andrew; Wang, Yao; Xu, Rui; Zhang, Danfeng; Myers, Andrew; Suh, Edward (2017-04-25)
    This paper presents timing compartments, a hardware architecture abstraction that eliminates microarchitectural timing channels between groups of processes or VMs running on shared hardware. When coupled with conventional access controls, timing compartments provide strong isolation comparable to running software entities on separate machines. Timing compartments use microarchitecture mechanisms to enforce timing-sensitive noninterference, which we prove formally through information flow analysis of an RTL implementation. In the process of systematically removing timing interference, we identify and remove new sources of timing channels, including cache coherence mechanisms and module interfaces, and introduce new performance optimizations. We also demonstrate how timing compartments may be extended to support a hardware-only TCB which ensures security even when the system is managed by an untrusted OS or hypervisor. The overheads of timing compartments are low; compared to an insecure baseline, executing two timing compartments reduces system throughput by less than 7% on average and by less than 2% for compute-bound workloads.
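    A toy software model of one ingredient of this approach, a cache statically partitioned by compartment ID so that one compartment's hit/miss timing cannot depend on another's accesses (illustrative only; the paper works at the RTL level):

```python
class PartitionedCache:
    def __init__(self, total_sets, compartments):
        # Each compartment gets a fixed, disjoint slice of the sets.
        self.sets_per_comp = total_sets // compartments
        self.store = {}  # (compartment, set_index) -> tag

    def access(self, comp, addr):
        # Index only within this compartment's static partition.
        set_index = addr % self.sets_per_comp
        key, tag = (comp, set_index), addr // self.sets_per_comp
        hit = self.store.get(key) == tag
        self.store[key] = tag
        return "hit" if hit else "miss"

cache = PartitionedCache(total_sets=64, compartments=2)
cache.access(0, 0x1000)
# Compartment 1 touching the same address cannot evict compartment 0's line:
print(cache.access(1, 0x1000))  # miss, confined to its own partition
print(cache.access(0, 0x1000))  # still a hit for compartment 0
```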
  • Item
    Unrestricted Stone Duality for Markov Processes
    Furber, Robert; Kozen, Dexter; Larsen, Kim; Mardare, Radu; Panangaden, Prakash (2017-02-23)
    Stone duality relates logic, in the form of Boolean algebra, to spaces. Stone-type dualities abound in computer science and have been of great use in understanding the relationship between computational models and the languages used to reason about them. Recent work on probabilistic processes has established a Stone-type duality for a restricted class of Markov processes. The dual category was a new notion, Aumann algebras: Boolean algebras equipped with a countable family of modalities indexed by rational probabilities. In this article we consider an alternative definition of Aumann algebra that leads to a dual adjunction for Markov processes that is a duality for many measurable spaces occurring in practice. This extends a duality for measurable spaces due to Sikorski. In particular, we do not require that the probabilistic modalities preserve a distinguished base of clopen sets, nor that morphisms of Markov processes do so. The extra generality allows us to give a perspicuous definition of event bisimulation on Aumann algebras.
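    As background for the item above, a hedged summary of classical Stone duality and of the probabilistic modalities Aumann algebras add (standard material from the earlier line of work, stated informally; the notation is ours, not the paper's):

```latex
% Classical Stone duality: Boolean algebras are dually equivalent to
% Stone spaces (compact, Hausdorff, totally disconnected spaces):
\[
  \mathbf{BA} \;\simeq\; \mathbf{Stone}^{\mathrm{op}}
\]
% An Aumann algebra adds to a Boolean algebra a countable family of
% modalities $L_r$, indexed by rationals $r \in [0,1]$, with
% $L_r\varphi$ read ``$\varphi$ holds with probability at least $r$'':
\[
  x \models L_r\,\varphi
  \quad\Longleftrightarrow\quad
  \tau(x)\bigl(\llbracket\varphi\rrbracket\bigr) \;\ge\; r
\]
% where $\tau(x)$ denotes the transition probability of the Markov
% process at state $x$ (notation assumed here for illustration).
```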
  • Item
    Election Verifiability: Cryptographic Definitions and an Analysis of Helios and JCJ
    Smyth, Ben; Frink, Steven; Clarkson, Michael R. (2017-02-13)
    Definitions of election verifiability in the computational model of cryptography are proposed. The definitions formalize notions of voters verifying their own votes, auditors verifying the tally of votes, and auditors verifying that only eligible voters vote. The Helios (Adida et al., 2009) and JCJ (Juels et al., 2010) election schemes are analyzed using these definitions. Helios 4.0 satisfies the definitions, but Helios 2.0 does not because of previously known attacks. JCJ does not satisfy the definitions because of a trust assumption it makes, but it does satisfy a weakened definition. Two previous definitions of verifiability (Juels et al., 2010; Cortier et al., 2014) are shown to permit election schemes vulnerable to attacks, whereas the new definitions prohibit those schemes.
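    A drastically simplified sketch of the three checks being formalized above, against a public bulletin board (hypothetical helpers; the paper's definitions are computational and cryptographic, with real proofs in place of the toy "decryption" here):

```python
from collections import Counter

def voter_verifies(board, my_ballot):
    # Individual verifiability: my ballot appears on the board.
    return my_ballot in board

def auditor_verifies_tally(board, claimed_tally, open_ballot):
    # Universal verifiability: the claimed tally matches the board;
    # open_ballot stands in for verifiable decryption / proof checking.
    return Counter(open_ballot(b) for b in board) == claimed_tally

def auditor_verifies_eligibility(board, eligible, cast_by):
    # Eligibility verifiability: only eligible voters' ballots appear.
    return all(cast_by(b) in eligible for b in board)

board = ["enc(A)#alice", "enc(B)#bob"]
print(voter_verifies(board, "enc(A)#alice"))
print(auditor_verifies_tally(board, Counter({"A": 1, "B": 1}),
                             lambda b: b[4]))          # toy "decryption"
print(auditor_verifies_eligibility(board, {"alice", "bob"},
                                   lambda b: b.split("#")[1]))
```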
  • Item
    Mixing Consistency in Geodistributed Transactions: Technical Report
    Milano, Mae P.; Myers, Andrew C. (2016-10-07)
    Weakly consistent data stores have become popular because they enable highly available, scalable distributed applications. However, some data needs strong consistency. For applications that mix accesses to strongly and weakly consistent data, programmers must currently choose between bad performance and possible data corruption. We instead introduce a safe mixed-consistency programming model in which programmers choose the consistency level on a per-object basis. Further, they can use atomic, isolated transactions to access both strongly consistent (e.g., linearizable) data and weakly consistent (e.g., causally consistent) data within the same transaction. Compile-time checking ensures that mixing consistency levels is safe: the guarantees of each object's consistency level are enforced. Programmers avoid being locked into one consistency level; they can make an application-specific tradeoff between performance and consistency. We have implemented this programming model as part of a new system called MyriaStore. MyriaStore demonstrates that safe mixed consistency can be implemented on top of off-the-shelf data stores with their own native, distinct consistency guarantees. Our performance measurements demonstrate that significant performance improvements can be obtained for geodistributed applications that need strong consistency for some critical operations but that also need the high performance and low latency possible with causally consistent data.
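    A sketch of the core mixed-consistency safety rule described above, with consistency levels tracked as labels: values derived from weakly consistent data may not influence a strongly consistent write, since that would silently weaken the strong object's guarantees. (Hypothetical API; the paper enforces this at compile time rather than with the runtime check shown here.)

```python
STRONG, WEAK = "strong", "weak"
ORDER = {STRONG: 1, WEAK: 0}  # strong data may flow to weak, not vice versa

class Labeled:
    def __init__(self, value, level):
        self.value, self.level = value, level

def combine(a, b):
    # A derived value is as weak as its weakest input.
    lvl = a.level if ORDER[a.level] <= ORDER[b.level] else b.level
    return Labeled(a.value + b.value, lvl)

def write(obj_level, v):
    if ORDER[v.level] < ORDER[obj_level]:
        raise TypeError("weakly consistent data cannot flow into a strong write")
    print(f"wrote {v.value} to a {obj_level} object")

balance = Labeled(100, STRONG)   # e.g., linearizable
hint = Labeled(5, WEAK)          # e.g., causally consistent
write(STRONG, balance)                  # ok
write(WEAK, combine(balance, hint))     # ok: strong may flow to weak
try:
    write(STRONG, combine(balance, hint))
except TypeError as e:
    print("rejected:", e)               # the unsafe mix is refused
```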
  • Item
    Safe Serializable Secure Scheduling: Transactions and the Trade-Off Between Security and Consistency (Technical Report)
    Sheff, Isaac; Magrino, Tom; Liu, Jed; Myers, Andrew C.; van Renesse, Robbert (2016-08-16)
    Modern applications often operate on data in multiple administrative domains. In this federated setting, participants may not fully trust each other. These distributed applications use transactions as a core mechanism for ensuring reliability and consistency with persistent data. However, the coordination mechanisms needed for transactions can both leak confidential information and allow unauthorized influence. By implementing a simple attack, we show these side channels can be exploited. However, our focus is on preventing such attacks. We explore secure scheduling of atomic, serializable transactions in a federated setting. While we prove that no protocol can guarantee security and liveness in all settings, we establish conditions for sets of transactions that can safely complete under secure scheduling. Based on these conditions, we introduce staged commit, a secure scheduling protocol for federated transactions. This protocol avoids insecure information channels by dividing transactions into distinct stages. We implement a compiler that statically checks code to ensure it meets our conditions, and a system that schedules these transactions using the staged commit protocol. Experiments on this implementation demonstrate that realistic federated transactions can be scheduled securely, atomically, and efficiently.
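    A highly simplified sketch of the staged-commit idea described above: one federated transaction is split into a sequence of stages, each committed by an ordinary atomic-commit round, so that no participant observes or influences a stage it is not cleared for. (Illustrative only; in the real protocol a stage commits only when the paper's conditions guarantee the remaining stages can complete, which is what preserves atomicity. This toy version omits that check.)

```python
def staged_commit(stages, prepare, commit, abort):
    """stages: lists of operations, ordered so information may flow only
    forward. Each stage fully commits before the next stage begins."""
    done = []
    for stage in stages:
        if all(prepare(op) for op in stage):
            for op in stage:
                commit(op)
            done.append(stage)
        else:
            # An abort is visible only within this stage: earlier stages
            # are already durable, later stages never observe this one.
            for op in stage:
                abort(op)
            return False, done
    return True, done

ok, committed = staged_commit(
    stages=[["update@bank"], ["update@hospital"]],  # hypothetical domains
    prepare=lambda op: True,
    commit=lambda op: print("commit", op),
    abort=lambda op: print("abort", op),
)
print("all committed:", ok)
```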