Improving Memory and I/O Systems Through Foresight

Abstract

Traditionally, DRAM scheduling techniques have been optimized for performance. Only recently has there been a push to improve other optimization metrics, such as energy efficiency, power, or fairness. A multitude of scheduling algorithms have been proposed in the past few years to tackle these goals, but a major shortcoming of many of these techniques is that they rely on inflexible, static, hard-coded scheduling policies that lack the ability to learn and improve automatically with experience, or to reconfigure themselves to target a variety of optimization metrics. Recently, Ipek et al. [32] proposed the use of reinforcement learning (RL) to design high-performance, self-optimizing memory schedulers. Reinforcement learning is a machine learning technique that learns automatically with experience by interacting with the environment, picking the actions that maximize a desired long-term objective function. By using an online learning technique like RL, memory controllers gain the capability of foresight and long-term planning, enabling a non-greedy approach to scheduling. However, Ipek et al.'s methodology has a key limitation: it does not provide a generalizable way to target an objective function. In this thesis, we present a framework for designing a class of memory controllers that can manage multiple objective functions in a synergistic and coordinated fashion. MORSE (Multi-Objective Reconfigurable Self-Optimizing Scheduler) is a systematic and general methodology for designing reconfigurable DRAM schedulers following RL principles. Our framework also provides a way to reconfigure the scheduler in the field (post-silicon), whether at boot time or dynamically at run time, to accommodate changes to the optimization criteria.

Beyond DRAM scheduling, the storage technology landscape is rapidly changing, driven primarily by device scaling. In particular, DRAM is scaling in density and frequency, and high-density DRAM chips are becoming increasingly common. As a result, memory systems are becoming structurally more complex, and a number of problems that were either non-existent or inconsequential in prior DRAM systems have started to surface. In particular, DRAM refresh overheads are on the rise. In the next part of this thesis, we investigate the refresh overheads caused by DRAM scaling and propose simple scheduling techniques that help mitigate the refresh stalls that occur in high-density DDR4 memory systems. These techniques again rely on foresight: they anticipate the patterns that lead to refresh stalls and plan ahead of time to mitigate them. Refresh scheduling is a real-time problem, and missing deadlines may lead to reliability concerns; hence, this part of the research focuses on simple prioritization techniques that do not require complex online learning to overcome refresh stalls.

Over the past few years, computer systems of all types have started integrating flash memory, and the use of NAND-flash-based SSDs is becoming more widespread. As NAND flash scales, its high density and low cost make it a viable option for desktop and high-end server environments. Just as in DRAM, there are a number of interrelated goals and metrics that need to be managed synergistically in the SSD domain. Therefore, in the final chapter of this thesis, we tackle the problem of improving scheduling in I/O systems by leveraging our RL-based framework for designing self-optimizing schedulers. Current I/O controllers manage goals like write placement, garbage collection, and wear leveling individually, and they lack the capability of online learning: they are fixed, static scheduling policies. Since NAND-flash characteristics are known to vary over time as the flash dies wear out, it is important to understand how these goals correlate with each other. We adopt the principles of reinforcement learning to build self-optimizing SSD controllers that have the capability of foresight and planning, and can synergistically manage multiple objective functions in I/O systems.
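To make the RL formulation concrete, the sketch below shows a minimal tabular Q-learning loop choosing among DRAM commands. It is an illustrative approximation only, not the controller described in the thesis: the command set, the coarse queue-state features, and the reward of +1 per data-moving read or write are assumptions chosen for clarity.

```python
# Illustrative sketch: tabular Q-learning over DRAM commands (assumed, simplified model).
import random
from collections import defaultdict

COMMANDS = ["precharge", "activate", "read", "write", "nop"]  # assumed command set

class QScheduler:
    def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.05):
        self.q = defaultdict(float)  # Q(state, command) table
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state, legal):
        # Epsilon-greedy: mostly exploit learned values, occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(legal)
        return max(legal, key=lambda cmd: self.q[(state, cmd)])

    def update(self, state, cmd, reward, next_state, next_legal):
        # One-step Q-learning: move Q(state, cmd) toward the observed reward
        # plus the discounted value of the best command in the next state.
        best_next = max(self.q[(next_state, c)] for c in next_legal)
        target = reward + self.gamma * best_next
        self.q[(state, cmd)] += self.alpha * (target - self.q[(state, cmd)])

# Toy usage: the "state" is a coarse summary of the request queue (assumed
# features), and the reward is +1 whenever a read or write moves data.
sched = QScheduler()
state = ("row_hit_waiting", "queue_half_full")
for cycle in range(1000):
    legal = COMMANDS                 # a real controller restricts this via timing rules
    cmd = sched.choose(state, legal)
    reward = 1.0 if cmd in ("read", "write") else 0.0
    next_state = state               # a real controller derives this from DRAM/queue state
    sched.update(state, cmd, reward, next_state, legal)
    state = next_state
print(max(COMMANDS, key=lambda c: sched.q[(state, c)]))  # learned command preference
```

A hardware controller would approximate the Q-table (e.g., with hashed feature tables) and encode timing legality in the action set, but the reward-driven, non-greedy selection loop is the same idea described above.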

Date Issued

2014-01-27

Keywords

DRAMs; Scheduling; Performance; Power; Machine learning; RL; DDR4 Memory; Refresh; Flash; IOPS; Endurance

Committee Chair

Martinez, Jose F.

Committee Member

Lipson, Hod
Albonesi, David H.

Degree Discipline

Electrical Engineering

Degree Name

Ph.D., Electrical Engineering

Degree Level

Doctor of Philosophy

Types

dissertation or thesis
