Simple item record

dc.contributor.author: Srivastava, Nitish Kumar
dc.date.accessioned: 2020-08-10T20:24:28Z
dc.date.issued: 2020-05
dc.identifier.other: Srivastava_cornellgrad_0058F_11904
dc.identifier.other: http://dissertations.umi.com/cornellgrad:11904
dc.identifier.uri: https://hdl.handle.net/1813/70449
dc.description: 140 pages
dc.description.abstract: Tensor algebra lives at the heart of big data applications. Whereas classical machine learning techniques such as embedding generation in recommender systems, dimensionality reduction and latent Dirichlet allocation rely on multi-dimensional tensor factorizations, deep learning techniques such as convolutional neural networks, recurrent neural networks and graph learning use tensor computations primarily in the form of matrix-matrix and matrix-vector multiplications. The tensor computations used in many of these fields operate on sparse data, in which most of the elements are zero. Traditionally, tensor computations have been performed on CPUs and GPUs, both of which have low energy efficiency because they allocate excessive hardware resources to flexibly support various workloads. However, with the end of Moore's law and Dennard scaling, one can no longer expect more and faster transistors for the same dollar and power budget. This has led to an ever-growing need for energy-efficient, high-performance hardware and to a recent surge of interest in application-specific, domain-specific and behavior-specific accelerators, which sacrifice generality for higher performance and energy efficiency. In this dissertation, I explore hardware specialization for tensor computations by building programmable accelerators. A central theme of my dissertation is identifying common spatial optimizations, computation and memory access patterns, and building efficient storage formats and hardware for tensor computations. First, I present T2S-Tensor, a language and compilation framework for productively generating high-performance systolic arrays for dense tensor computations. Then I present a versatile accelerator, Tensaurus, that can accelerate both dense and mixed sparse-dense tensor computations. Here, I also introduce a new sparse storage format that allows sparse data to be accessed in a vectorized and streaming fashion, and thus achieves high memory bandwidth utilization for sparse tensor kernels. Finally, I present a novel sparse-sparse matrix multiplication accelerator, MatRaptor, designed using a row-wise product approach. I also show how these hardware specialization techniques outperform CPUs, GPUs and state-of-the-art accelerators in both energy efficiency and performance.
dc.rights: Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Matrix Multiplication
dc.subject: Sparse Tensor Computations
dc.subject: Tensor Accelerator
dc.subject: Tensor Decomposition
dc.title: Design and Generation of Efficient Hardware Accelerators for Sparse and Dense Tensor Computations
dc.type: dissertation or thesis
dc.description.embargo: 2021-06-08
thesis.degree.discipline: Electrical and Computer Engineering
thesis.degree.level: Doctor of Philosophy
thesis.degree.name: Ph. D., Electrical and Computer Engineering
dc.contributor.chair: Albonesi, David H.
dc.contributor.chair: Zhang, Zhiru
dc.contributor.committeeMember: Batten, Christopher
dc.contributor.committeeMember: Manohar, Rajit
dcterms.license: https://hdl.handle.net/1813/59810
dc.identifier.doi: https://doi.org/10.7298/5ksm-sm92
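
The abstract above mentions a row-wise product approach to sparse-sparse matrix multiplication, the dataflow behind MatRaptor. The sketch below is a minimal software illustration of that general idea (often called Gustavson's algorithm); the list-of-(column, value) row representation and the function name spgemm_row_wise are assumptions made for illustration, not the dissertation's storage format or hardware design.

# Minimal sketch of row-wise (Gustavson-style) sparse-sparse matrix
# multiplication. Illustration only: the row layout and names here are
# assumptions, not MatRaptor's storage format or hardware dataflow.
def spgemm_row_wise(A, B):
    """Compute C = A @ B, where each matrix is a list of rows and each row
    is a list of (column, value) pairs for its nonzero entries."""
    C = []
    for a_row in A:                    # produce one output row at a time
        acc = {}                       # sparse accumulator for this row of C
        for k, a_val in a_row:         # each nonzero A[i, k] ...
            for j, b_val in B[k]:      # ... scales the whole k-th row of B
                acc[j] = acc.get(j, 0.0) + a_val * b_val
        C.append(sorted(acc.items()))  # emit row i of C in column order
    return C

# Tiny usage example: A = [[2, 0], [1, 3]], B = [[0, 4], [5, 0]].
A = [[(0, 2.0)], [(0, 1.0), (1, 3.0)]]
B = [[(1, 4.0)], [(0, 5.0)]]
print(spgemm_row_wise(A, B))  # [[(1, 8.0)], [(0, 15.0), (1, 4.0)]]

One reason row-wise product dataflows are attractive for accelerators is that partial results are confined to a single output row at a time, which keeps on-chip accumulation state small.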

