
Multiarchitecture Hardware Acceleration of Hyperdimensional Computing Using oneAPI

Peitzsch, Ian (2024) Multiarchitecture Hardware Acceleration of Hyperdimensional Computing Using oneAPI. Master's Thesis, University of Pittsburgh. (Unpublished)



Hyperdimensional computing (HDC) is a machine-learning method that seeks to mimic the high-dimensional nature of data processing in the cerebellum. To achieve this goal, HDC represents data as large vectors, called hypervectors, and uses a set of well-defined operations to perform symbolic computations on these hypervectors. Using this paradigm, it is possible to create HDC models for classification tasks. These models work by first transforming the input data into hypervectors and then combining hypervectors of the same class to create a single hypervector representing that class. A trained model classifies new input data by transforming it into hyperdimensional space, comparing its similarity with each class hypervector, and assigning the class with the highest similarity. Over the past few years, HDC models have greatly improved in accuracy and now compete with more common machine-learning classification techniques, such as neural networks. Additionally, manipulating hypervectors involves many repeated basic operations that are both easily parallelizable and pipelinable, making HDC well suited to acceleration on different hardware platforms. This research exploits this ease of acceleration and uses oneAPI libraries with SYCL to create accelerators for HDC learning tasks on CPUs, GPUs, and field-programmable gate arrays (FPGAs). The oneAPI tools are used to accelerate single-pass learning, gradient-descent learning using the NeuralHD algorithm, and inference. Each of these tasks is benchmarked on the Intel Xeon Platinum 8256 CPU, Intel UHD 11th-generation GPU, and Intel Stratix 10 FPGA. The GPU implementation achieved the fastest training times for single-pass training and NeuralHD training, at 0.89 s and 126.55 s, respectively. The FPGA implementation exhibited the lowest inference latency, averaging 0.28 ms.
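The single-pass learning and similarity-based classification described above can be sketched in a few lines. The following is a hypothetical toy illustration of the general HDC scheme (random bipolar feature hypervectors, a simple weighted-sum encoding, per-class bundling, and cosine-similarity classification), not the thesis's oneAPI/SYCL implementation; all names and the tiny dataset are invented for illustration.

```python
# Toy sketch of single-pass HDC classification (illustrative only).
import random
import math

D = 1000  # hypervector dimensionality (practical HDC models often use ~10,000)

def random_hv(rng):
    """Random bipolar hypervector with entries in {-1, +1}."""
    return [rng.choice((-1, 1)) for _ in range(D)]

def encode(sample, feature_hvs):
    """Encode a feature vector into hyperdimensional space by weighting
    each feature's random hypervector by its value and summing."""
    hv = [0.0] * D
    for value, fhv in zip(sample, feature_hvs):
        for i in range(D):
            hv[i] += value * fhv[i]
    return hv

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def train_single_pass(samples, labels, feature_hvs, n_classes):
    """Single-pass learning: add each encoded sample into its class HV."""
    class_hvs = [[0.0] * D for _ in range(n_classes)]
    for sample, label in zip(samples, labels):
        hv = encode(sample, feature_hvs)
        for i in range(D):
            class_hvs[label][i] += hv[i]
    return class_hvs

def classify(sample, feature_hvs, class_hvs):
    """Predict the class whose hypervector is most similar to the query."""
    hv = encode(sample, feature_hvs)
    sims = [cosine(hv, chv) for chv in class_hvs]
    return max(range(len(sims)), key=sims.__getitem__)

rng = random.Random(0)
feature_hvs = [random_hv(rng) for _ in range(2)]  # two input features
samples = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.1, 0.9)]  # toy data
labels = [0, 0, 1, 1]
class_hvs = train_single_pass(samples, labels, feature_hvs, 2)
print(classify((0.95, 0.05), feature_hvs, class_hvs))  # → 0
print(classify((0.05, 0.95), feature_hvs, class_hvs))  # → 1
```

Because random bipolar hypervectors are nearly orthogonal in high dimensions, the per-class sums stay distinguishable under cosine similarity; the inner loops here are exactly the kind of repeated, independent element-wise work that maps well to parallel hardware.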




Item Type: University of Pittsburgh ETD
Status: Unpublished
Creators: Peitzsch, Ian
Email: iap20@pitt.edu
Pitt Username: iap20
ORCID: 0009-0004-2012-9677
ETD Committee:
Thesis Advisor: George, Alan
Committee Member: Zhou,
Committee Member: Hu,
Date: 11 January 2024
Date Type: Publication
Defense Date: 17 April 2023
Approval Date: 11 January 2024
Submission Date: 19 April 2023
Access Restriction: No restriction; Release the ETD for access worldwide immediately.
Number of Pages: 24
Institution: University of Pittsburgh
Schools and Programs: Swanson School of Engineering > Computer Engineering
Degree: MS - Master of Science
Thesis Type: Master's Thesis
Refereed: Yes
Uncontrolled Keywords: Hyperdimensional computing, machine learning, FPGA, GPU, HLS
Date Deposited: 11 Jan 2024 19:26
Last Modified: 11 Jan 2024 19:26

