CMU-CS-24-143
Computer Science Department
School of Computer Science, Carnegie Mellon University



Mechanisms for Efficient Cache Access
in Near-Cache Accelerators

Piratach Yoovidhya

M.S. Thesis

August 2024



Keywords: Near-cache computing, Predictor, Data-centric, Cache-coherence

In traditional computer systems, data must move through the memory hierarchy before the core can compute on it. The cost of this data movement has come to dominate system performance and will only grow over time. Many proposals address this problem with architectures that move compute closer to data.

Like some of these proposals, ours places compute engines within the cache hierarchy, allowing the core to offload work to the caches. When an engine misses in its local cache, the requested data may reside at either of two levels of the memory hierarchy. Sending a request to only one of the two locations risks a further miss, increasing the engine's miss latency; sending requests to both locations at once wastes energy.

In this thesis, we introduce a novel Memory Access Predictor that assists the engine in issuing requests so as to minimize energy usage while retaining high performance. We evaluate the predictor on various micro-benchmarks, showing that it improves performance by 15% on some applications and reduces additional energy consumption by 91% on others.
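The decision the predictor makes can be illustrated with a minimal sketch. This is not the thesis's actual design; it assumes a simple table of 2-bit saturating counters indexed by a hash of the block address, which guesses whether a missing block will be found in the last-level cache or in memory, and is trained by where the block was actually found. All names (`MemoryAccessPredictor`, `predict`, `update`) are illustrative.

```python
class MemoryAccessPredictor:
    """Hypothetical sketch: 2-bit saturating counters indexed by block address.

    Counter values 0-1 predict the block is in the last-level cache ("llc");
    values 2-3 predict it is in memory ("memory").
    """

    def __init__(self, entries=1024):
        self.entries = entries
        self.table = [1] * entries  # start weakly predicting "llc"

    def _index(self, addr):
        # Hash on the 64-byte cache-line address (assumed line size).
        return (addr >> 6) % self.entries

    def predict(self, addr):
        """Choose the single level to which the miss request is sent."""
        return "memory" if self.table[self._index(addr)] >= 2 else "llc"

    def update(self, addr, found_in_memory):
        """Train the counter toward the level that actually supplied the block."""
        i = self._index(addr)
        if found_in_memory:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)
```

On a correct prediction the engine sends a single request and avoids the energy of a duplicate; on a misprediction it pays an extra round trip, which is the latency/energy trade-off the abstract describes.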

42 pages

Thesis Committee:
Nathan Beckmann (Chair)
Phillip Gibbons

Srinivasan Seshan, Head, Computer Science Department
Martial Hebert, Dean, School of Computer Science

