Seminar by Saurabh Gupta

Locality-Driven Cache Partitioning: A Case for Leveraging Spatial Locality for Effective Cache Sharing

Saurabh Gupta
North Carolina State University

    Date:    Wednesday, December 5th, 2012
    Time:    4PM
    Venue:   CS101

Abstract:

In multi-core architectures, last-level caches (LLCs) are typically shared among multiple cores. Prior work has shown that such sharing can be beneficial because different workloads have different cache-capacity needs. What prior work on partitioning shared LLCs has not explored, however, is the heterogeneity in spatial locality among workloads: all cores use the same block/line size in the shared LLC. In this work, we show that exploiting spatial locality enables much more effective cache sharing. The fundamental reason is that for many memory-intensive workloads, the cache capacity requirement drops drastically when a larger block size is employed, so those workloads can effectively donate more capacity to other workloads. To leverage locality for cache partitioning, we first propose a simple yet effective mechanism to measure locality at run time. The locality information is then used to determine both the proper block size and the capacity assigned to each workload. Our experiments show that our locality-driven cache partitioning (LCP) significantly outperforms prior schemes across different LLC configurations. For a 4-core system (8-core system) with a 4MB (8MB) LLC, it improves throughput by 28.9% (11.5%) over the previously proposed utility-based cache partitioning algorithm, in the presence of an aggressive data prefetcher.
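To make the idea concrete, the following is a minimal toy sketch of per-core partitioning driven by locality information. It is not the authors' actual LCP mechanism: the locality profiles, way counts, and the greedy allocation policy are all illustrative assumptions. The sketch only shows the core intuition from the abstract, namely that a workload with strong spatial locality can choose a larger block size, shrink its capacity need, and thereby free capacity for other cores.

```python
# Hypothetical sketch of locality-driven partitioning of a shared LLC.
# All numbers and the allocation policy are illustrative assumptions,
# not the mechanism proposed in the talk.

TOTAL_WAYS = 16  # e.g. a 16-way set-associative shared LLC

# Assumed run-time locality profile: for each core, the estimated number
# of cache ways needed to hold its working set at a given block size.
# Workloads with strong spatial locality need far less capacity when a
# larger block is used; workloads with poor spatial locality need more.
profiles = {
    "core0": {64: 10, 128: 4},  # strong spatial locality: big blocks help
    "core1": {64: 6, 128: 6},   # indifferent to block size
    "core2": {64: 4, 128: 8},   # poor spatial locality: big blocks hurt
}

def partition(profiles, total_ways):
    """Pick each core's block size to minimize its capacity need,
    then spread any leftover ways evenly across the cores."""
    choice = {}
    for core, need in profiles.items():
        block, ways = min(need.items(), key=lambda kv: kv[1])
        choice[core] = {"block_size": block, "ways": ways}
    leftover = total_ways - sum(c["ways"] for c in choice.values())
    # Distribute the freed-up capacity round-robin among all cores.
    n = len(choice)
    for i, core in enumerate(choice):
        choice[core]["ways"] += leftover // n + (1 if i < leftover % n else 0)
    return choice

print(partition(profiles, TOTAL_WAYS))
```

In this toy run, core0 switches to 128-byte blocks and needs only 4 ways instead of 10, so 2 extra ways become available for redistribution; a real implementation would of course measure locality in hardware and weigh the utility of each extra way, as the talk describes.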

About the speaker:

Saurabh Gupta is a Ph.D. candidate in Computer Engineering in the Department of Electrical and Computer Engineering at North Carolina State University. He received his Master's and Bachelor's degrees in Electrical Engineering from the Indian Institute of Technology Kanpur in 2009. His research interests include processor microarchitecture, cache architecture, and memory-hierarchy optimizations for high-performance systems.
