TY - GEN
T1 - Don't Melt Your Cache
T2 - 37th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2025
AU - Bender, Michael A.
AU - Conway, Alex
AU - Delayo, Daniel
AU - Farach-Colton, Martin
AU - Han, Jaehyun
AU - He, Linfeng
AU - Johnson, Rob
AU - Kannan, Sudarsun
AU - Kuszmaul, William
AU - Porter, Donald
AU - West, Evan
N1 - Publisher Copyright: © 2025 ACM.
PY - 2025/7/16
Y1 - 2025/7/16
N2 - Perhaps the most influential result in the theory of caches is the following theorem due to Sleator and Tarjan: With O(1) resource augmentation, the basic LRU eviction policy is guaranteed to be O(1)-competitive with the optimal offline policy. Sleator and Tarjan's result applies to LRU on a fully associative cache, but does not tell us how to think about caches with low associativity, i.e., caches where each page has only d positions in which it is capable of residing. This means that many modern caches cannot directly apply the result. It is widely believed that to implement a cache with low associativity, one should still use LRU, but restricted to the d positions that are eligible for eviction. However, this low-associativity version of LRU has never been analyzed. We show that low-associativity implementations of LRU are often actually not constant-competitive algorithms. On the other hand, we give randomized eviction algorithms that are constant-competitive, and even a d-associative algorithm that, using any d = ω(1), and using 1 + o(1) resource augmentation, is 1 + o(1) competitive with the fully-associative LRU algorithm. Combined, our algorithms suggest a new way of thinking about the design of low-associativity caches, in which one intentionally designs randomized mechanisms that allow parts of the cache which are "overheating" to naturally cool down.
AB - Perhaps the most influential result in the theory of caches is the following theorem due to Sleator and Tarjan: With O(1) resource augmentation, the basic LRU eviction policy is guaranteed to be O(1)-competitive with the optimal offline policy. Sleator and Tarjan's result applies to LRU on a fully associative cache, but does not tell us how to think about caches with low associativity, i.e., caches where each page has only d positions in which it is capable of residing. This means that many modern caches cannot directly apply the result. It is widely believed that to implement a cache with low associativity, one should still use LRU, but restricted to the d positions that are eligible for eviction. However, this low-associativity version of LRU has never been analyzed. We show that low-associativity implementations of LRU are often actually not constant-competitive algorithms. On the other hand, we give randomized eviction algorithms that are constant-competitive, and even a d-associative algorithm that, using any d = ω(1), and using 1 + o(1) resource augmentation, is 1 + o(1) competitive with the fully-associative LRU algorithm. Combined, our algorithms suggest a new way of thinking about the design of low-associativity caches, in which one intentionally designs randomized mechanisms that allow parts of the cache which are "overheating" to naturally cool down.
UR - https://www.scopus.com/pages/publications/105012713992
U2 - 10.1145/3694906.3743303
DO - 10.1145/3694906.3743303
M3 - Conference contribution
T3 - Annual ACM Symposium on Parallelism in Algorithms and Architectures
SP - 555
EP - 565
BT - SPAA 2025 - Proceedings of the 2025 37th ACM Symposium on Parallelism in Algorithms and Architectures
PB - Association for Computing Machinery
Y2 - 28 July 2025 through 1 August 2025
ER -