Result: Compiler-managed partitioned data caches for low power

Title:
Compiler-managed partitioned data caches for low power
Source:
Proceedings of the 2007 ACM SIGPLAN-SIGBED Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES 2007), San Diego, California, June 13-15, 2007. ACM SIGPLAN Notices, 42(7):237-247
Publisher Information:
New York, NY: ACM, 2007.
Publication Year:
2007
Physical Description:
Print; 42 references
Original Material:
INIST-CNRS
Subject Terms:
Computer science, Exact sciences and technology, Applied sciences, Computer science; control theory; systems, Software, Programming languages, Computer systems and distributed systems; user interface, Electronics, Semiconductor electronics; microelectronics; optoelectronics; solid-state devices, Integrated circuits, Integrated circuits by function (including memories and processors), Cache memory, Embedded computer, Dynamic characteristic, Compiler, Power consumption, Energy consumption, Optimal decision, Energy savings, Programming language, Decision making, Low power, Redundancy, Replacement, Delay, Hardware complexity, Data partition, Algorithms, Design, Experimentation, Partitioned cache, Performance, embedded processor, hardware/software co-managed cache, instruction-driven cache management, low-power
Document Type:
Conference Paper
File Description:
text
Language:
English
Author Affiliations:
Java, Compilers, and Tools Laboratory Hewlett-Packard Company, Cupertino, CA, United States
Advanced Computer Architecture Laboratory University of Michigan, Ann Arbor, MI, United States
ISSN:
1523-2867
Rights:
Copyright 2007 INIST-CNRS
CC BY 4.0
Unless otherwise stated above, the content of this bibliographic record may be used under a CC BY 4.0 licence by Inist-CNRS.
Notes:
Computer science; control theory; systems

Electronics
Accession Number:
edscal.19154372
Database:
PASCAL Archive

Abstract:

Set-associative caches are traditionally managed using hardware-based lookup and replacement schemes that have high energy overheads. Ideally, the caching strategy should be tailored to the application's memory needs, thus enabling optimal use of this on-chip storage to maximize performance while minimizing power consumption. However, doing this in hardware alone is difficult due to hardware complexity, high power dissipation, overheads of dynamic discovery of application characteristics, and increased likelihood of making locally optimal decisions. The compiler can instead determine the caching strategy by analyzing the application code and providing hints to the hardware. We propose a hardware/software co-managed partitioned cache architecture in which enhanced load/store instructions are used to control fine-grained data placement within a set of cache partitions. In comparison to traditional partitioning techniques, load and store instructions can individually specify the set of partitions for lookup and replacement. This fine-grained control can avoid conflicts, thus providing the performance benefits of highly associative caches, while saving energy by eliminating redundant tag and data array accesses. Using four direct-mapped partitions, we eliminated 25% of the tag checks and recorded an average 15% reduction in the energy-delay product compared to a hardware-managed 4-way set-associative cache.
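The mechanism the abstract describes can be illustrated with a toy simulation. This is a minimal sketch, not the paper's implementation: all class and parameter names here are hypothetical, the cache is modeled as tags only (no data, no real replacement policy), and the partition hint is a simple bit mask standing in for the paper's enhanced load/store instructions. The point it shows is that restricting lookup to a compiler-specified subset of direct-mapped partitions performs fewer tag checks than probing every way of a set-associative cache.

```python
# Hypothetical model: a cache split into direct-mapped partitions, where each
# access carries a compiler-supplied partition mask for lookup/replacement.
NUM_PARTITIONS = 4      # four direct-mapped partitions, as in the paper's setup
SETS_PER_PARTITION = 8  # toy size, chosen only for illustration

class PartitionedCache:
    def __init__(self):
        # tags[p][s] holds the tag cached in partition p, set s (None = empty)
        self.tags = [[None] * SETS_PER_PARTITION for _ in range(NUM_PARTITIONS)]
        self.tag_checks = 0  # counts tag-array comparisons performed

    def access(self, addr, mask):
        """Look up addr only in the partitions enabled by the bit mask.
        On a miss, fill the lowest-numbered enabled partition."""
        s = addr % SETS_PER_PARTITION
        tag = addr // SETS_PER_PARTITION
        enabled = [p for p in range(NUM_PARTITIONS) if mask & (1 << p)]
        for p in enabled:
            self.tag_checks += 1
            if self.tags[p][s] == tag:
                return True             # hit
        self.tags[enabled[0]][s] = tag  # miss: replace within the enabled set
        return False

# An access stream that the compiler directs to a single partition needs one
# tag check per access, versus four for a full 4-way probe of the same stream.
cache = PartitionedCache()
for addr in range(16):
    cache.access(addr, mask=0b0001)  # hint: look up / replace in partition 0 only
print(cache.tag_checks)              # 16 checks instead of 64
```

The same model run with `mask=0b1111` on every access degenerates to a conventional 4-way probe, which is the redundant tag-array work the instruction-driven hints eliminate.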