Title:
Low-Rank Equilibrium Propagation: An Online Incremental Learning Architecture for Analog-Based Hardware Accelerators
Contributors:
ADAptive Computing team, Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM), Centre National de la Recherche Scientifique (CNRS), Université de Montpellier (UM); University of Bremen, Germany; ANR-23-PEIA-0002 EMERGENCES, Near-physics emerging models for embedded AI (2023)
Source:
MOCAST 2025 - 14th International Conference on Modern Circuits and Systems Technologies ; https://hal.science/hal-05098393 ; MOCAST 2025 - 14th International Conference on Modern Circuits and Systems Technologies, Jun 2025, Dresden, Germany. ⟨10.1109/MOCAST65744.2025.11083959⟩
Publisher Information:
CCSD
Publication Year:
2025
Collection:
Université de Montpellier: HAL
Document Type:
Conference object
Language:
English
DOI:
10.1109/MOCAST65744.2025.11083959
Rights:
info:eu-repo/semantics/OpenAccess
Accession Number:
edsbas.EE985545
Database:
BASE

Further Information

Edge machine learning is emerging as a fundamental technology to address the rapid growth of smart devices and sensors. However, deploying deep learning at the edge remains difficult due to limited memory and computational capacity. Among analog processing solutions, Equilibrium Propagation (EP) has emerged as a promising alternative to backpropagation, offering the potential for significant gains in embedded systems. Yet, its practical implementation remains challenging.

In this work, we introduce a novel method to address the challenges of EP by proposing a low-rank approximation of the EP algorithm. This approach enables efficient, end-to-end online incremental training on analog neuromorphic accelerators, which is essential for handling the aging of analog devices, imprecise programming of memristors, and other hardware imperfections.

While EP offers a unified forward and backward process, its standard implementation requires storing gradients for each device, leading to substantial overhead in both area and power. Our Low-Rank Equilibrium Propagation (LOREP) scheme mitigates the need for high-precision gradient storage and reduces read–write cycles by approximating gradients with low-rank factors. Experimental results on two popular datasets show that LOREP recovers 2–3% of the lost accuracy while requiring less than 5% of the gradient capacitors needed for full-scale training, highlighting its potential for deployment in resource-constrained environments.
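The abstract describes replacing per-device gradient storage with low-rank factors. The paper's actual LOREP scheme is not reproduced here; the following is a minimal numerical sketch, under assumed dimensions and rank budget, of the general idea: EP's dense-layer update is an outer-product difference between free-phase and nudged-phase activities, and a batch-accumulated gradient of this form can be truncated to rank r so that only the factors (rather than the full matrix) need to be stored.

```python
import numpy as np

# Hypothetical illustration (not the paper's implementation). EP's weight update
# for a dense layer is proportional to (1/beta) * (s_b s_b^T - s_0 s_0^T),
# where s_0 / s_b are equilibrium activities in the free / nudged phases.
rng = np.random.default_rng(0)
n, beta, batch, r = 64, 0.5, 16, 4   # layer size, nudging strength, batch, rank budget

# Accumulate the full EP gradient estimate over a batch (rank can reach 2*batch).
g_full = np.zeros((n, n))
for _ in range(batch):
    s_free = rng.standard_normal(n)
    s_nudge = s_free + 0.1 * rng.standard_normal(n)
    g_full += (np.outer(s_nudge, s_nudge) - np.outer(s_free, s_free)) / beta

# Low-rank compression via truncated SVD: store only the r leading factors.
U, s, Vt = np.linalg.svd(g_full)
g_lowrank = (U[:, :r] * s[:r]) @ Vt[:r]

full_storage = n * n                 # one value per synapse
lowrank_storage = r * (2 * n + 1)    # r left/right vectors plus singular values
rel_err = np.linalg.norm(g_full - g_lowrank) / np.linalg.norm(g_full)
print(f"storage: {lowrank_storage}/{full_storage}, relative error: {rel_err:.3f}")
```

In this toy setting the rank-r factors need far fewer stored values than the full n×n gradient, which mirrors the abstract's claim of needing only a small fraction of the gradient capacitors; the actual hardware mapping and factor-update rule are specific to the paper.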