Low-Rank Equilibrium Propagation: An Online Incremental Learning Architecture for Analog-Based Hardware Accelerators
Edge machine learning is emerging as a key technology for coping with the rapid growth of smart devices and sensors. However, deploying deep learning at the edge remains difficult due to limited memory and compute capacity. Among analog processing solutions, Equilibrium Propagation (EP) has emerged as a promising alternative to backpropagation, offering potential efficiency gains in embedded systems. Yet its practical implementation remains challenging.

In this work, we introduce a novel method that addresses these challenges through a low-rank approximation of the EP algorithm. The approach enables efficient, end-to-end online incremental training on analog neuromorphic accelerators, which is essential for handling the aging of analog devices, imprecise programming of memristors, and other hardware imperfections.

While EP unifies the forward and backward processes, its standard implementation requires storing a gradient for each device, incurring substantial overhead in both area and power. Our Low-Rank Equilibrium Propagation (LOREP) scheme mitigates the need for high-precision gradient storage and reduces read-write cycles by approximating gradients with low-rank factors. Experimental results on two popular datasets show that LOREP recovers 2-3% of the lost accuracy while requiring fewer than 5% of the gradient capacitors needed for full-scale training, highlighting its potential for deployment in resource-constrained environments.
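To make the core idea concrete, the following is a minimal NumPy sketch of compressing an accumulated EP gradient into low-rank factors before storage. The hard-sigmoid activation and the two-phase (free vs. nudged) contrastive update are the standard EP formulation; the truncated-SVD factorization, the layer sizes, the rank, and the nudging strength are illustrative assumptions, not the paper's actual design, which operates on analog capacitor arrays rather than digital matrices.

```python
# Sketch of low-rank gradient approximation in the spirit of LOREP.
# All sizes and the SVD-based factorization are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def rho(s):
    # Hard-sigmoid activation commonly used in Equilibrium Propagation.
    return np.clip(s, 0.0, 1.0)

def ep_gradient(pre_free, post_free, pre_nudged, post_nudged, beta):
    # Two-phase EP estimate for one weight matrix:
    # dW ~ (1/beta) * (rho(s^beta) rho(s^beta)^T - rho(s^0) rho(s^0)^T)
    return (np.outer(rho(pre_nudged), rho(post_nudged))
            - np.outer(rho(pre_free), rho(post_free))) / beta

def low_rank_factors(grad, r):
    # Keep only the top-r singular triplets: the device then stores
    # r*(m+n) values instead of m*n full-precision gradients.
    u, s, vt = np.linalg.svd(grad, full_matrices=False)
    return u[:, :r] * s[:r], vt[:r, :]  # singular values folded into U

m, n, r, beta, lr = 128, 64, 2, 0.5, 0.05
W = rng.normal(scale=0.1, size=(m, n))

# Accumulate EP estimates over a small batch of equilibrium states
# (random stand-ins here for the analog settling phases).
grad = np.zeros((m, n))
for _ in range(32):
    pre_free = rng.normal(size=m)
    post_free = rng.normal(size=n)
    pre_nudged = pre_free + 0.1 * rng.normal(size=m)
    post_nudged = post_free + 0.1 * rng.normal(size=n)
    grad += ep_gradient(pre_free, post_free, pre_nudged, post_nudged, beta)

U, Vt = low_rank_factors(grad, r)
W -= lr * (U @ Vt)  # weight update applied from the stored factors
print("storage ratio:", r * (m + n) / (m * n))
print("relative approx. error:",
      np.linalg.norm(grad - U @ Vt) / np.linalg.norm(grad))
```

With these toy sizes, the rank-2 factors occupy roughly 4.7% of the full gradient's footprint, which loosely mirrors the sub-5% capacitor requirement the abstract reports; the exact savings depend on the layer dimensions and the chosen rank.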