Title:
Hardware Implementation of Improved Banker’s Fixed-Point Rounding Algorithm
Source:
IEEE Access, vol. 13, pp. 36679–36686 (2025)
Publisher Information:
IEEE, 2025.
Publication Year:
2025
Collection:
LCC:Electrical engineering. Electronics. Nuclear engineering
Document Type:
Academic journal article
File Description:
electronic resource
Language:
English
ISSN:
2169-3536
DOI:
10.1109/ACCESS.2025.3543540
Accession Number:
edsdoj.645e97cb10a64d9fb77efb3e2d44459c
Database:
Directory of Open Access Journals

Further Information

In recent years, FPGA-based convolutional neural network (CNN) accelerators have received tremendous research interest, especially in fields such as autonomous driving and robotics. To accelerate convolution computations, the Winograd fast convolution algorithm is frequently employed. However, implementing the Winograd algorithm on an FPGA involves multiple rounding operations, and the accuracy of these operations substantially impacts the convolution results. Compared with other rounding algorithms, banker's rounding has advantages such as a more symmetric error distribution and smaller errors, making it well suited to Winograd convolution computation. However, the conventional banker's rounding algorithm was proposed for floating-point calculations, whereas FPGAs implement fixed-point arithmetic. Moreover, it frequently rounds 0.5 to 0, which invalidates convolution weights and introduces significant errors. To overcome these challenges, an improved hardware circuit for implementing a fixed-point banker's rounding algorithm is proposed. Experimental results show that, compared with common rounding-up and rounding-down methods, the proposed algorithm exhibits smaller errors and effectively resolves the weight-invalidation issue of conventional banker's rounding, yielding a significant 55.6% improvement in computational accuracy.
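The abstract does not detail the paper's improved circuit, but the baseline behavior it criticizes can be illustrated. The sketch below is a minimal software model (not the authors' hardware design) of conventional banker's rounding, i.e. round-half-to-even, applied to a two's-complement fixed-point value; the function name and interface are illustrative assumptions. It shows the tie case the paper targets: a value of exactly 0.5 rounds to the even neighbor 0, which zeroes out small convolution weights.

```python
def bankers_round_fixed(v: int, frac_bits: int) -> int:
    """Round a two's-complement fixed-point integer `v` with `frac_bits`
    fractional bits to the nearest integer, ties to even (conventional
    banker's rounding). Illustrative model only, not the paper's circuit."""
    if frac_bits == 0:
        return v
    half = 1 << (frac_bits - 1)            # 0.5 in this fixed-point format
    frac = v & ((1 << frac_bits) - 1)      # fractional field (valid for negatives too)
    ipart = v >> frac_bits                 # arithmetic shift = floor of the value
    if frac > half:                        # above .5: round up
        return ipart + 1
    if frac < half:                        # below .5: round down
        return ipart
    # Exact tie (.5): pick the even neighbor. Note 0.5 -> 0 here,
    # the weight-invalidation case the improved algorithm addresses.
    return ipart + (ipart & 1)
```

For example, with one fractional bit, the raw values 1, 5, and 7 encode 0.5, 2.5, and 3.5 and round to 0, 2, and 4 respectively, showing both the symmetric tie handling and the 0.5-to-0 behavior.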