
Title:
ADAMT: Adaptive distributed multi-task learning for efficient image recognition in Mobile Ad-hoc Networks.
Authors:
Zhao J; School of Computer Technology and Engineering, Changchun Institute of Technology, Changchun, China; College of Artificial Intelligence Technology, Changchun Institute of Technology, Changchun, China; School of Electronics Engineering and Computer Science, Peking University, Beijing, China. Electronic address: zhaojia@ccit.edu.cn., Zhao W; School of Computer Science and Engineering, Changchun University of Technology, Changchun, China. Electronic address: 2202103111@ccut.edu.cn., Zhai Y; Education Examinations Authority of Jilin Province, Changchun, China. Electronic address: zhynan@jleea.com.cn., Zhang L; School of Computer Technology and Engineering, Changchun Institute of Technology, Changchun, China; College of Artificial Intelligence Technology, Changchun Institute of Technology, Changchun, China. Electronic address: zhangly@ccit.edu.cn., Ding Y; School of Computer Technology and Engineering, Changchun Institute of Technology, Changchun, China; College of Artificial Intelligence Technology, Changchun Institute of Technology, Changchun, China. Electronic address: dingyan@ccit.edu.cn.
Source:
Neural networks : the official journal of the International Neural Network Society [Neural Netw] 2025 Jul; Vol. 187, pp. 107316. Date of Electronic Publication: 2025 Mar 06.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: Pergamon Press Country of Publication: United States NLM ID: 8805018 Publication Model: Print-Electronic Cited Medium: Internet ISSN: 1879-2782 (Electronic) Linking ISSN: 08936080 NLM ISO Abbreviation: Neural Netw Subsets: MEDLINE
Imprint Name(s):
Original Publication: New York : Pergamon Press, [c1988-
Contributed Indexing:
Keywords: Decentralized learning; Distributed multi-task learning; Image recognition; Mobile ad-hoc networks
Entry Date(s):
Date Created: 20250312 Date Completed: 20250427 Latest Revision: 20250427
Update Code:
20250428
DOI:
10.1016/j.neunet.2025.107316
PMID:
40073619
Database:
MEDLINE

Abstract:

Distributed machine learning in mobile ad hoc networks faces significant challenges due to the limited computational resources of devices, non-IID data distributions, and dynamic network topology. Existing approaches often rely on centralized coordination and stable network conditions, which may not be feasible in practice. To address these issues, we propose an adaptive distributed multi-task learning framework called ADAMT for efficient image recognition in resource-constrained mobile ad hoc networks. ADAMT introduces three key innovations: (1) a feature expansion mechanism that enhances the expressiveness of local models by leveraging task-specific information; (2) a deep hashing technique that enables efficient on-device retrieval and multi-task fusion; and (3) an adaptive communication strategy that dynamically adjusts the model updating process based on network conditions and node reliability. The proposed framework allows each device to perform personalized model training on its local dataset while collaboratively updating the shared parameters with neighboring nodes. Extensive experiments on the ImageNet dataset demonstrate the superiority of ADAMT over state-of-the-art methods. ADAMT achieves a top-1 accuracy of 0.867, outperforming existing distributed learning approaches. Moreover, ADAMT significantly reduces communication overhead and accelerates convergence by a factor of 2.69 compared to traditional distributed SGD. The adaptive communication strategy effectively balances the trade-off between model performance and resource consumption, making ADAMT particularly suitable for resource-constrained environments. Our work sheds light on the design of efficient and robust distributed learning algorithms for mobile ad hoc networks and paves the way for deploying advanced machine learning applications on edge devices.
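The abstract's third innovation, an adaptive communication strategy in which each node mixes shared parameters only with sufficiently reliable neighbors, can be illustrated with a minimal sketch. The function name, the reliability-weighted averaging scheme, and the threshold below are assumptions for illustration; the paper's actual update rule is not specified in this record.

```python
import numpy as np

def adaptive_mix(local_params, neighbor_params, reliability, threshold=0.5):
    """Reliability-weighted mixing of shared parameters (illustrative sketch).

    Neighbors whose link reliability falls below `threshold` are skipped,
    standing in for an adaptive communication strategy that trades model
    agreement against communication cost. This is NOT the paper's exact rule.
    """
    weights = [1.0]            # the node always keeps its own estimate
    params = [local_params]
    for p, r in zip(neighbor_params, reliability):
        if r >= threshold:     # only communicate over reliable links
            weights.append(r)
            params.append(p)
    w = np.asarray(weights) / np.sum(weights)   # normalize mixing weights
    return sum(wi * pi for wi, pi in zip(w, params))

# Toy demo: three nodes hold different estimates of a shared parameter vector.
local = np.array([1.0, 1.0])
neighbors = [np.array([3.0, 3.0]), np.array([100.0, 100.0])]
reliability = [0.8, 0.1]       # the second link is dropped as unreliable

mixed = adaptive_mix(local, neighbors, reliability)
```

In this toy run only the first neighbor participates, so the result is the weighted average (1.0·local + 0.8·neighbor) / 1.8 per component; the outlier on the unreliable link never contaminates the update.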
(Copyright © 2025. Published by Elsevier Ltd.)

Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.