Title:
Adversarial Machine Learning Attacks and Defences in Multi-Agent Reinforcement Learning
Authors:
Source:
ACM Computing Surveys. 57:1-35
Publication Status:
Preprint
Publisher Information:
Association for Computing Machinery (ACM), 2025.
Publication Year:
2025
Subject Terms:
FOS: Computer and information sciences, 0301 basic medicine, Computer Science - Machine Learning, 03 medical and health sciences, Computer Science - Cryptography and Security, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, 0202 electrical engineering, electronic engineering, information engineering, 02 engineering and technology, 01 natural sciences, Cryptography and Security (cs.CR), 0105 earth and related environmental sciences, Machine Learning (cs.LG)
Document Type:
Academic journal
Article
Language:
English
ISSN:
1557-7341
0360-0300
DOI:
10.1145/3708320
10.48550/arxiv.2301.04299
Access URL:
Rights:
CC BY
CC BY NC SA
Accession Number:
edsair.doi.dedup.....090dcb6f2dfd90ccf4a7ea8223596edc
Database:
OpenAIRE
Further Information
Multi-Agent Reinforcement Learning (MARL) is susceptible to Adversarial Machine Learning (AML) attacks. Execution-time AML attacks against MARL are particularly complex because their effects propagate across time steps and between agents. To clarify the interaction between AML and MARL, this survey covers attacks and defences for MARL, Multi-Agent Learning (MAL), and Deep Reinforcement Learning (DRL). It proposes a novel perspective on AML attacks based on attack vectors, together with a modelling framework that addresses gaps in existing frameworks and enables different attacks against MARL to be compared. Lastly, it identifies knowledge gaps and future avenues of research.
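For context, the sketch below illustrates the kind of execution-time attack the abstract refers to: an FGSM-style perturbation applied to a single agent's observations at test time, whose effects then propagate to the other agents through the shared environment. All names, the environment interface (`get_observations`, `step`), tensor shapes, and the epsilon value are illustrative assumptions and are not taken from the survey itself.

```python
# Minimal sketch of an execution-time observation-perturbation attack on one
# agent in a multi-agent rollout. FGSM-style; hyperparameters and the
# environment API are assumptions, not details from the survey.
import torch
import torch.nn as nn


def fgsm_observation(policy: nn.Module, obs: torch.Tensor, epsilon: float = 0.05) -> torch.Tensor:
    """Perturb an observation so the victim's greedy action becomes less likely.

    `obs` is assumed to be shaped (1, obs_dim) and `policy(obs)` to return
    per-action logits shaped (1, n_actions).
    """
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    target = logits.argmax(dim=-1)                     # action the victim would take
    loss = nn.functional.cross_entropy(logits, target)
    loss.backward()
    # Ascend the loss with respect to the observation (FGSM sign step).
    return (obs + epsilon * obs.grad.sign()).detach()


def adversarial_step(env, policies, victim_id, epsilon=0.05):
    """One environment step in which only the victim agent's observation is attacked."""
    observations = env.get_observations()              # assumed: dict of agent_id -> tensor
    actions = {}
    for agent_id, obs in observations.items():
        if agent_id == victim_id:
            obs = fgsm_observation(policies[agent_id], obs, epsilon)
        with torch.no_grad():
            actions[agent_id] = policies[agent_id](obs).argmax(dim=-1)
    # The victim's degraded action now affects every agent's next observation
    # and reward, which is how the perturbation propagates across time and agents.
    return env.step(actions)
```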