Result: JailbreakLens: Visual Analysis of Jailbreak Attacks Against Large Language Models

Title:
JailbreakLens: Visual Analysis of Jailbreak Attacks Against Large Language Models
Source:
IEEE Transactions on Visualization and Computer Graphics. 31:8668-8682
Publication Status:
Preprint
Publisher Information:
Institute of Electrical and Electronics Engineers (IEEE), 2025.
Publication Year:
2025
Document Type:
Academic journal Article
ISSN:
2160-9306
1077-2626
DOI:
10.1109/tvcg.2025.3575694
DOI:
10.48550/arxiv.2404.08793
Rights:
IEEE Copyright
CC BY
Accession Number:
edsair.doi.dedup.....6ce1cf25af91adb0cf673e70e853b8e2
Database:
OpenAIRE

Further Information

The proliferation of large language models (LLMs) has underscored concerns regarding their security vulnerabilities, notably against jailbreak attacks, where adversaries design jailbreak prompts to circumvent safety mechanisms for potential misuse. Addressing these concerns necessitates a comprehensive analysis of jailbreak prompts to evaluate LLMs' defensive capabilities and identify potential weaknesses. However, the complexity of evaluating jailbreak performance and understanding prompt characteristics makes this analysis laborious. We collaborate with domain experts to characterize problems and propose an LLM-assisted framework to streamline the analysis process. It provides automatic jailbreak assessment to facilitate performance evaluation and support analysis of components and keywords in prompts. Based on the framework, we design JailbreakLens, a visual analysis system that enables users to explore the jailbreak performance against the target model, conduct multi-level analysis of prompt characteristics, and refine prompt instances to verify findings. Through a case study, technical evaluations, and expert interviews, we demonstrate our system's effectiveness in helping users evaluate model security and identify model weaknesses.