Title:
RTD-Lite: Scalable Topological Analysis for Comparing Weighted Graphs in Learning Tasks
Contributors:
Applied AI Institute, Université Paris Cité (UPCité), Institut de Mathématiques de Jussieu - Paris Rive Gauche (IMJ-PRG (UMR_7586)), Sorbonne Université (SU)-Centre National de la Recherche Scientifique (CNRS)-Université Paris Cité (UPCité), Centre National de la Recherche Scientifique (CNRS)
Source:
28th International Conference on Artificial Intelligence and Statistics, pp. 3826-3834
Publisher Information:
CCSD, 2025.
Publication Year:
2025
Collection:
collection:CNRS
collection:INSMI
collection:IMJ
collection:SORBONNE-UNIVERSITE
collection:SORBONNE-UNIV
collection:SU-SCIENCES
collection:UNIV-PARIS
collection:UNIVERSITE-PARIS
collection:UP-SCIENCES
collection:SU-TI
collection:ALLIANCE-SU
collection:SUPRA_MATHS_INFO
Original Identifier:
HAL: hal-05321777
Document Type:
Conference papers (conferenceObject)
Language:
English
Rights:
info:eu-repo/semantics/OpenAccess
Accession Number:
edshal.hal.05321777v1
Database:
HAL

Abstract:

Topological methods for comparing weighted graphs are valuable in various learning tasks but often suffer from computational inefficiency on large datasets. We introduce RTD-Lite, a scalable algorithm that efficiently compares topological features, specifically connectivity or cluster structures at arbitrary scales, of two weighted graphs with a one-to-one correspondence between vertices. Using minimal spanning trees in auxiliary graphs, RTD-Lite captures topological discrepancies with O(n²) time and memory complexity. This efficiency enables its application in tasks like dimensionality reduction and neural network training. Experiments on synthetic and real-world datasets demonstrate that RTD-Lite effectively identifies topological differences while significantly reducing computation time compared to existing methods. Moreover, integrating RTD-Lite into neural network training as a loss function component enhances the preservation of topological structures in learned representations.
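
To make the MST-based comparison described above concrete, here is a minimal Python sketch of a discrepancy in the same spirit. It is an illustration, not the paper's exact RTD-Lite definition: the auxiliary graph with edge weights min(w_A, w_B), the comparison of single-linkage merge scales, and the names mst_merge_scales and topo_discrepancy are all assumptions made for this sketch; only numpy and scipy's minimum_spanning_tree are relied on.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_merge_scales(weights):
    # Sorted edge weights of the minimum spanning tree. For a connected
    # graph these are the scales at which clusters merge under single
    # linkage, i.e., the connectivity structure at every scale.
    # NOTE: minimum_spanning_tree treats explicit zeros as missing
    # edges, so off-diagonal weights must be positive.
    mst = minimum_spanning_tree(weights)
    return np.sort(mst.data)

def topo_discrepancy(w_a, w_b):
    # Hypothetical RTD-Lite-style discrepancy between two weighted
    # graphs on the same vertex set (n x n symmetric weight matrices).
    # Auxiliary graph: edge-wise minimum of the two weight matrices.
    w_min = np.minimum(w_a, w_b)
    # Every auxiliary edge is no heavier than its counterpart in w_a
    # alone, so its k-th merge scale is no larger; both gap vectors are
    # nonnegative, and the discrepancy is zero iff the cluster merge
    # scales of the two graphs coincide.
    gap_a = mst_merge_scales(w_a) - mst_merge_scales(w_min)
    gap_b = mst_merge_scales(w_b) - mst_merge_scales(w_min)
    return float(gap_a.sum() + gap_b.sum())

# Example: distance graphs of two nearly identical point clouds.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 8))
y = x + 0.05 * rng.normal(size=(100, 8))
dist = lambda z: np.linalg.norm(z[:, None] - z[None, :], axis=-1)
print(topo_discrepancy(dist(x), dist(y)))  # small value for similar clouds

Note that scipy's dense MST step costs roughly O(n² log n) rather than the O(n²) reported in the abstract, which presumably relies on a specialized implementation, and using such a discrepancy as a training loss would require a differentiable framework such as PyTorch rather than this numpy sketch.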