Title:
DMCFMDA: A dual-channel multi-source cross-modal fusion network with contrastive learning for microbe–disease prediction.
Authors:
Wu, Yangxiang (6233110043@stu.jiangnan.edu.cn); Hu, Mingyi (6213113059@stu.jiangnan.edu.cn); Zhu, Jinlin (wx_zjl@jiangnan.edu.cn)
Source:
Biomedical Signal Processing & Control, Dec 2025, Part B, Vol. 110.
Database:
Supplemental Index

Abstract:
Microbial dysbiosis is linked to various diseases, making the prediction of microbe–disease associations crucial for precision medicine. Existing computational methods often oversimplify feature integration and fail to capture complex dual-modal relationships, limiting prediction accuracy. To overcome these challenges, we propose DMCFMDA, a Dual-Channel Multi-Source Cross-Modal Fusion model that takes three networks as input: the microbe–disease association network, the microbe similarity network, and the disease similarity network. DMCFMDA uses a TransGAT module to extract the similarity features of microbes and diseases; this module integrates a Graph Transformer (GT) to capture long-range dependencies within the similarity networks and employs a multi-head Graph Attention Network (GAT) for fine-grained feature refinement. In parallel, a multi-head GAT extracts microbe and disease network features from the microbe–disease association network. DMCFMDA then performs dual-modal adaptive fusion of the extracted similarity and network features while integrating cross-modal contrastive learning. This fusion not only reduces feature redundancy but also enhances the complementarity between features, yielding more comprehensive and robust microbe and disease embeddings. Residual connections prevent over-smoothing and stabilize deep representation learning. Benchmarking on three datasets shows that DMCFMDA outperforms several leading approaches in predicting microbe–disease interactions. These findings underscore DMCFMDA's robustness and its potential to elucidate complex microbe–disease interactions, driving innovative discoveries in biomedical science. Python code and datasets are available at: https://github.com/wyxzp1213/DMCFMDA

Highlights:
• Innovative dual-channel cross-modal fusion framework for microbe–disease association prediction
• Advanced TransGAT module, combining GT and GAT to model global and local features
• Adaptive fusion and contrastive learning align multi-modal network representations
• DMCFMDA outperforms baselines and shows strong potential in bioinformatics research

[ABSTRACT FROM AUTHOR]
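The TransGAT channel described in the abstract (global Graph Transformer attention followed by neighbour-restricted multi-head attention with residual connections) can be sketched in plain PyTorch. The sketch below is a hypothetical reading of the abstract, not the authors' released implementation (see the GitHub link above); the class name TransGATSketch, the dense-adjacency input, and the added self-loops are assumptions.

```python
# Minimal sketch of a TransGAT-style channel (hypothetical reading of the
# abstract; the authors' released code may differ). Unmasked multi-head
# self-attention stands in for the Graph Transformer, and the same
# attention masked to graph neighbours stands in for the multi-head GAT.
import torch
import torch.nn as nn

class TransGATSketch(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (N, dim) node features of one similarity network
        # adj: (N, N)   binary adjacency of that network
        h = x.unsqueeze(0)                          # add batch dimension
        # 1) long-range dependencies: unmasked attention over all node pairs
        g, _ = self.global_attn(h, h, h)
        h = self.norm1(h + g)                       # residual connection
        # 2) fine-grained refinement: attention restricted to neighbours
        #    (self-loops added so every node attends to at least itself)
        eye = torch.eye(adj.size(0), device=adj.device, dtype=torch.bool)
        mask = ~(adj.bool() | eye)                  # True = blocked pair
        l, _ = self.local_attn(h, h, h, attn_mask=mask)
        h = self.norm2(h + l)                       # residual vs. over-smoothing
        return h.squeeze(0)
```

One instance of this channel would process the microbe similarity network and another the disease similarity network; the paper's parallel association-network channel uses a multi-head GAT in the same masked-attention spirit.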
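The dual-modal adaptive fusion and cross-modal contrastive learning can likewise be sketched. The gated fusion and the InfoNCE-style loss below are common formulations chosen for illustration, not the paper's confirmed objective; the names AdaptiveFusion and cross_modal_infonce and the temperature tau are assumptions.

```python
# Hedged sketch of dual-modal adaptive fusion plus a cross-modal
# contrastive objective (assumed gated fusion and InfoNCE-style loss;
# the paper's exact formulation may differ).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFusion(nn.Module):
    """Per-dimension gate weighting the similarity-channel embedding
    against the association-network-channel embedding of each node."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_sim: torch.Tensor, h_net: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([h_sim, h_net], dim=-1)))
        return g * h_sim + (1 - g) * h_net   # adaptive convex combination

def cross_modal_infonce(h_sim: torch.Tensor, h_net: torch.Tensor,
                        tau: float = 0.2) -> torch.Tensor:
    """InfoNCE-style loss: the two modal views of the same node are
    positives; views of all other nodes serve as negatives."""
    z1 = F.normalize(h_sim, dim=-1)
    z2 = F.normalize(h_net, dim=-1)
    logits = z1 @ z2.t() / tau               # (N, N) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    # symmetrise over both alignment directions
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))
```

Under this reading, the contrastive term pulls each node's two modal embeddings together before (or while) the gate blends them, which matches the abstract's claim that fusion reduces redundancy while preserving complementary information.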