Comparison of scalable distributed algorithms for computing the kNNG on multi-GPU systems
Many applications require finding a dataset's k-Nearest Neighbors Graph (kNNG), a structure that is crucial for Machine Learning tasks such as clustering and anomaly detection. However, computing it can be costly due to the complexity of finding all k nearest neighbors of every data point. To address this issue, scalable approximate algorithms have been proposed that speed up kNNG construction while maintaining its quality. This paper presents an adaptation of NNDescent to multi-GPU systems and an experimental comparison of distributed and parallel approximate kNNG algorithms on GPUs, assessing their scalability, computational cost, and solution quality. Our goal is to identify the most efficient method without significant accuracy loss, enabling faster techniques and the handling of large datasets.
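For orientation only, the sketch below illustrates the two quantities the abstract refers to: the exact kNNG, whose brute-force construction costs O(n^2 * d) distance evaluations and thus motivates approximate GPU-parallel methods such as NNDescent, and the recall measure commonly used to judge how closely an approximate graph matches the exact one. This is not the paper's implementation; the function names (knn_graph_bruteforce, recall_at_k) and the single-CPU NumPy setting are our own illustrative assumptions.

```python
# Illustrative sketch, not the paper's code: exact kNNG by brute force and
# the recall metric used to assess an approximate kNNG's quality.
import numpy as np


def knn_graph_bruteforce(points: np.ndarray, k: int) -> np.ndarray:
    """Exact kNNG: for each point, the indices of its k nearest neighbors.

    Requires O(n^2 * d) distance evaluations, which is why scalable
    approximate, GPU-parallel methods are attractive for large datasets.
    """
    # Squared Euclidean distances between all pairs of points.
    sq_norms = (points ** 2).sum(axis=1)
    dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * points @ points.T
    np.fill_diagonal(dists, np.inf)          # a point is not its own neighbor
    return np.argsort(dists, axis=1)[:, :k]  # n x k matrix of neighbor indices


def recall_at_k(exact: np.ndarray, approx: np.ndarray) -> float:
    """Fraction of true k nearest neighbors recovered by an approximate kNNG."""
    hits = sum(len(set(e) & set(a)) for e, a in zip(exact, approx))
    return hits / exact.size


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.standard_normal((1000, 16)).astype(np.float32)
    exact = knn_graph_bruteforce(data, k=10)
    # Stand-in "approximation": comparing the exact graph against itself
    # yields recall 1.0; an approximate method would typically score below 1.
    print(f"recall@10 = {recall_at_k(exact, exact):.3f}")
```

In the experimental comparison described above, an approximate method is considered acceptable when it reaches high recall against the exact graph at a fraction of the brute-force cost.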