Title:
Accelerating massive data processing in Python with Heat (Tutorial)
Publication Year:
2024
Collection:
German Aerospace Center: elib - DLR electronic library
Document Type:
Conference object
Language:
unknown
Relation:
Comito, Claudia and Hoppe, Fabian (2024) Accelerating massive data processing in Python with Heat (Tutorial). 4th Conference for Research Software Engineering in Germany (deRSE24), 2024-03-05 to 2024-03-07, Würzburg, Germany.
Accession Number:
edsbas.8EE40734
Database:
BASE

Further Information

Manipulating and processing massive data sets is challenging. For the vast majority of research communities, the standard approach is to set up Python pipelines that break the data into smaller chunks and analyze them piece by piece, a process that is both inefficient and error-prone. The problem is exacerbated on GPUs because of their smaller available memory. Popular solutions for distributing NumPy/SciPy computations are based on task parallelism, which introduces significant runtime overhead, complicates implementation, and often limits GPU support to specific vendors.

In this tutorial, we will show you an alternative based on data parallelism. The open-source library Heat [1] builds on PyTorch and mpi4py to simplify porting NumPy/SciPy-based code to GPUs (CUDA and ROCm, including multi-GPU, multi-node clusters). Under the hood, Heat distributes massive memory-intensive operations and algorithms via MPI communication, achieving significant speed-ups compared to task-distributed frameworks. On the surface, however, Heat implements a NumPy-like API, is largely interoperable with the Python array ecosystem, and can be employed seamlessly as a backend to accelerate existing single-CPU pipelines, as well as to develop new HPC applications from scratch.

You will get an overview of (illustrative code sketches follow below):

- Heat's basics: getting started with distributed I/O, the data decomposition scheme, and array operations
- Existing functionality: multi-node linear algebra, statistics, signal processing, and machine learning
- DIY how-to: using the existing Heat infrastructure to build your own multi-node, multi-GPU research software

We'll also touch upon Heat's implementation roadmap and possible paths to collaboration.
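To give a flavour of the first bullet point, here is a minimal sketch of Heat's decomposition scheme and NumPy-like operations, assuming a recent Heat release. The split parameter designates the axis along which an array is chunked across MPI processes; function names (ht.random.randn, ht.mean) are taken from the Heat documentation and may differ between versions. Launch under MPI, e.g. mpirun -n 4 python basics.py.

    # Minimal sketch: data decomposition and NumPy-like operations in Heat.
    import heat as ht

    # 10,000 x 1,000 standard-normal samples, split along axis 0:
    # each MPI process holds only its own block of rows.
    x = ht.random.randn(10000, 1000, split=0)

    # Operations look like NumPy but run on the process-local chunks,
    # with MPI communication only where a reduction crosses the split axis.
    col_means = ht.mean(x, axis=0)   # reduction across the split axis
    centered = x - col_means         # NumPy-style broadcasting
    total = centered.sum()           # global scalar, identical on every rank

    # On a GPU node, the same code runs on CUDA/ROCm devices, e.g.
    # x = ht.random.randn(10000, 1000, split=0, device="gpu")
    print(f"rank {x.comm.rank}/{x.comm.size}: local shape {x.lshape}")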
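Distributed I/O follows the same pattern. The sketch below assumes a hypothetical HDF5 file measurements.h5 containing a dataset named data (both placeholder names): ht.load reads each process's slice in parallel, and ht.resplit redistributes the array along a different axis when a downstream algorithm needs it.

    # Sketch of parallel I/O with a hypothetical HDF5 file and dataset.
    import heat as ht

    # Each process reads only its slice; split=0 chunks along the first axis.
    data = ht.load("measurements.h5", dataset="data", split=0)

    # Redistribute along another axis (this triggers MPI communication).
    data_cols = ht.resplit(data, axis=1)

    # Write the result back to disk in parallel.
    ht.save(data_cols, "resplit.h5", "data")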
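For the second bullet, Heat's machine-learning functionality exposes a scikit-learn-like interface. The following sketch runs distributed k-means on random data; the class and attribute names (heat.cluster.KMeans, cluster_centers_) reflect recent Heat releases and should be checked against your installed version.

    # Sketch of distributed k-means via Heat's scikit-learn-style API.
    import heat as ht
    from heat.cluster import KMeans

    # One million 16-dimensional samples, row-split across all processes.
    samples = ht.random.randn(1_000_000, 16, split=0)

    kmeans = KMeans(n_clusters=8, max_iter=100)
    kmeans.fit(samples)
    labels = kmeans.predict(samples)   # distributed cluster assignments

    print(f"centroids: {kmeans.cluster_centers_.shape}")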
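For the DIY part, each distributed array exposes its process-local PyTorch tensor and the MPI machinery Heat uses internally, so custom algorithms can mix Heat with raw torch and mpi4py code. This is a sketch under stated assumptions: the attribute name larray for the local tensor is taken from recent Heat releases and should be verified for your version.

    # Sketch: dropping below the array API for custom distributed logic.
    import heat as ht
    from mpi4py import MPI

    x = ht.arange(16, split=0, dtype=ht.float32)

    # The process-local torch.Tensor chunk of the distributed array.
    local = x.larray
    partial = float((local * local).sum())

    # Combine per-process partial results with a plain mpi4py collective.
    total = MPI.COMM_WORLD.allreduce(partial, op=MPI.SUM)
    global_norm = total ** 0.5

    # Cross-check against Heat's built-in distributed norm.
    assert abs(global_norm - ht.linalg.norm(x).item()) < 1e-4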