
Title:
Translating Numerical Concepts for PDEs into Neural Architectures
Publication Year:
2021
Collection:
Computer Science
Mathematics
Document Type:
Report
Working Paper
Accession Number:
edsarx.2103.15419
Database:
arXiv

Abstract:

We investigate what can be learned from translating numerical algorithms into neural networks. On the numerical side, we consider explicit, accelerated explicit, and implicit schemes for a general higher order nonlinear diffusion equation in 1D, as well as linear multigrid methods. On the neural network side, we identify corresponding concepts in terms of residual networks (ResNets), recurrent networks, and U-nets. These connections guarantee Euclidean stability of specific ResNets with a transposed convolution layer structure in each block. We present three numerical justifications for skip connections: as time discretisations in explicit schemes, as extrapolation mechanisms for accelerating those methods, and as recurrent connections in fixed point solvers for implicit schemes. Last but not least, we also motivate uncommon design choices such as nonmonotone activation functions. Our findings give a numerical perspective on the success of modern neural network architectures, and they provide design criteria for stable networks.
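One of the correspondences described in the abstract — an explicit time discretisation acting as a skip connection — can be illustrated with a minimal sketch. Below, a forward Euler step for linear 1D diffusion is written in the residual form "identity plus update", mirroring a ResNet block; the function name, time step `tau`, and boundary handling are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def explicit_diffusion_step(u, tau=0.2):
    """One forward Euler step u_new = u + tau * Laplacian(u) for 1D diffusion.

    The additive structure (identity plus residual update) is the numerical
    counterpart of a ResNet skip connection. tau <= 0.5 keeps the explicit
    scheme stable for the unit-spacing discrete Laplacian.
    """
    # Discrete Laplacian with reflecting (Neumann) boundary conditions:
    lap = np.empty_like(u)
    lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]
    lap[0] = u[1] - u[0]
    lap[-1] = u[-2] - u[-1]
    # Skip connection (identity) plus the residual update:
    return u + tau * lap

# A delta signal diffuses over repeated residual steps; the mean is preserved
# and the maximum decays, as expected for a stable diffusion scheme.
u = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
for _ in range(50):
    u = explicit_diffusion_step(u)
```

Stacking such steps gives a deep residual network whose stability follows from the time-step restriction of the underlying explicit scheme — the kind of guarantee the abstract refers to.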
Comment: In A. Elmoataz, J. Fadili, Y. Quéau, J. Rabin, L. Simon (Eds.): Scale Space and Variational Methods in Computer Vision. Lecture Notes in Computer Science, Vol. 12679, Springer, Cham, pp. 294-306, 2021