Element-Wise Multiplication of Tensor Trains.
We present the tensor train multiplication (TTM) algorithm for the elementwise multiplication of two tensor trains with bond dimension χ. The computational complexity and memory requirements of the TTM algorithm scale as χ³ and χ², respectively. This represents a significant improvement compared with the conventional approach, where the computational complexity scales as χ⁴ and memory requirements scale as χ³. We benchmark the TTM algorithm using flows obtained from artificial turbulence generation and numerically demonstrate its improved runtime and memory scaling compared with the conventional approach. Owing to its dramatic improvement in memory scaling, the TTM algorithm paves the way toward GPU-accelerated tensor network simulations of computational fluid dynamics problems with large bond dimensions.
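For context, the sketch below illustrates the conventional elementwise (Hadamard) product of two tensor trains that the abstract compares against: each core of the product is built from Kronecker products of the corresponding input cores, so the bond dimension of the result grows from χ to χ². This is not the paper's TTM algorithm; the core layout (r_left, n, r_right), function names, and the small verification are illustrative assumptions.

```python
# Minimal NumPy sketch of the *conventional* elementwise (Hadamard) product of two
# tensor trains. Each result core is a slice-wise Kronecker product of the input
# cores, so the bond dimension squares (chi -> chi**2). Not the TTM algorithm of the
# paper; names and shapes are illustrative assumptions.
import numpy as np

def tt_random(phys_dims, chi, rng):
    """Random tensor train: list of cores with shape (r_left, n, r_right)."""
    ranks = [1] + [chi] * (len(phys_dims) - 1) + [1]
    return [rng.standard_normal((ranks[k], n, ranks[k + 1]))
            for k, n in enumerate(phys_dims)]

def tt_hadamard_conventional(A, B):
    """Elementwise product of two tensor trains via core-wise Kronecker products."""
    cores = []
    for a, b in zip(A, B):
        rl, n, rr = a.shape[0] * b.shape[0], a.shape[1], a.shape[2] * b.shape[2]
        c = np.empty((rl, n, rr))
        for i in range(n):
            # Kronecker product of the matrices belonging to physical index i
            c[:, i, :] = np.kron(a[:, i, :], b[:, i, :])
        cores.append(c)
    return cores

def tt_to_full(cores):
    """Contract a tensor train into the full tensor (small sizes only, for checking)."""
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

rng = np.random.default_rng(0)
A = tt_random([4, 4, 4], chi=3, rng=rng)
B = tt_random([4, 4, 4], chi=3, rng=rng)
C = tt_hadamard_conventional(A, B)
print([c.shape for c in C])  # bond dimension grows to 3 * 3 = 9
assert np.allclose(tt_to_full(C), tt_to_full(A) * tt_to_full(B))
```

The squared bond dimension of the product is precisely what makes the conventional approach costly in time and memory; the abstract's stated contribution is an algorithm whose cost and memory grow more slowly with χ.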