High-dimensional approximation using tree tensor networks under constraints
In this thesis, we address high-dimensional approximation problems based on tree tensor networks under additional constraints. Motivated by the approximation of high-dimensional discrete probability distributions, we consider tree tensor network factorizations whose individual factors are themselves non-negative tensors or, more generally, tensors over real convex cones, and which represent a tensor satisfying an additional equality constraint. This allows the non-negativity and the total probability mass of the original distribution to be preserved by the approximate distribution. Tree tensor network factorizations are typically considered for tensors over (real) Hilbert spaces; we therefore introduce tensors, tree tensor networks, and the associated factorizations also over finite-dimensional real convex cones.

Our goal is to transfer the concepts and properties from the algebraic case (over Hilbert spaces) to the conic case (over convex cones) and to explain their limitations. We use these concepts and properties to characterize the existence of conic tree tensor network factorizations and to analyze the difficulties that arise when dealing with conic rather than algebraic factorizations. A crucial result is that the existence of conic factorizations depends not only on the target tensor, the tree, and the factorization sizes, but also on the explicit choice of the individual factors in the network, since these factors are interdependent.

To solve high-dimensional approximation problems using conic tree tensor network factorizations under constraints, we focus on alternating least squares methods. These methods separate a high-dimensional multilinear problem over all factors (with additional constraints on those factors) into a sequence of lower-dimensional linear problems in each factor individually (with additional constraints on that factor), which are then solved in an alternating fashion. An important quantity associated with such alternating methods is the expressivity in a given factor, which describes the set of tensors that can be represented when all factors except this one are fixed. Owing to the conic constraints on the factors, the conic expressivity in a factor, unlike the algebraic expressivity, depends on the choice of representative within the equivalence class of conic factorizations that represent the same tensor at the same (minimal) factorization sizes. This observation motivates us to develop a method that increases the conic expressivity in a factor while remaining in the same equivalence class. For non-negative tree tensor network factorizations, we derive such a method using inverse-non-negative exchange matrices between neighboring factors in the network.
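To illustrate how such an alternating scheme reduces a constrained multilinear problem to constrained linear subproblems, the following minimal sketch treats the simplest two-factor case, a non-negative matrix factorization T ≈ A B with A, B ≥ 0, solved by alternating non-negative least squares. The names (T, A, B, rank), the use of scipy.optimize.nnls, and the fixed iteration count are illustrative assumptions and are not taken from the thesis.

# Minimal sketch: alternating non-negative least squares for T ≈ A @ B, A, B >= 0.
# Illustration of the general idea only, not the thesis's algorithm.
import numpy as np
from scipy.optimize import nnls

def als_nonnegative(T, rank, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    m, n = T.shape
    A = rng.random((m, rank))   # non-negative initial factors
    B = rng.random((rank, n))
    for _ in range(n_iter):
        # Fix A: one constrained linear least squares problem per column of T.
        B = np.column_stack([nnls(A, T[:, j])[0] for j in range(n)])
        # Fix B: one constrained linear least squares problem per row of T.
        A = np.column_stack([nnls(B.T, T[i, :])[0] for i in range(m)]).T
    return A, B

# Usage: approximate a small non-negative matrix (e.g. an empirical joint
# distribution of two discrete variables) at rank 2.
T = np.random.default_rng(1).random((6, 5))
A, B = als_nonnegative(T, rank=2)
print(np.linalg.norm(T - A @ B))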
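The exchange-matrix idea can likewise be sketched in the two-factor case, where an invertible matrix G is used to move to a different representative of the same equivalence class. The notation below (factors A, B and exchange matrix G) is chosen purely for illustration and is not the thesis's own formulation:

\[
T = AB = (AG)\,(G^{-1}B), \qquad A \ge 0,\; B \ge 0,\; AG \ge 0,\; G^{-1}B \ge 0 \quad \text{(entrywise)}.
\]

If both AG and G^{-1}B remain entrywise non-negative, the pair (AG, G^{-1}B) is another non-negative factorization of the same tensor T at the same sizes; the set of tensors reachable by varying one factor alone (its conic expressivity) may nevertheless change, whereas in the unconstrained algebraic case it is unaffected by any invertible G.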
Dissertation, RWTH Aachen University, 2025. Aachen: RWTH Aachen University, 1 online resource: illustrations (2025). doi:10.18154/RWTH-2025-06451
Published by RWTH Aachen University, Aachen