Kernelized vector quantization in gradient-descent learning : Advances in Self-Organizing Maps
CC BY 4.0
Unless otherwise stated above, the content of this bibliographic record may be used under a CC BY 4.0 licence by Inist-CNRS.
Prototype-based vector quantization is usually performed in the Euclidean data space. In recent years, non-standard metrics have also become popular. For classification by support vector machines, Hilbert space representations based on so-called kernel metrics have proven very successful. In this paper we show that gradient-based learning in prototype-based vector quantization is possible with kernel metrics in place of the standard Euclidean distance. We show that an appropriate treatment requires differentiable universal kernels defining the feature space metric. This allows prototype adaptation in the original data space, but equipped with a metric determined by the kernel and therefore isomorphic to the respective reproducing kernel Hilbert space. At the same time, the approach avoids the explicit Hilbert space representation used by support vector machines. We give the mathematical justification for the isomorphism and demonstrate the abilities and usefulness of the approach on several examples, including both artificial and real-world datasets.
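The idea described in the abstract can be illustrated with a minimal sketch: prototypes live in the original data space, but the winner selection and the gradient step use a kernel-induced distance d_k²(x, w) = k(x,x) − 2 k(x,w) + k(w,w) instead of the squared Euclidean distance. The sketch below assumes a Gaussian (RBF) kernel, for which k(x,x) = k(w,w) = 1; all function names, parameters, and the learning-rate/width values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hedged sketch of gradient-based vector quantization with a
# differentiable Gaussian (RBF) kernel metric. Not the paper's exact
# algorithm; a simple winner-take-all online variant for illustration.

def gaussian_kernel(x, w, sigma=1.0):
    """RBF kernel k(x, w) = exp(-||x - w||^2 / (2 sigma^2))."""
    diff = x - w
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

def kernel_dist_sq(x, w, sigma=1.0):
    """Squared kernel distance d_k^2 = k(x,x) - 2 k(x,w) + k(w,w).
    For the RBF kernel, k(x,x) = k(w,w) = 1, so d_k^2 = 2 - 2 k(x,w)."""
    return 2.0 - 2.0 * gaussian_kernel(x, w, sigma)

def train_kernel_vq(data, n_prototypes=2, sigma=1.0, lr=0.1,
                    epochs=50, seed=0):
    """Online gradient descent on the kernel distance.

    Prototypes are adapted in the original data space: the gradient of
    d_k^2 w.r.t. the winning prototype w is -2 k(x,w) (x - w) / sigma^2
    (the k(w,w) term is constant for the RBF kernel), so the descent
    step moves w toward x, weighted by the kernel value k(x, w).
    """
    rng = np.random.default_rng(seed)
    protos = data[rng.choice(len(data), n_prototypes,
                             replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Winner: prototype closest in the kernel-induced metric.
            dists = [kernel_dist_sq(x, w, sigma) for w in protos]
            j = int(np.argmin(dists))
            w = protos[j]
            # Gradient step on d_k^2 for the winner only.
            protos[j] = w + lr * gaussian_kernel(x, w, sigma) \
                * (x - w) / sigma ** 2
    return protos
```

Note that for a very wide kernel (large sigma) the factor k(x, w) approaches 1 and the update reduces to standard online vector quantization in the Euclidean metric, which is consistent with the kernel metric deforming, rather than replacing, the data-space geometry.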