Title:
Self-organizing maps with information theoretic learning : Advances in Self-Organizing Maps
Source:
Neurocomputing (Amsterdam). 147:3-14
Publisher Information:
Amsterdam: Elsevier, 2015.
Publication Year:
2015
Physical Description:
print; 37 references
Original Material:
INIST-CNRS
Subject Terms:
Cognition, Computer science, Exact sciences and technology, Applied sciences, Computer science; control theory; systems, Software, Memory organisation. Data processing, Data processing. List processing. Character string processing, Artificial intelligence, Learning and adaptive systems, Connectionism. Neural networks, Telecommunications and information theory, Information, signal and communications theory, Information theory, Kohonen algorithm, Learning algorithm, Cluster analysis, Data analysis, Probabilistic approach, Self organization, Algorithm complexity, Computational complexity, Entropy, Mean square error, Kernel function, Potential function, Data mining, Magnification, Heteroscedasticity, Bandwidth, Probability distribution, Modeling, Statistical moment, Kernel method, Metric, Gaussian process, Computer integrated manufacturing, Pattern recognition, Neural network, Similarity, Oversampling, Information value, Data visualization, Information theoretic learning, Kernel methods, Magnification factor, SOM
Document Type:
Academic journal article
File Description:
text
Language:
English
Author Affiliations:
Computational NeuroEngineering Lab, University of Florida, Gainesville, FL 32608, United States
ISSN:
0925-2312
Rights:
Copyright 2015 INIST-CNRS
CC BY 4.0
Unless otherwise stated above, the content of this bibliographic record may be used under a CC BY 4.0 licence by Inist-CNRS.
Notes:
Computer science; theoretical automation; systems

Telecommunications and information theory
Accession Number:
edscal.28836728
Database:
PASCAL Archive

Abstract:

The self-organizing map (SOM) is one of the most popular clustering and data visualization algorithms and has evolved into a useful tool in pattern recognition and data mining since it was first introduced by Kohonen. However, the magnification factor of such mappings deviates from the information-theoretically optimal value of 1 (for the SOM it is 2/3). This can be attributed to the use of the mean square error to adapt the system, which distorts the mapping by oversampling the low-probability regions. In this work, we first discuss the kernel SOM in terms of a similarity measure called the correntropy induced metric (CIM) and empirically show that this can enhance the magnification of the mapping without much increase in the computational complexity of the algorithm. We also show that adapting the SOM in the CIM sense is equivalent to reducing the localized cross information potential, an information-theoretic function that quantifies the similarity between two probability distributions. Using this property, we propose a kernel bandwidth adaptation algorithm for Gaussian kernels, with both homoscedastic and heteroscedastic components. We show that the proposed model can achieve a mapping with optimal magnification and can automatically adapt the parameters of the kernel function.
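
As a rough illustration of the idea summarized in the abstract, the sketch below shows how a single SOM update step might look when the winner is selected and the prototypes are adapted under a correntropy induced metric with a Gaussian kernel, instead of the mean square error. This is a minimal sketch under assumed conventions, not the authors' algorithm: the function names (gaussian_kernel, cim, som_cim_step), the fixed learning-rate and neighbourhood parameters, and the exact update rule are illustrative assumptions, and the paper's homoscedastic/heteroscedastic bandwidth adaptation is not included.

```python
import numpy as np

def gaussian_kernel(diff, sigma):
    """Gaussian kernel G_sigma(e) = exp(-||e||^2 / (2 sigma^2))."""
    return np.exp(-np.sum(diff ** 2, axis=-1) / (2.0 * sigma ** 2))

def cim(x, w, sigma):
    """Correntropy induced metric between sample x and each prototype row of w.

    CIM(x, w_i) = sqrt(G_sigma(0) - G_sigma(x - w_i)); G_sigma(0) = 1 here.
    """
    return np.sqrt(1.0 - gaussian_kernel(x - w, sigma))

def som_cim_step(x, weights, grid, sigma, lr, nbr_width):
    """One hypothetical SOM update step using the CIM instead of the MSE.

    x         : input sample, shape (d,)
    weights   : prototype matrix, shape (n_units, d)
    grid      : lattice coordinates of the units, shape (n_units, 2)
    sigma     : kernel bandwidth of the CIM (assumed fixed here)
    lr        : learning rate
    nbr_width : neighbourhood width on the lattice
    """
    # Winner = unit closest to x in the CIM sense (not Euclidean distance).
    winner = np.argmin(cim(x, weights, sigma))

    # Standard SOM lattice neighbourhood around the winner.
    lattice_dist2 = np.sum((grid - grid[winner]) ** 2, axis=1)
    h = np.exp(-lattice_dist2 / (2.0 * nbr_width ** 2))

    # The gradient of CIM^2 weights the error by G_sigma(x - w_i), which
    # damps the pull of low-probability (outlying) samples, unlike plain MSE.
    err = x - weights
    g = gaussian_kernel(err, sigma)[:, None]
    weights += lr * h[:, None] * g * err
    return weights, winner

# Hypothetical usage: a 10x10 map trained on random 2-D data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.random((100, 2))
    grid = np.array([[i, j] for i in range(10) for j in range(10)], dtype=float)
    for x in rng.random((1000, 2)):
        weights, _ = som_cim_step(x, weights, grid, sigma=0.5, lr=0.1, nbr_width=2.0)
```

In this sketch the only change from a conventional SOM is the Gaussian weighting of the error term and the CIM-based winner selection, which is one plausible way to read the abstract's claim that CIM adaptation adds little computational overhead over the standard rule.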