Title:
Model selection of extreme learning machine based on multi-objective optimization : Extreme Learning Machines Theory & Applications
Source:
Neural computing & applications (Print). 22(3-4):521-529
Publisher Information:
London: Springer, 2013.
Publication Year:
2013
Physical Description:
print, 24 ref
Original Material:
INIST-CNRS
Subject Terms:
Computer science, Neurology, Exact sciences and technology, Sciences and techniques of general use, Mathematics, Mathematical analysis, Calculus of variations and optimal control, Probability and statistics, Statistics, Linear inference, regression, Numerical analysis. Scientific computation, Numerical analysis, Numerical methods in mathematical programming, optimization and calculus of variations, Numerical methods in optimization and calculus of variations, Applied sciences, Computer science; control theory; systems, Artificial intelligence, Learning and adaptive systems, Fitting, Learning algorithm, Neural computation, Classification, Biased estimation, Error estimation, Gradient method, Optimization method, Neuron, Nodes, Optimization, Propagation, Statistical regression, Feedforward neural nets, Neural network, Model selection, 49XX, 62H30, 62Jxx, 65K10, 65Kxx, Machine learning, Hidden layer, Selection method, Optimal weight, Extreme learning machine, Leave-one-out error bound, Multi-objective optimization
Document Type:
Journal Article
File Description:
text
Language:
English
Author Affiliations:
College of Computer and Information Technology, Henan Normal University, Xinxiang 453007, Henan, China
Management Institute, Xinxiang Medical University, Xinxiang 453003, Henan, China
ISSN:
0941-0643
Rights:
Copyright 2014 INIST-CNRS
CC BY 4.0
Unless otherwise stated above, the content of this bibliographic record may be used under a CC BY 4.0 licence by Inist-CNRS.
Notes:
Computer science; theoretical automation; systems

Mathematics
Accession Number:
edscal.27659232
Database:
PASCAL Archive

Further Information

As a novel learning algorithm for single-hidden-layer feedforward neural networks, extreme learning machines (ELMs) have been a promising tool for regression and classification applications. However, it is not trivial for ELMs to find the proper number of hidden neurons, due to the non-optimal input weights and hidden biases. In this paper, a new model selection method for ELMs based on multi-objective optimization is proposed to obtain compact networks with good generalization ability. First, a new leave-one-out (LOO) error bound of the ELM is derived, which can be calculated with negligible computational cost once ELM training is finished. Furthermore, hidden nodes are added to the network one by one, and at each step a multi-objective optimization algorithm is used to select optimal input weights by simultaneously minimizing this LOO bound and the norm of the output weights, in order to avoid over-fitting. Experiments on five UCI regression data sets demonstrate that the proposed algorithm generally obtains better generalization performance with a more compact network than the conventional gradient-based back-propagation method, the original ELM, and the evolutionary ELM.
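The two ingredients the abstract describes — least-squares ELM training and a cheap closed-form LOO quantity available once training is done — can be sketched as follows. This is a generic illustration, not the paper's exact algorithm: the sigmoid activation, the small ridge term `reg`, and the function names are assumptions, and the PRESS statistic shown is the classical closed-form LOO residual for (ridge) least squares, which the paper's own LOO bound may refine.

```python
import numpy as np


def elm_train(X, y, n_hidden, reg=1e-3, seed=None):
    """Basic ELM: random input weights/biases, least-squares output weights.

    Illustrative sketch (not the paper's method); `reg` is a small ridge
    term assumed for numerical stability.
    """
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                   # sigmoid hidden layer output
    # Output weights solved in closed form (regularized least squares)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta


def loo_press_residuals(H, y, beta, reg=1e-3):
    """Closed-form leave-one-out residuals (PRESS) for the linear output layer.

    Costs little once H and beta are known: e_i / (1 - h_ii), where h_ii is
    the i-th diagonal entry of the hat matrix H (H'H + reg I)^{-1} H'.
    """
    A = np.linalg.inv(H.T @ H + reg * np.eye(H.shape[1]))
    hat_diag = np.einsum('ij,jk,ik->i', H, A, H)  # diagonal of the hat matrix
    residuals = y - H @ beta
    return residuals / (1.0 - hat_diag)
```

The key point mirrored here is that the LOO quantity reuses matrices already formed during training, so model selection (e.g., comparing candidate input weights or hidden-layer sizes) needs no explicit retraining on n held-out splits.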