Title:
Generalized neuron: Feedforward and recurrent architectures
Source:
Neural Networks. 22(7):1011-1017
Publisher Information:
Kidlington: Elsevier, 2009.
Publication Year:
2009
Physical Description:
print, 1/2 p
Original Material:
INIST-CNRS
Subject Terms:
Cognition, Electronics, Computer science, Neurology, Exact sciences and technology, Applied sciences, Computer science; control theory; systems, Artificial intelligence, Pattern recognition. Digital image processing. Computational geometry, Connectionism. Neural networks, Telecommunications and information theory, Information, signal and communications theory, Signal processing, Pattern recognition, Function approximation, Architecture, Biomimetics, Feedforward, Evolutionary computation, Pattern classification, Compact design, Density estimation, Nonlinear function, Resource management, Implementation, Density measurement, Multilayer perceptrons, Prediction, Relapse, Recurrence, Multilayer network, Hopfield neural nets, Feedforward neural nets, Recurrent neural nets, Neural network, Overexposure, Time series, Classification, Generalized neuron, Nonlinear function approximation, Particle swarm optimization (PSO), Recurrent generalized neuron
Document Type:
Journal Article
File Description:
text
Language:
English
Author Affiliations:
Real-Time Power and Intelligent Systems Laboratory, Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO, United States
ISSN:
0893-6080
Rights:
Copyright 2009 INIST-CNRS
CC BY 4.0
Unless otherwise stated above, the content of this bibliographic record may be used under a CC BY 4.0 Inist-CNRS licence.
Notes:
Computer science; control theory; systems

Telecommunications and information theory
Accession Number:
edscal.21973798
Database:
PASCAL Archive

Further Information

Feedforward neural networks such as multilayer perceptrons (MLPs) and recurrent neural networks are widely used for pattern classification, nonlinear function approximation, density estimation, and time series prediction. These tasks usually require a large number of neurons to be performed accurately, which makes MLPs less attractive for implementation on resource-constrained hardware platforms. This paper highlights the benefits of feedforward and recurrent forms of a compact neural architecture called the generalized neuron (GN). It demonstrates that the GN and the recurrent GN (RGN) perform well on classification, nonlinear function approximation, density estimation, and chaotic time series prediction. Owing to its two aggregation functions and two activation functions, the GN is resilient to the nonlinearities of complex problems. Particle swarm optimization (PSO) is proposed as the training algorithm for the GN and RGN. Because they have few trainable parameters, the GN and RGN require little memory and computation, which makes them attractive choices for fast implementation on resource-constrained hardware platforms.
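
The abstract describes the GN only at a high level (two aggregation functions, two activation functions, PSO training). As an illustrative sketch only, not the paper's implementation, the Python below assumes the GN form commonly described in the generalized-neuron literature: a sigma part (weighted sum passed through a sigmoid) and a pi part (weighted product passed through a Gaussian), blended by a single output weight, trained here with a bare-bones global-best PSO. The function names, parameter layout, and PSO coefficients are all assumptions.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def gn_forward(x, params):
        """Generalized neuron forward pass (hypothetical parameter layout).

        For n inputs, params = [w_sigma (n), b_sigma, w_pi (n), b_pi, w_out]:
        the sigma part aggregates by weighted sum and applies a sigmoid,
        the pi part aggregates by weighted product and applies a Gaussian,
        and w_out blends the two parts.
        """
        n = len(x)
        w_s, b_s = params[:n], params[n]
        w_p, b_p = params[n + 1:2 * n + 1], params[2 * n + 1]
        w_out = params[2 * n + 2]
        o_sigma = sigmoid(np.dot(w_s, x) + b_s)        # sum aggregation + sigmoid
        o_pi = np.exp(-(np.prod(w_p * x) + b_p) ** 2)  # product aggregation + Gaussian
        return w_out * o_sigma + (1.0 - w_out) * o_pi

    def pso_train(xs, ys, dim, n_particles=20, iters=200, seed=0):
        """Minimal global-best PSO minimizing mean squared error."""
        rng = np.random.default_rng(seed)
        pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
        vel = np.zeros_like(pos)

        def mse(p):
            preds = np.array([gn_forward(x, p) for x in xs])
            return float(np.mean((preds - ys) ** 2))

        pbest = pos.copy()
        pbest_err = np.array([mse(p) for p in pos])
        gbest = pbest[np.argmin(pbest_err)].copy()
        for _ in range(iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            # Standard velocity update: inertia + cognitive + social terms.
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel
            errs = np.array([mse(p) for p in pos])
            improved = errs < pbest_err
            pbest[improved], pbest_err[improved] = pos[improved], errs[improved]
            gbest = pbest[np.argmin(pbest_err)].copy()
        return gbest

    # Toy usage: fit the single GN to a 1-D nonlinear target in (0, 1).
    xs = [np.array([v]) for v in np.linspace(-1, 1, 20)]
    ys = np.array([0.5 * (np.sin(np.pi * v[0]) + 1.0) for v in xs])
    best = pso_train(xs, ys, dim=2 * 1 + 3)
    print("trained MSE:", np.mean([(gn_forward(x, best) - y) ** 2 for x, y in zip(xs, ys)]))

Note the parameter count: a GN with n inputs has only 2n + 3 trainable parameters in this layout, which is the source of the memory and computation savings the abstract claims relative to a multi-neuron MLP.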