Structural learning of neural network for continuous valued output: Effect of penalty term to hidden units
CC BY 4.0
Unless otherwise stated above, the content of this bibliographic record may be used under a CC BY 4.0 licence by Inist-CNRS.
Multilayer feedforward networks trained with backpropagation are widely used for function approximation, but the learned networks rarely reveal the input-output relationship explicitly. Structural learning methods have been proposed to optimize the network topology and to make its internal behaviour interpretable. Structural learning approaches for optimizing and interpreting neural networks, such as structural learning with forgetting (SLF) and fast integration learning (FIL), have proved effective for problems with binary outputs. In this work, a new structural learning method based on modifications of SLF and FIL is proposed for problems with continuous valued outputs. The effectiveness of the proposed method is demonstrated through simulation experiments with continuous valued functions.
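The abstract refers to structural learning with forgetting, which augments the ordinary backpropagation error with a penalty term that drives redundant weights toward zero. The sketch below (not the authors' implementation) illustrates that idea for a continuous valued regression target: a small NumPy multilayer perceptron trained on squared error plus an L1 "forgetting" penalty. The network size, learning rate, penalty coefficient, and target function are illustrative assumptions.

    # Minimal sketch of structural learning with forgetting (SLF):
    # backpropagation plus an L1 "forgetting" penalty on the weights,
    # applied here to a continuous valued regression task.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy continuous valued task: approximate y = sin(x) on [-pi, pi].
    X = rng.uniform(-np.pi, np.pi, size=(200, 1))
    Y = np.sin(X)

    n_hidden = 10                       # assumed hidden-layer size
    W1 = rng.normal(0, 0.5, size=(1, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, size=(n_hidden, 1))
    b2 = np.zeros(1)

    lr = 0.1                            # learning rate (assumed)
    lam = 1e-4                          # forgetting (L1) coefficient (assumed)

    for epoch in range(5000):
        # Forward pass: tanh hidden layer, linear output for continuous values.
        H = np.tanh(X @ W1 + b1)
        out = H @ W2 + b2

        # Objective: squared error plus the L1 "forgetting" term on all weights.
        err = out - Y
        loss = 0.5 * np.mean(err ** 2) + lam * (np.abs(W1).sum() + np.abs(W2).sum())

        # Backward pass; the penalty contributes lam * sign(W) to each weight gradient.
        d_out = err / len(X)
        dW2 = H.T @ d_out + lam * np.sign(W2)
        db2 = d_out.sum(axis=0)
        d_hidden = (d_out @ W2.T) * (1 - H ** 2)
        dW1 = X.T @ d_hidden + lam * np.sign(W1)
        db1 = d_hidden.sum(axis=0)

        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    # Weights that are not needed to fit the data decay toward zero.
    print("near-zero hidden-to-output weights:", int((np.abs(W2) < 1e-2).sum()))

Because the penalty pushes unneeded connections toward zero, pruning the near-zero weights after training leaves a smaller skeleton network whose input-output relationship is easier to read, which is the interpretability benefit the abstract alludes to.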