Result: Algorithmic analysis of a basic evolutionary algorithm for continuous optimization

Title:
Algorithmic analysis of a basic evolutionary algorithm for continuous optimization
Source:
Automata, languages and programming (ICALP 2003). Theoretical computer science, 379(3):329-347
Publisher Information:
Amsterdam: Elsevier, 2007.
Publication Year:
2007
Physical Description:
print, 18 references
Original Material:
INIST-CNRS
Subject Terms:
Computer science, Exact sciences and technology, Sciences and techniques of general use, Mathematics, Mathematical analysis, Calculus of variations and optimal control, Numerical analysis. Scientific computation, Numerical analysis, Numerical methods in mathematical programming, optimization and calculus of variations, Applied sciences, Computer science; control theory; systems, Theoretical computing, Algorithmics. Computability. Computer arithmetics, Miscellaneous, Evolutionary algorithm, Algorithmics, Approximation, Convergence, Distance, Approximation error, Error estimation, Function evaluation, Step function, Computer theory, Minimization, Mutation, Optimization method, Optimization, Optimum, Strategy, Validation, Vector, 49XX, 65Kxx, 68Wxx, Continuous optimization, Evolutionary algorithms, Runtime analysis
Document Type:
Conference Paper
File Description:
text
Language:
English
Author Affiliations:
Department of Computer Science 2, Dortmund University, 44221 Dortmund, Germany
ISSN:
0304-3975
Rights:
Copyright 2008 INIST-CNRS
CC BY 4.0
Unless otherwise stated above, the content of this bibliographic record may be used under a CC BY 4.0 licence by Inist-CNRS.
Notes:
Computer science; control theory; systems

Mathematics
Accession Number:
edscal.18821596
Database:
PASCAL Archive

Further Information

In practical optimization, applying evolutionary algorithms has nearly become a matter of course. Their theoretical analysis, however, lags far behind practice. So far, theorems on the runtime are limited to discrete search spaces; results for continuous search spaces are limited to convergence theory or even rely on validation by experiments, which is unsatisfactory from a theoretical point of view. The simplest, or most basic, evolutionary algorithms use a population consisting of a single individual and use random mutations as the only search operator. Here the so-called (1+1) evolution strategy for minimization in ℝ^n is investigated when it uses isotropically distributed mutation vectors. In particular, Gaussian mutations are analyzed when the 1/5-rule is used for their adaptation. Obviously, a reasonable analysis must respect the function to be minimized, and furthermore, the runtime must be measured with respect to the approximation error. A first algorithmic analysis of how the runtime depends on n, the dimension of the search space, is presented. The analysis covers all unimodal functions that are monotone with respect to the distance from the optimum. It turns out that, in the scenario considered, Gaussian mutations in combination with the 1/5-rule indeed ensure asymptotically optimal runtime: Θ(n) steps/function evaluations are necessary and sufficient to halve the approximation error.
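
For orientation, the following is a minimal Python sketch of the kind of algorithm the abstract refers to: a (1+1) evolution strategy with isotropic Gaussian mutations and 1/5-rule step-size adaptation. The sphere objective, the adaptation factors 2 and 1/2, and the observation window of n steps are illustrative assumptions for this sketch, not details taken from the paper.

import numpy as np

def sphere(x):
    # Example objective: unimodal and monotone in the distance to the optimum (assumed test function).
    return float(np.dot(x, x))

def one_plus_one_es(f, x0, sigma0=1.0, iterations=10000, window=None):
    x = np.asarray(x0, dtype=float)
    n = x.size
    window = window if window is not None else n   # adapt sigma every n steps (assumed choice)
    fx = f(x)
    sigma = sigma0
    successes = 0
    for t in range(1, iterations + 1):
        y = x + sigma * np.random.standard_normal(n)   # isotropic Gaussian mutation
        fy = f(y)
        if fy <= fx:                                   # elitist (1+1) selection
            x, fx = y, fy
            successes += 1
        if t % window == 0:
            # 1/5-rule: enlarge sigma if more than 1/5 of the observed mutations
            # were successful, otherwise shrink it (factors 2 and 1/2 are assumed).
            sigma = sigma * 2.0 if successes > window / 5 else sigma * 0.5
            successes = 0
    return x, fx

xbest, fbest = one_plus_one_es(sphere, np.full(20, 10.0))
print(fbest)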