Title:
An alternative way of presenting statistical test results when evaluating the performance of stochastic approaches
Source:
Neurocomputing (Amsterdam). 147:235-238
Publisher Information:
Amsterdam: Elsevier, 2015.
Publication Year:
2015
Physical Description:
print, 25 ref
Original Material:
INIST-CNRS
Subject Terms:
Cognition, Computer science, Exact sciences and technology, Applied sciences, Operational research. Management science, Operational research and scientific management, Flows in networks. Combinatorial problems, Computer science; control theory; systems, Theoretical computing, Algorithmics. Computability. Computer arithmetics, Randomized algorithm, Evolutionary algorithm, Statistical analysis, Probabilistic approach, Hasse diagram, Digraph, Acyclic graph, Directed graph, Combinatorial optimization, Partial ordering, Problem solving, Script, Statistical test, Directed acyclic graph, Optimization, Statistical tests, Stochastic algorithms
Document Type:
Academic Journal Article
File Description:
text
Language:
English
Author Affiliations:
USTC-Birmingham Joint Research Institute in Intelligent Computation and Its Applications (UBRI), University of Science and Technology of China, Hefei 230027, Anhui, China
School of Design, Communication and Information Technology, The University of Newcastle, Callaghan, NSW 2308, Australia
ISSN:
0925-2312
Rights:
Copyright 2015 INIST-CNRS
CC BY 4.0
Unless otherwise stated above, the content of this bibliographic record may be used under a CC BY 4.0 licence by Inist-CNRS.
Notes:
Computer science; control theory; systems

Operational research. Management
Accession Number:
edscal.28836746
Database:
PASCAL Archive

Abstract:

Stochastic approaches such as evolutionary algorithms have been widely used in various science and engineering problems. When comparing the performance of a set of stochastic algorithms, it is necessary to evaluate statistically which algorithms are the most suitable for solving a given problem. The outcome of statistical tests comparing N ≥ 2 processes, where N is the number of algorithms, is often presented in tables, which can become confusing for larger values of N. Such a scenario is, however, very common in both numerical and combinatorial optimization, as well as in the domain of stochastic algorithms in general. In this letter, we introduce an alternative way of visually presenting the results of statistical tests for multiple processes in a compact and easy-to-read manner, using a directed acyclic graph (DAG) in the form of a simplified Hasse diagram. The rationale for doing so is that the outcome of the tests is always at least a strict partial order, which can be appropriately presented via a DAG. The goal of this brief communication is to promote the use of this approach as a means of presenting the results of comparisons between different optimization methods.
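
Note: the construction described in the abstract can be sketched programmatically. The Python snippet below is a minimal illustration, not the authors' code: it assumes a hypothetical dictionary "beats" recording which pairwise statistical comparisons came out significant, builds the corresponding "significantly outperforms" digraph, and removes every edge implied by transitivity, leaving the simplified Hasse diagram of the strict partial order. The algorithm names A1-A4, the example relation, and the helper function "reachable" are illustrative assumptions.

# Minimal sketch: from pairwise "significantly better" outcomes to a Hasse diagram.
from itertools import permutations

# Hypothetical outcome of pairwise statistical tests on four algorithms:
# beats[(a, b)] is True when algorithm a is significantly better than b.
beats = {
    ("A1", "A2"): True, ("A1", "A3"): True, ("A1", "A4"): True,
    ("A2", "A4"): True, ("A3", "A4"): True,
}

algorithms = ["A1", "A2", "A3", "A4"]
edges = {(a, b) for a, b in permutations(algorithms, 2) if beats.get((a, b), False)}

def reachable(start, goal, edge_set):
    """Depth-first search: is there a directed path start -> ... -> goal?"""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        for u, v in edge_set:
            if u == node and v not in seen:
                if v == goal:
                    return True
                seen.add(v)
                stack.append(v)
    return False

# Transitive reduction: drop (a, b) if b is still reachable from a without that edge.
# The surviving edges are the covering relations, i.e. the simplified Hasse diagram.
hasse_edges = {(a, b) for (a, b) in edges if not reachable(a, b, edges - {(a, b)})}

print(sorted(hasse_edges))  # [('A1', 'A2'), ('A1', 'A3'), ('A2', 'A4'), ('A3', 'A4')]

The transitive-reduction step is what keeps the diagram compact: only the covering relations of the strict partial order are drawn, and every significant difference remains recoverable by following directed paths in the DAG.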