Title:
SparseNet: Coordinate Descent With Nonconvex Penalties
Source:
Journal of the American Statistical Association. 106(495):1125-1138
Publisher Information:
Alexandria, VA: American Statistical Association, 2011.
Publication Year:
2011
Physical Description:
print, 14 p.
Original Material:
INIST-CNRS
Subject Terms:
Mathematics, Exact sciences and technology, Sciences and techniques of general use, Mathematical analysis, Calculus of variations and optimal control, Probability and statistics, Statistics, General topics, Applications, Numerical analysis. Scientific computation, Numerical analysis, Numerical methods in mathematical programming, optimization and calculus of variations, Numerical methods in optimization and calculus of variations, Algorithm, Numerical linear algebra, Variational calculus, Convergence, Degrees of freedom, Threshold function, Linear model, Optimization method, Penalty method, Regularization method, Statistical method, Optimization, Algorithm performance, Ill-posed problem, Selection problem, Mathematical programming, Statistical regression, Regularization, Surface, Model selection, 49XX, 62Jxx, 65F22, 65J20, 65K10, 65Kxx, Variable selection, LASSO, Nonconvex optimization, Regularization surface, Sparse regression
Document Type:
Journal Article
File Description:
text
Language:
English
Author Affiliations:
Department of Statistics, Stanford University, Stanford, CA 94305, United States
Departments of Statistics and Health Research and Policy, Stanford University, Stanford, CA 94305, United States
ISSN:
0162-1459
Rights:
Copyright 2015 INIST-CNRS
CC BY 4.0
Unless otherwise stated above, the content of this bibliographic record may be used under a CC BY 4.0 licence by Inist-CNRS.
Notes:
Mathematics
Accession Number:
edscal.24734649
Database:
PASCAL Archive

Further Information

We address the problem of sparse selection in linear models. A number of nonconvex penalties have been proposed in the literature for this purpose, along with a variety of convex-relaxation algorithms for finding good solutions. In this article we pursue a coordinate-descent approach for optimization, and study its convergence properties. We characterize the properties of penalties suitable for this approach, study their corresponding threshold functions, and describe a df-standardizing reparametrization that assists our pathwise algorithm. The MC+ penalty is ideally suited to this task, and we use it to demonstrate the performance of our algorithm. Certain technical derivations and experiments related to this article are included in the Supplementary Materials section.
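To make the abstract concrete, here is a minimal, hypothetical sketch (not the authors' released software) of the kind of coordinate-descent update the article describes. For gamma > 1, the MC+ penalty is P(t; lambda, gamma) = lambda * (|t| - t^2 / (2 * gamma * lambda)) for |t| <= gamma * lambda, and gamma * lambda^2 / 2 otherwise; its threshold function has the closed form used below. The function names mcplus_threshold and sparsenet_cd, the unit-norm standardization of the columns of X, and the stopping tolerance are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch of coordinate descent with the MC+ threshold function.
# Assumes the columns of X are standardized to unit L2 norm.
import numpy as np

def mcplus_threshold(z, lam, gamma):
    """Closed-form MC+ threshold for gamma > 1.

    Solves min_b 0.5 * (b - z)^2 + P(|b|; lam, gamma) for the MC+ penalty.
    """
    az = abs(z)
    if az <= lam:
        return 0.0                                  # coefficient set exactly to zero
    if az <= gamma * lam:
        return np.sign(z) * (az - lam) / (1.0 - 1.0 / gamma)
    return z                                        # unpenalized (unbiased) region

def sparsenet_cd(X, y, lam, gamma, n_iter=100, tol=1e-8):
    """Cyclical coordinate descent for 0.5 * ||y - X b||^2 + sum_j P(|b_j|)."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta                                # current residual
    for _ in range(n_iter):
        max_delta = 0.0
        for j in range(p):
            zj = X[:, j] @ r + beta[j]              # univariate least-squares coefficient
            bj = mcplus_threshold(zj, lam, gamma)
            if bj != beta[j]:
                r += X[:, j] * (beta[j] - bj)       # keep residual in sync
                max_delta = max(max_delta, abs(bj - beta[j]))
                beta[j] = bj
        if max_delta < tol:                         # converged: no coordinate moved much
            break
    return beta

As gamma -> infinity the threshold above reduces to soft thresholding (the lasso), and as gamma -> 1+ it approaches hard thresholding; this is the family of penalties that the article's pathwise algorithm traverses between those two extremes.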