Learning Multiple Belief Propagation Fixed Points for Real Time Inference

Title:
Learning Multiple Belief Propagation Fixed Points for Real Time Inference
Contributors:
Machine Learning and Optimisation (TAO), Laboratoire de Recherche en Informatique (LRI), Université Paris-Sud - Paris 11 (UP11)-CentraleSupélec-Centre National de la Recherche Scientifique (CNRS)-Université Paris-Sud - Paris 11 (UP11)-CentraleSupélec-Centre National de la Recherche Scientifique (CNRS)-Centre Inria de Saclay, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria), Informatique, Mathématiques et Automatique pour la Route Automatisée (IMARA), Inria Paris-Rocquencourt, ANR-08-SYSC-0017,TRAVESTI,Estimation du volume de Trafic par Inférence Spatio-temporelle(2008)
Source:
Physica A: Statistical Mechanics and its Applications. 389:149-163
Publisher Information:
CCSD; Elsevier, 2010.
Publication Year:
2010
Collection:
collection:EC-PARIS
collection:CNRS
collection:INRIA
collection:UNIV-PSUD
collection:INRIA-ROCQ
collection:INRIA-SACLAY
collection:INRIA_TEST
collection:TESTALAIN1
collection:UMR8623
collection:INRIA2
collection:LRI-AO
collection:UNIV-PARIS-SACLAY
collection:UNIV-PSUD-SACLAY
collection:ANR
Original Identifier:
HAL:
Document Type:
Journal article
Language:
English
ISSN:
0378-4371
Relation:
info:eu-repo/semantics/altIdentifier/doi/10.1016/j.physa.2009.08.030
DOI:
10.1016/j.physa.2009.08.030
Rights:
info:eu-repo/semantics/OpenAccess
Accession Number:
edshal.inria.00371372v2
Database:
HAL

Further Information

In the context of inference with expectation constraints, we propose an approach based on the "loopy belief propagation" (LBP) algorithm as a surrogate for exact Markov random field (MRF) modelling. Prior information, composed of correlations among a large set of N variables, is encoded into a graphical model; this encoding is optimized with respect to an approximate decoding procedure, LBP, which is used to infer hidden variables from an observed subset. We focus on the situation where the underlying data have many different statistical components, representing a variety of independent patterns. Considering a single-parameter family of models, we show how LBP may be used to encode and decode such information efficiently, without solving the NP-hard inverse problem that yields the optimal MRF. Contrary to usual practice, we work in the non-convex Bethe free energy minimization framework and manage to associate a belief propagation fixed point with each component of the underlying probabilistic mixture. The mean-field limit is considered and yields an exact connection with the Hopfield model at finite temperature and steady state, when the number of mixture components is proportional to the number of variables. In addition, we provide an enhanced learning procedure, based on a straightforward multi-parameter extension of the model in conjunction with an effective continuous optimization procedure. This is performed using the stochastic search heuristic CMA-ES and yields a significant improvement over the basic single-parameter model.
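As an illustrative aside (not the paper's implementation), loopy belief propagation iterates local messages on a graphical model until a fixed point is reached; which fixed point is found can depend on the initialisation, which is what makes it possible to associate distinct fixed points with distinct mixture components. Below is a minimal sketch of damped message passing on a pairwise MRF; the names (`loopy_bp`, `unary`, `pairwise`) and the specific update schedule are hypothetical choices, not the authors' code.

```python
import numpy as np

def loopy_bp(unary, pairwise, edges, n_states=2, n_iter=100,
             damping=0.5, tol=1e-8):
    """Damped loopy belief propagation on a pairwise MRF (illustrative sketch).

    unary:    dict node -> length-n_states array of local potentials
    pairwise: dict (i, j) -> (n_states, n_states) potential psi[x_i, x_j]
    edges:    list of (i, j) node pairs
    Returns approximate node marginals at the reached fixed point.
    """
    # Messages m[(i, j)](x_j): message from node i to node j, start uniform.
    msgs = {}
    neighbours = {}
    for (i, j) in edges:
        msgs[(i, j)] = np.ones(n_states) / n_states
        msgs[(j, i)] = np.ones(n_states) / n_states
        neighbours.setdefault(i, []).append(j)
        neighbours.setdefault(j, []).append(i)

    for _ in range(n_iter):
        delta = 0.0
        for (i, j) in list(msgs):
            # Orient the pair potential so rows index x_i, columns x_j.
            psi = pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T
            # Product of the unary potential and all incoming messages except j's.
            prod = unary[i].copy()
            for k in neighbours[i]:
                if k != j:
                    prod *= msgs[(k, i)]
            new = psi.T @ prod            # sum over x_i
            new /= new.sum()              # normalise for numerical stability
            new = damping * msgs[(i, j)] + (1.0 - damping) * new
            delta = max(delta, np.abs(new - msgs[(i, j)]).max())
            msgs[(i, j)] = new
        if delta < tol:                   # messages stopped moving: fixed point
            break

    # Beliefs: unary potential times all incoming messages, normalised.
    beliefs = {}
    for i in neighbours:
        b = unary[i].copy()
        for k in neighbours[i]:
            b *= msgs[(k, i)]
        beliefs[i] = b / b.sum()
    return beliefs
```

On a tree-structured graph this iteration has a unique fixed point and the beliefs are exact marginals; on loopy graphs the Bethe free energy can have several minima, and different message initialisations can land on different fixed points, which is the property the abstract exploits.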