Title:
An information-theoretic approach to distributed learning : distributed source coding under logarithmic loss ; Approche théorie de l’information à l’apprentissage statistique : codage distribué de sources sous mesure de fidélité logarithmique
Authors:
Contributors:
Laboratoire d'Informatique Gaspard-Monge (LIGM), Université Paris-Est Marne-la-Vallée (UPEM), École nationale des ponts et chaussées (ENPC), ESIEE Paris, Fédération de Recherche Bézout (BEZOUT), Centre National de la Recherche Scientifique (CNRS), Université Paris-Est, Abderrezak Rachedi
Source:
https://theses.hal.science/tel-02489021 ; Information Theory [math.IT]. Université Paris-Est, 2019. English. ⟨NNT : 2019PESC2062⟩.
Publisher Information:
CCSD
Publication Year:
2019
Document Type:
Dissertation (doctoral thesis)
Language:
English
Relation:
NNT: 2019PESC2062
Rights:
info:eu-repo/semantics/OpenAccess
Accession Number:
edsbas.F464A58
Database:
BASE

Further Information

One substantial question, often debated in learning theory, is how to choose a `good' loss function that measures the fidelity of the reconstruction to the original. Logarithmic loss is a natural distortion measure in settings in which the reconstructions are allowed to be `soft', rather than `hard' or deterministic. It is widely used as a penalty criterion in various contexts, including clustering and classification, pattern recognition, learning and prediction, and image processing. Given the considerable amount of recent research in these fields, logarithmic loss has become a very important metric, and it is the main distortion measure studied in this thesis.

In this thesis, we investigate a distributed setup, the so-called Chief Executive Officer (CEO) problem, under the logarithmic loss distortion measure. Specifically, agents observe independently corrupted noisy versions of a remote source and communicate independently with a decoder, or CEO, over rate-constrained noise-free links. The CEO also has its own noisy observation of the source and wants to reconstruct the remote source to within some prescribed distortion level, where the incurred distortion is measured under the logarithmic loss penalty criterion.

One of the main contributions of the thesis is the explicit characterization of the rate-distortion region of the vector Gaussian CEO problem, in which the source, the observations, and the side information are jointly Gaussian. To prove this result, we first extend Courtade and Weissman's result on the rate-distortion region of the discrete memoryless (DM) CEO problem to the case in which the CEO has access to a correlated side information stream such that the agents' observations are conditionally independent given the side information and the remote source. Next, we obtain an outer bound on the region of the vector Gaussian CEO problem by evaluating the outer bound of the DM model by means of a technique that relies on the de Bruijn identity and the ...
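
For concreteness, the logarithmic loss mentioned above has a standard definition: a `soft' reconstruction is a probability distribution over the source alphabet, and the distortion is the negative log-likelihood assigned to the true symbol. A minimal sketch in LaTeX, following the convention common in this literature (the per-letter block extension shown is the usual one; the thesis may state it differently):

\[
  % single-letter logarithmic loss: \hat{x} is a distribution on the source alphabet
  d(x, \hat{x}) = \log \frac{1}{\hat{x}(x)}, \qquad \hat{x} \in \mathcal{P}(\mathcal{X}),
\]
\[
  % block extension as a per-letter average (standard convention)
  d(x^n, \hat{x}^n) = \frac{1}{n} \sum_{i=1}^{n} \log \frac{1}{\hat{x}_i(x_i)}.
\]

Under this measure, the expected distortion of the optimal soft decoder equals a conditional entropy, which is what links the rate-distortion problem to learning-theoretic quantities.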
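
The conditional-independence condition imposed on the agents' observations can be written compactly. The labels below (remote source X, CEO side information Y_0, agent observations Y_1, ..., Y_K, rates R_1, ..., R_K) are illustrative notation chosen here, not necessarily the thesis's:

\[
  % observations are conditionally independent given the side information and the source
  p(y_1, \ldots, y_K \mid x, y_0) = \prod_{k=1}^{K} p(y_k \mid x, y_0),
\]
\[
  % fidelity requirement at the CEO, measured under logarithmic loss
  \mathbb{E}\big[ d(X^n, \hat{X}^n) \big] \le D .
\]

A rate-distortion tuple (R_1, ..., R_K, D) is achievable when codes of some blocklength n meet the rate constraint on each link while satisfying this distortion requirement.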
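
The de Bruijn identity invoked at the end of the abstract is classical; in its vector form it relates differential entropy along a Gaussian perturbation to the Fisher information matrix. Sketched here from the standard statement (the precise variant used in the thesis's converse proof may differ):

\[
  % Z is standard Gaussian, independent of X; J(.) denotes the Fisher information matrix
  \frac{\partial}{\partial t}\, h\big( \mathbf{X} + \sqrt{t}\, \mathbf{Z} \big)
    = \frac{1}{2}\, \operatorname{tr}\, \mathbf{J}\big( \mathbf{X} + \sqrt{t}\, \mathbf{Z} \big),
  \qquad \mathbf{Z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}).
\]

Identities of this type are a standard tool for evaluating discrete-memoryless outer bounds in Gaussian settings, which matches the proof outline given above.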