Image, Modélisation, Analyse, GEométrie, Synthèse (IMAGES), Laboratoire Traitement et Communication de l'Information (LTCI), Télécom Paris, Institut Mines-Télécom (IMT), Institut Polytechnique de Paris (IP Paris); Département Images, Données, Signal (IDS), Télécom ParisTech
Source:
Eighth International Conference on Scale Space and Variational Methods in Computer Vision (SSVM), May 2021, Cabourg (virtual), France ; https://hal.science/hal-03186499
Deep neural networks have recently surpassed other image restoration methods that rely on hand-crafted priors. However, such networks usually require large databases and need to be retrained for each new modality. In this paper, we show that we can reach near-optimal performance by training them on a synthetic dataset made of realizations of a dead leaves model, both for image denoising and super-resolution. The simplicity of this model makes it possible to create large databases with only a few parameters. We also show that training a network on a mix of natural and synthetic images does not affect results on natural images, while improving results on dead leaves images, which are classically used for evaluating the preservation of textures. We thoroughly describe the image model and its implementation before giving experimental results.
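
To make the abstract concrete, below is a minimal sketch of dead leaves image synthesis, assuming the standard formulation: disks with radii drawn from a power-law distribution r^(-alpha) between two cutoffs, uniformly random centres, and each disk occluding the ones drawn after it until the canvas is covered. The parameter names and default values (r_min, r_max, alpha) and the uniformly random grey levels are illustrative assumptions, not necessarily the authors' exact settings, which may use colours sampled from natural image statistics.

    import numpy as np

    def dead_leaves(size=256, r_min=1.0, r_max=100.0, alpha=3.0, seed=None):
        # Generate one greyscale dead leaves image of shape (size, size).
        # Assumed parameterization: p(r) ~ r^(-alpha) on [r_min, r_max].
        rng = np.random.default_rng(seed)
        img = np.zeros((size, size), dtype=np.float32)
        covered = np.zeros((size, size), dtype=bool)
        yy, xx = np.mgrid[0:size, 0:size]

        while not covered.all():
            # Radius by inverse-transform sampling of the power-law density.
            u = rng.uniform()
            r = (r_min ** (1 - alpha)
                 + u * (r_max ** (1 - alpha) - r_min ** (1 - alpha))) ** (1 / (1 - alpha))
            cx, cy = rng.uniform(0, size, 2)
            colour = rng.uniform()  # grey level of the new "leaf" (illustrative choice)
            disk = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
            new = disk & ~covered   # only pixels not already occluded by earlier leaves
            img[new] = colour
            covered |= disk

        return img

In such a setup, a call like dead_leaves(size=256, seed=0) would yield a clean synthetic target, from which noisy or downsampled inputs could be derived to form training pairs for denoising or super-resolution.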