
Title:
Divide and Rule: Effective Pre-Training for Context-Aware Multi-Encoder Translation Models
Contributors:
Groupe d’Étude en Traduction Automatique/Traitement Automatisé des Langues et de la Parole (GETALP), Laboratoire d'Informatique de Grenoble (LIG), Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes (UGA)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP); Université Grenoble Alpes (UGA); Naver Labs Europe [Meylan]; ANR-19-P3IA-0003, MIAI @ Grenoble Alpes (2019)
Source:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). :4557-4572
Publisher Information:
CCSD; Association for Computational Linguistics, 2022.
Publication Year:
2022
Collection:
collection:UGA
collection:CNRS
collection:INPG
collection:LIG
collection:LIG_TDCGE_GETALP
collection:GENCI
collection:POLYTECH-GRENOBLE
collection:MIAI
collection:PNRIA
collection:UGA-EPE
collection:ANR
collection:LIG_SIDCH
collection:ANR-IA-19
collection:ANR-IA
collection:TEST-UGA
Original Identifier:
ARXIV: 2103.17151
HAL: hal-03847719
Document Type:
Conference paper (conferenceObject)
Language:
English
Relation:
info:eu-repo/semantics/altIdentifier/arxiv/2103.17151; info:eu-repo/semantics/altIdentifier/doi/10.18653/v1/2022.acl-long.312
DOI:
10.18653/v1/2022.acl-long.312
Rights:
info:eu-repo/semantics/openAccess
Accession Number:
edshal.hal.03847719v1
Database:
HAL

Further Information

Multi-encoder models are a broad family of context-aware neural machine translation systems that aim to improve translation quality by encoding document-level contextual information alongside the current sentence. The context encoding is undertaken by contextual parameters, trained on document-level data. In this work, we discuss the difficulty of training these parameters effectively, due to the sparsity of the words in need of context (i.e., the training signal) and of their relevant context. We propose to pre-train the contextual parameters over split sentence pairs, which makes efficient use of the available data for two reasons. First, it increases the contextual training signal by breaking intra-sentential syntactic relations, thus pushing the model to search the context for disambiguating clues more frequently. Second, it eases the retrieval of relevant context, since context segments become shorter. We propose four different splitting methods and evaluate our approach with BLEU and contrastive test sets. Results show that it consistently improves the learning of contextual parameters, in both low- and high-resource settings.
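
As a rough illustration of the splitting idea, the sketch below turns ordinary sentence pairs into (context, current) training examples by breaking each pair at its token midpoint. This is only a minimal sketch: the abstract does not specify the paper's four splitting methods, so the midpoint heuristic and all function and field names here are assumptions for illustration, not the authors' procedure.

    # Minimal sketch: derive synthetic (context, current) segments from
    # sentence-level parallel data for pre-training contextual parameters.
    # The midpoint split is a hypothetical stand-in for the paper's four
    # splitting methods.

    def split_pair(src_tokens, tgt_tokens):
        """Split source/target token lists at their midpoints.

        Returns ((src_context, tgt_context), (src_current, tgt_current)).
        """
        s_mid = max(1, len(src_tokens) // 2)
        t_mid = max(1, len(tgt_tokens) // 2)
        return (
            (src_tokens[:s_mid], tgt_tokens[:t_mid]),  # synthetic context segment
            (src_tokens[s_mid:], tgt_tokens[t_mid:]),  # "current" segment to translate
        )

    def make_pretraining_examples(corpus):
        """Yield context-aware examples from (src, tgt) sentence pairs.

        Because the split severs intra-sentential syntactic relations,
        the model must consult the (short) context segment to recover
        disambiguating clues, increasing the contextual training signal.
        """
        for src, tgt in corpus:
            src_toks, tgt_toks = src.split(), tgt.split()
            if len(src_toks) < 2 or len(tgt_toks) < 2:
                continue  # too short to split
            (src_ctx, tgt_ctx), (src_cur, tgt_cur) = split_pair(src_toks, tgt_toks)
            yield {
                "context_src": " ".join(src_ctx),
                "context_tgt": " ".join(tgt_ctx),
                "src": " ".join(src_cur),
                "tgt": " ".join(tgt_cur),
            }

    if __name__ == "__main__":
        demo = [("the cat sat on the mat", "le chat était assis sur le tapis")]
        for example in make_pretraining_examples(demo):
            print(example)

Under this scheme, the context segments are by construction short, which matches the abstract's second argument: retrieval of relevant context is easier when the context to search is smaller.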