
Title:
Jargon: A Suite of Language Models and Evaluation Tasks for French Specialized Domains
Source:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, pp. 9463-9476 (2024)
Publisher Information:
ELRA and ICCL 2024
Document Type:
Electronic Resource
Availability:
Open access content.
info:eu-repo/semantics/openAccess
Note:
English
Other Numbers:
UCDLC oai:dial.uclouvain.be:boreal:287780
boreal:287780
1508050187
Contributing Source:
UNIVERSITE CATHOLIQUE DE LOUVAIN
From OAIster®, provided by the OCLC Cooperative.
Accession Number:
edsoai.on1508050187
Database:
OAIster

Further Information

Pretrained Language Models (PLMs) are the de facto backbone of most state-of-the-art NLP systems. In this paper, we introduce a family of domain-specific PLMs for French, focusing on three important domains: transcribed speech, medicine, and law. We use a transformer architecture based on an efficient attention method (LinFormer) to maximise their utility, since these domains often involve processing long documents. We evaluate and compare our models to state-of-the-art models on a diverse set of tasks and datasets, some of which are introduced in this paper. We gather these datasets into a new French-language evaluation benchmark for the three domains. We also compare various training configurations: continued pretraining, pretraining from scratch, and single- versus multi-domain pretraining. Extensive domain-specific experiments show that competitive downstream performance can be attained even when pretraining with the approximate LinFormer attention mechanism. For full reproducibility, we release the models and pretraining data, as well as the contributed datasets.
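The abstract's key architectural choice is LinFormer-style attention, which replaces the quadratic self-attention matrix with a low-rank approximation so long documents become tractable. A minimal NumPy sketch of the idea is below; the shapes, projection matrices, and single-head setup are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def linformer_attention(Q, K, V, E, F):
    """Linformer-style attention: learned projections E, F compress the
    keys and values along the sequence dimension (n -> k) before the
    attention product, so cost drops from O(n^2) to O(n*k)."""
    d = Q.shape[-1]
    K_proj = E @ K                        # (k, d): n key rows -> k
    V_proj = F @ V                        # (k, d): n value rows -> k
    scores = Q @ K_proj.T / np.sqrt(d)    # (n, k) instead of (n, n)
    return softmax(scores, axis=-1) @ V_proj  # (n, d)

# toy sizes: sequence length n=8, model dim d=4, projected length k=2
rng = np.random.default_rng(0)
n, d, k = 8, 4, 2
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E, F = (rng.standard_normal((k, n)) / np.sqrt(n) for _ in range(2))
out = linformer_attention(Q, K, V, E, F)
print(out.shape)  # (8, 4): same output shape as full attention
```

The attention matrix here is n-by-k rather than n-by-n, which is what makes pretraining on long transcribed-speech, medical, and legal documents affordable; the paper's finding is that this approximation still yields competitive downstream performance.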