Declarative, generic definition and effective implementation of transfer learning algorithms
License: Creative Commons Attribution (CC BY), http://creativecommons.org/licenses/by/
Machine learning, especially deep learning, has become essential in many application domains. However, deep learning relies on artificial neural networks that often face resource-related limitations: data is often proprietary, model training can be costly, and deploying models may be constrained by limited computational or storage resources. Transfer learning addresses these constraints by "transferring" a model from a source domain to a target domain, potentially in a different context. This transfer takes various forms: models can be adapted with minor structural changes (e.g., "fine-tuning"), reduced in size (e.g., "knowledge distillation"), or retrained with modified training and testing datasets (e.g., "domain adaptation"). This paper first motivates, through a literature review, the need for a generic definitional framework and implementation support for transfer learning. We then introduce Generic Transfer Learning (GTL), our proposal for such a framework. GTL supports the declarative definition of transfers through network transformations and dataset manipulations, and includes corresponding Python implementation support. We finally present a case study demonstrating how to define and implement a transfer using GTL in the health domain.
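To make the "fine-tuning" form of transfer concrete, the following is a minimal, self-contained sketch in plain NumPy. It is not the GTL API described in the paper; all names here are illustrative. It shows the core idea: a hidden layer whose weights were (hypothetically) trained on a source domain is frozen, and only the output head is re-fitted on a small target dataset.

```python
# Hypothetical fine-tuning sketch: freeze the source model's feature
# extractor, re-fit only the final layer on target-domain data.
# Illustrative only; this is NOT the paper's GTL framework.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w_hidden, w_out):
    """Two-layer network: hidden feature layer, then a linear head."""
    h = np.tanh(x @ w_hidden)   # feature extractor (kept frozen)
    return h @ w_out, h         # task head (adapted to the target task)

# "Source" model: pretend these weights came from source-domain training.
w_hidden_src = rng.normal(size=(4, 8))
w_out_src = rng.normal(size=(8, 1))

# Small target-domain dataset (synthetic stand-in).
x_tgt = rng.normal(size=(32, 4))
y_tgt = rng.normal(size=(32, 1))

# Fine-tuning step: with w_hidden frozen, fit w_out on the target data
# (closed-form least squares on the frozen features).
_, h = forward(x_tgt, w_hidden_src, w_out_src)
w_out_tgt, *_ = np.linalg.lstsq(h, y_tgt, rcond=None)

# Compare target-data error before and after adapting the head.
pred_src, _ = forward(x_tgt, w_hidden_src, w_out_src)
pred_tgt, _ = forward(x_tgt, w_hidden_src, w_out_tgt)
err_src = float(np.mean((pred_src - y_tgt) ** 2))
err_tgt = float(np.mean((pred_tgt - y_tgt) ** 2))
```

Because the head is re-fitted by least squares on the frozen features, its training error on the target data cannot exceed that of the unadapted source head; distillation and domain adaptation would instead shrink the network or transform the datasets, respectively.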