Feb. 2024
Speaker: Florence Regol
Institution: McGill
Time: 15:45 - 16:45
Location: 3L15
Large pretrained models, coupled with fine-tuning, are slowly becoming established as the dominant architecture in machine learning. Unfortunately, these models are highly computationally intensive. Early-exiting dynamic neural networks (EDNN) circumvent this issue by allowing a model to make some of its predictions from intermediate layers (i.e., to exit early). An EDNN model can be viewed as a sequence of classifiers of increasing performance that are evaluated sequentially until the prediction is satisfactory. The main goal of an EDNN is to identify the "easy" samples for which less computation is required to obtain good inference, a task closely related to uncertainty quantification: assessing how confident the (intermediate) model is in its prediction. Training an EDNN architecture is challenging because it consists of two intertwined components: the gating mechanism (GM) that controls early-exiting decisions and the intermediate inference modules (IMs) that perform inference from intermediate representations. This presentation will focus on current solutions and open problems related to uncertainty characterization and modeling challenges of EDNNs.
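As a rough illustration of the sequential-evaluation idea described in the abstract, the sketch below implements a toy early-exit network in PyTorch. The confidence-threshold gate (maximum softmax probability), the layer sizes, and the threshold value are illustrative assumptions for this sketch, not the method presented in the talk.

```python
import torch
import torch.nn as nn


class EarlyExitNet(nn.Module):
    """Toy backbone split into blocks, each followed by an intermediate
    inference module (IM) that can produce a prediction early."""

    def __init__(self, in_dim=32, hidden=64, num_classes=10, num_blocks=3):
        super().__init__()
        dims = [in_dim] + [hidden] * num_blocks
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU())
             for i in range(num_blocks)]
        )
        # One IM (classifier head) per block.
        self.ims = nn.ModuleList(
            [nn.Linear(dims[i + 1], num_classes) for i in range(num_blocks)]
        )

    @torch.no_grad()
    def predict(self, x, threshold=0.9):
        """Evaluate blocks sequentially for a single sample; exit as soon as
        the (assumed) gating rule, max softmax probability >= threshold,
        deems the prediction satisfactory."""
        h = x
        for depth, (block, im) in enumerate(zip(self.blocks, self.ims)):
            h = block(h)
            probs = im(h).softmax(dim=-1)
            confidence, pred = probs.max(dim=-1)
            if confidence.item() >= threshold or depth == len(self.blocks) - 1:
                return pred.item(), depth  # early (or final) exit


if __name__ == "__main__":
    model = EarlyExitNet()
    x = torch.randn(1, 32)  # one "sample" of random features
    label, exit_depth = model.predict(x, threshold=0.9)
    print(f"predicted class {label} at exit {exit_depth}")
```

In this sketch the gate and the IMs share the same heads, which makes the two intertwined components mentioned in the abstract explicit: changing how the IMs are trained changes the confidences the gate sees, and vice versa.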