ISTI-Talk: ‘Where is your head at?’ Accessing and Exploiting Deep Learning Model Inner Thinking
- Date and time: 12 November 2025, 11:00
- Place: Area della Ricerca CNR di Pisa - Room: C-29
Abstract
Deep learning models have revolutionized the field of image analysis, enabling a wide array of applications previously constrained to complex mathematical domains. This significant gain in predictive power, however, often comes at the cost of model interpretability. This "black box" phenomenon has inspired the development of a critical research branch known as Explainable AI (XAI).
While XAI techniques are primarily used for post-hoc analysis to investigate a model's behavior during inference, many of these algorithms can also be applied during training. The same methods designed for a posteriori interpretation can be adapted to actively guide a model's gradients during the learning phase, offering a novel approach to building more transparent and reliable systems from the beginning. This proactive guidance can help ensure models learn meaningful features, potentially improving robustness and generalization.
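To make this concrete, the sketch below shows how one common XAI attribution, the vanilla input gradient, can be computed in a differentiable way so that a training loss can later be built on top of it. This is a minimal sketch assuming a standard PyTorch classifier; the function name `input_saliency` is illustrative, not from the talk.

```python
import torch
import torch.nn as nn

def input_saliency(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Vanilla-gradient attribution: d(true-class logit) / d(input pixels)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)                                # shape (batch, num_classes)
    score = logits.gather(1, y.unsqueeze(1)).sum()   # sum of true-class logits
    # create_graph=True keeps the attribution itself differentiable, so a
    # penalty computed from it can be backpropagated into the model weights
    (grads,) = torch.autograd.grad(score, x, create_graph=True)
    return grads
```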
This talk will detail the conception, design, and evolution of this novel training approach. We will explore various algorithms for accessing and modifying model gradients and demonstrate how to implement a custom loss function to guide the learning process. The methodology will first be demonstrated on foundational datasets such as MNIST, Fashion-MNIST, and Medical-MNIST. Following this, we will broaden the discussion to consider the application of these techniques to more complex domains, including medical imaging and the classic CIFAR-100 benchmark.
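One plausible instantiation of such a custom loss, in the spirit of "right for the right reasons" training, combines the usual cross-entropy with a penalty on attribution mass over uninformative pixels. The sketch below applies this to MNIST; the weight `LAMBDA` and the crude dark-background mask are illustrative assumptions, not the speaker's exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Small MLP classifier for 28x28 grayscale digits
model = nn.Sequential(nn.Flatten(),
                      nn.Linear(28 * 28, 128), nn.ReLU(),
                      nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loader = DataLoader(
    datasets.MNIST("data", train=True, download=True,
                   transform=transforms.ToTensor()),
    batch_size=64, shuffle=True)

LAMBDA = 0.1  # weight of the attribution penalty -- illustrative value

for x, y in loader:
    x.requires_grad_(True)                  # needed for d(logit)/d(input)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)

    # Differentiable input-gradient attribution of the true-class logit
    score = logits.gather(1, y.unsqueeze(1)).sum()
    (sal,) = torch.autograd.grad(score, x, create_graph=True)

    # Penalise attribution mass on near-empty background pixels, nudging
    # the model to base its decision on the digit strokes themselves
    background = (x.detach() < 0.1).float()  # crude mask: dark pixels
    xai_loss = (sal.abs() * background).mean()

    loss = task_loss + LAMBDA * xai_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Swapping the hand-crafted background mask for expert annotations, such as lesion masks, is the natural extension to the medical imaging domain mentioned above.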