Giovani in un'ora - Seminar Series - Part Two

Date - Time: 05 February 2020, 11:00
Place: Area della Ricerca CNR di Pisa - Room: C-29
Speakers

Parvaneh Parvin, Riccardo Guidotti

Referent

Andrea Esuli

Abstract

Parvaneh Parvin - "Just-in-time Adaptive Anomaly Detection and Personalized Health Feedback"

Abstract: Rapid population aging and the availability of sensors and intelligent objects motivate the development of healthcare systems; these systems, in turn, meet the needs of older adults by supporting them in their day-to-day activities. Collecting information about older adults' daily activities can help detect abnormal behavior. Anomaly detection can then be combined with real-time, continuous, and personalized interventions to help older adults actively enjoy a healthy lifestyle. During my collaboration with the JADS Lab in the Netherlands, we developed an approach to generate personalized health feedback that encourages behaviors conducive to a healthier lifestyle. Our method uses a Mamdani-type fuzzy rule-based component to predict the level of intervention needed for each detected anomaly, and a sequential decision-making algorithm, the Contextual Multi-armed Bandit, to generate suggestions that minimize anomalous behavior. In this talk, I will briefly describe the system's architecture and provide some example implementations of the corresponding health feedback. The results of this collaboration have been published in the Journal of Ambient Intelligence and Smart Environments, 11(5), 2019.
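To give a flavor of the second component, here is a minimal, hypothetical sketch of an epsilon-greedy contextual multi-armed bandit in Python. The suggestion list, context features, and reward signal below are invented placeholders for illustration only, not the system presented in the talk:

import numpy as np

rng = np.random.default_rng(0)

SUGGESTIONS = ["take a walk", "drink water", "call a caregiver"]  # hypothetical arms
N_ARMS, CTX_DIM, EPSILON = len(SUGGESTIONS), 3, 0.1

# One linear reward model per arm: estimated reward = weights[arm] . context
weights = np.zeros((N_ARMS, CTX_DIM))
counts = np.ones(N_ARMS)  # pull counts, started at 1 to simplify the update

def choose(context):
    """Epsilon-greedy: explore a random suggestion, else exploit the best estimate."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ARMS))
    return int(np.argmax(weights @ context))

def update(arm, context, reward):
    """Incrementally refine the chosen arm's linear reward estimate."""
    counts[arm] += 1
    error = reward - weights[arm] @ context
    weights[arm] += (error / counts[arm]) * context

# Toy loop: the context could encode anomaly type, time of day, activity level.
for step in range(1000):
    context = rng.random(CTX_DIM)
    arm = choose(context)
    reward = float(rng.random() < context[arm])  # fake user response
    update(arm, context, reward)

print("learned weights per suggestion:\n", weights)

In the system described in the abstract, the context would plausibly be derived from the detected anomaly and the intervention level predicted by the fuzzy component, and the reward from the user's response to the feedback.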

Riccardo Guidotti - "Black Box Explanation by Learning Image Exemplars in the Latent Feature Space"

Abstract: We present an approach to explain the decisions of black box models for image classification. While using the black box to label images, our explanation method exploits the latent feature space learned by an adversarial autoencoder. The proposed method first generates exemplar images in the latent feature space and learns a decision tree classifier. It then selects and decodes exemplars respecting local decision rules. Finally, it visualizes them in a way that shows the user how the exemplars can be modified either to stay within their class or to become counterfactuals by "morphing" into another class. Since we focus on black box decision systems for image classification, the explanation obtained from the exemplars also provides a saliency map highlighting the areas of the image that contribute to its classification and the areas that push it toward another class. We present the results of an experimental evaluation on three datasets and two black box models. Besides providing the most useful and interpretable explanations, we show that the proposed method outperforms existing explainers in terms of fidelity, relevance, coherence, and stability.
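To make the pipeline concrete, here is a simplified, hypothetical Python skeleton of the exemplar-generation idea: sample points around the instance in the latent space, label the decoded samples with the black box, fit a decision tree surrogate, and split the samples into exemplars and counter-exemplars. The encoder, decoder, and black box below are stand-in placeholders, and the selection step is simplified relative to the paper, which selects exemplars via the tree's local decision rules:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
LATENT_DIM = 8

# Stand-ins for the learned components the abstract assumes:
# an adversarial autoencoder (encode/decode) and the opaque classifier.
def encode(image):      # hypothetical AAE encoder
    return rng.standard_normal(LATENT_DIM)

def decode(z):          # hypothetical AAE decoder; a real one returns an image
    return z

def black_box(image):   # hypothetical black box classifier
    return int(np.sum(image) > 0)

def explain(image, n_samples=500, n_exemplars=3):
    """Sketch of the exemplar-based explanation loop described in the abstract."""
    z0 = encode(image)
    # 1. Sample synthetic points around the instance in latent space.
    Z = z0 + 0.5 * rng.standard_normal((n_samples, LATENT_DIM))
    # 2. Label each decoded sample with the black box.
    y = np.array([black_box(decode(z)) for z in Z])
    # 3. Fit an interpretable surrogate (a decision tree) on the latent samples.
    tree = DecisionTreeClassifier(max_depth=4).fit(Z, y)
    target = black_box(decode(z0))
    same = Z[y == target][:n_exemplars]     # exemplars: stay within the class
    counter = Z[y != target][:n_exemplars]  # counter-exemplars: other classes
    # 4. Decode and return; a real system would render these as images.
    return [decode(z) for z in same], [decode(z) for z in counter], tree

exemplars, counterfactuals, surrogate = explain(rng.standard_normal(LATENT_DIM))
print(len(exemplars), "exemplars,", len(counterfactuals), "counter-exemplars")

The saliency map mentioned in the abstract would then come from comparing the instance with its decoded exemplars and counter-exemplars, highlighting the pixels whose change moves the image across the class boundary.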
