Cecilia Panigutti and Fosca Giannotti from the Scuola Normale Superiore (SNS), Dino Pedreschi from the University of Pisa, and Andrea Beretta from CNR-ISTI have won the Honorable Mention Award for their paper:

Understanding the impact of explanations on advice-taking: a user study for AI-based clinical Decision Support Systems

Trust in AI applications exists on a spectrum ranging from distrust to over-reliance. When AI systems are used in high-stakes settings such as clinical Decision Support Systems (DSS), an appropriate level of trust in AI suggestions is pivotal to ensuring human oversight of the system. Ideally, AI explanations should support this trust-calibration process, allowing doctors to adjust their level of trust to the actual reliability of the AI system. Our paper investigates the impact of AI explanations on trust and technology adoption in healthcare. We conducted an online user study with 28 healthcare professionals, designing an estimation task that let us evaluate their implicit trust in an AI-based clinical DSS that explained its suggestions. Comparing this explainable clinical DSS with its black-box version, we showed that healthcare professionals trusted the AI system more when it explained why it reached its conclusions.