Multimodal User Interfaces

Thematic Area: Knowledge

We design and develop methods, languages, tools, and applications for multimodal user interfaces, which allow users to interact with a system through multiple human senses in order to improve the user experience. We consider them in both stationary and mobile scenarios. One main goal is to obtain dynamic combinations of modalities depending on the actual context of use, so as to better support end users. For this purpose we consider various modalities (exploiting different physiological parameters), such as graphics, voice, gesture, vibro-tactile feedback, gaze, and brain activity, among others, which can be combined in different ways (complementarity, redundancy, assignment, equivalence) depending on the desired effect.
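As an illustration of the four composition types mentioned above, the following minimal sketch shows how a system could pick, at runtime, the way two output modalities (graphics and voice) are combined based on the context of use. All class and function names here are hypothetical assumptions for illustration, not an actual API of our tools.

    # Hypothetical sketch: selecting a modality composition from the context of use.
    from dataclasses import dataclass
    from enum import Enum, auto


    class Composition(Enum):
        COMPLEMENTARY = auto()  # each modality carries part of the message
        REDUNDANT = auto()      # both modalities carry the same message in parallel
        ASSIGNMENT = auto()     # one specific modality is always used
        EQUIVALENCE = auto()    # either modality may be used interchangeably


    @dataclass
    class Context:
        mobile: bool       # user is on the move
        noisy: bool        # loud environment, audio channel unreliable
        hands_busy: bool   # visual attention or touch is limited


    def choose_composition(ctx: Context) -> Composition:
        """Decide how graphics and voice output are combined for the current context."""
        if ctx.noisy and ctx.hands_busy:
            # Neither channel is fully reliable: split the message across both.
            return Composition.COMPLEMENTARY
        if ctx.mobile:
            # Present the same information on both channels to be safe.
            return Composition.REDUNDANT
        if ctx.noisy:
            # Force the visual channel only.
            return Composition.ASSIGNMENT
        # Stationary, quiet setting: let the user choose either modality.
        return Composition.EQUIVALENCE


    if __name__ == "__main__":
        print(choose_composition(Context(mobile=True, noisy=False, hands_busy=False)))

In a real setting the context would be sensed dynamically (e.g., from device sensors), and the chosen composition would drive how the interface renders its output.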

Contact person

Fabio Paternò

Projects
