Recent Internal Seminars

More information on internal seminars can be requested from Claudia Raviolo

Computational Approaches to Nonverbal Communication: Personality Synthesis and Interaction in Embodied VR

05 November 2018, 11:00 - Location: C-29

Speakers:
Michael Neff (Department of Computer Science and Department of Cinema and Digital Media - University of California, Davis, CA - USA)
Referent:
Roberto Scopigno

This talk will discuss two different bodies of work. The first part of the talk will examine how nonverbal communication, and in particular gesture, conveys personality to observers. Our work is framed using the Five Factor or OCEAN model of personality from social psychology. Drawing both from the psychology literature and primary perceptual research, I'll show how movement changes impact perceived character personality. I'll look at particular movement changes that influence personality traits and also show that in many cases, people may be making two distinct personality judgments, rather than five, as would be expected for the five factor personality model.
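As a loose numerical illustration of that last point (a hypothetical sketch of mine, not the speaker's analysis): if observers' ratings on the five OCEAN traits were driven by only two underlying judgments, a principal component analysis of the rating matrix should find two components carrying most of the variance.

# Hypothetical sketch, not the actual study data or analysis: simulate
# five-trait ratings generated by two latent judgments and check how many
# principal components dominate.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_raters, n_traits = 200, 5                # ratings of one character on OCEAN traits

latent = rng.normal(size=(n_raters, 2))    # two hidden judgments per rater
loadings = rng.normal(size=(2, n_traits))  # how each judgment maps to the traits
ratings = latent @ loadings + 0.3 * rng.normal(size=(n_raters, n_traits))

pca = PCA().fit(ratings)
print(np.round(pca.explained_variance_ratio_, 3))  # first two components dominate

On such data the first two variance ratios sum to nearly one, which is the kind of signature the talk points to when arguing for two judgments rather than five.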
The second part of the talk will focus on how people interact in virtual reality when provided with motion-tracked avatars. I'll discuss a study that compared three conditions: interaction in VR with embodied avatars, interaction in a shared VR environment without avatars, and face-to-face interaction. Interestingly, the embodied VR condition performed comparably to face-to-face social interaction on a range of measures, including social presence and conversational turn length.

Bio:
Michael Neff is a professor in Computer Science and Cinema & Digital Media at the University of California, Davis, where he leads the Motion Lab, an interdisciplinary research effort in character animation and embodied interaction. He holds a Ph.D. from the University of Toronto and is also a Certified Laban Movement Analyst. His interests include character animation, especially the modeling of expressive movement, nonverbal communication, gesture, and the application of performing arts knowledge to animation. He received an NSF CAREER Award, the Alain Fournier Award for his dissertation, two best paper awards, and the Isadora Duncan Award for Visual Design. He is past Chair of the Department of Cinema and Digital Media.

Graph Transduction and Model-driven Visual Dictionary Construction with Applications in Ancient Coin Classification and Computer Vision

12 October 2018, 11:00 - Location: C-29

Speakers:
Sinem Aslan (European Centre of Living Technology (ECLT), Università Ca' Foscari Venezia)
Referent:
Ercan Kuruoglu

This talk will present two of our works in the field of computer vision. I will first introduce our recent work on ancient coin classification using Graph Transduction Games (GTG). Recognizing the type of an ancient coin requires theoretical expertise and years of experience in numismatics. A common way to determine the period of a discovered coin is to search through the reference books in which ancient coins are indexed, which is highly time-consuming and demanding labor. Automating this manual procedure not only yields faster processing times but can also support historians and archaeologists in reaching more accurate decisions. To this end, we proposed to model ancient coin image classification with a nonparametric semi-supervised learning approach, namely Graph Transduction Games (GTG). In this part, I will first introduce the challenges this particular problem poses from a computer vision point of view and then present our solution.
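For readers curious about the mechanics, here is a minimal, self-contained sketch of graph transduction in the GTG spirit (an illustrative reimplementation under my own assumptions about the features and similarity kernel, not the speaker's code): each image is a player, pairwise similarities act as payoffs, and replicator dynamics drive the unlabeled players toward a consistent labeling.

# Illustrative sketch of GTG-style graph transduction; real inputs would be
# CNN descriptors of coin images rather than random feature vectors.
import numpy as np

def gtg_classify(features, labels, n_classes, iters=100, sigma=1.0):
    """features: (n, d) array; labels: (n,) with -1 marking unlabeled points."""
    n = len(features)
    # Gaussian similarity between all pairs (the payoff matrix W).
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)

    # Mixed strategies: labeled points are one-hot, unlabeled start uniform.
    X = np.full((n, n_classes), 1.0 / n_classes)
    for i, y in enumerate(labels):
        if y >= 0:
            X[i] = 0.0
            X[i, y] = 1.0

    unlabeled = labels < 0
    for _ in range(iters):
        payoff = W @ X                     # expected payoff of each strategy
        X[unlabeled] *= payoff[unlabeled]  # replicator-dynamics update
        X[unlabeled] /= X[unlabeled].sum(1, keepdims=True)
    return X.argmax(1)

# Toy usage: two well-separated clusters, one labeled example per class.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(3, 1, (20, 8))])
y = np.full(40, -1)
y[0] = 0
y[20] = 1
print(gtg_classify(feats, y, n_classes=2))

The appeal of this formulation for the coin problem is that it needs only a handful of labeled exemplars per coin type; the similarity graph propagates those labels to the rest of the collection.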
Then, I will briefly present a previous work, a model-driven visual dictionary construction technique named SymPaD, developed for image understanding applications. In SymPaD, dictionary building starts from a core of shape primitives, which have commonalities with the shape models envisaged by the earliest to the latest proponents of the idea, from Marr to Griffin. We then enrich the dictionary by using a detailed parametrization of the shape space and by applying nonlinear dyadic compositions. Compared with existing model-driven schemes, our method is able to represent and characterize images in various image understanding applications with competitive, and often better, performance.
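The dictionary-growing idea can be sketched in a few lines (the primitive family and the composition operators below are illustrative assumptions of mine, not the actual SymPaD parametrization): start from a parametrized family of shape primitives and enrich it with nonlinear pixel-wise compositions of pairs.

# Illustrative model-driven dictionary: parametrized bar primitives enriched
# by nonlinear dyadic (pairwise) compositions.
import numpy as np
from itertools import combinations

def bar(size=16, angle=0.0, width=2.0):
    """An oriented bar primitive on a size x size patch."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    d = np.abs(x * np.sin(angle) - y * np.cos(angle))  # distance to bar axis
    return (d < width).astype(float)

# Core primitives: bars swept over an angle parameter.
primitives = [bar(angle=a) for a in np.linspace(0, np.pi, 8, endpoint=False)]

# Nonlinear dyadic compositions: pixel-wise min (intersection-like) and
# max (union-like) of each pair yield corners, crosses, and richer shapes.
dictionary = list(primitives)
for p, q in combinations(primitives, 2):
    dictionary.append(np.minimum(p, q))
    dictionary.append(np.maximum(p, q))

print(len(dictionary))  # 8 primitives + 2 * C(8, 2) compositions = 64

Sweeping further parameters (scale, width, curvature) and composing again would grow the dictionary in the same fashion, which is the general pattern the talk describes.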

Biography
Sinem Aslan has been a visiting postdoctoral scholar at the European Centre of Living Technology (ECLT) of Ca' Foscari University of Venice since May 2018. Prior to that, she was a postdoctoral researcher for one year in the Imaging and Vision Laboratory (IVL) at the Department of Informatics, Systems and Communication of the University of Milano-Bicocca, where she worked on food image segmentation for automatic dietary assessment applications.
She obtained her Ph.D. degree (October 2016) from the International Computer Institute, Ege University, Turkey, under the supervision of Prof. Bülent Sankur and Prof. E. Turhan Tunali. In her thesis, she mainly investigated model-based visual dictionary techniques for image understanding applications. During her Ph.D. studies, she held a visiting position in the BUSIM laboratory at Boğaziçi University, Turkey, for two semesters. Before that, she received her M.Sc. degree (2007) from the International Computer Institute at Ege University and her B.Sc. degree from the Department of Electronics Engineering at Ankara University. Beyond her research background, she has 13 years of experience as a research/teaching assistant in Turkey.
She is a reviewer for various international journals, including IEEE Transactions on Image Processing, Multimedia Tools and Applications, Pattern Analysis and Applications, Signal, Image and Video Processing, IET Image Processing, and the Turkish Journal of Electrical Engineering and Computer Sciences, as well as for the IEEE SIU conference.

Deep Image Quality Metric

10 October 2018, 11:15 - Location: C-29

Speakers:
Francesco Banterle
Referent:
Roberto Scopigno

Image metrics based on the human visual system (HVS) play a remarkable role in the evaluation of complex image processing algorithms. However, mimicking the HVS is typically complex and demands substantial computational resources (in both time and memory), which limits such metrics to a few applications or to small input sizes and makes them unattractive in real-world scenarios. To address these issues, we propose the Deep Image Quality Metric (DIQM), a deep-learning approach that learns visual metric features such as the probability of visibility changes and global image quality (mean opinion score). DIQM reduces the computational cost of existing visual metrics by more than an order of magnitude.
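The general recipe can be sketched as follows (a minimal illustration; the architecture, layer sizes, and input resolution are assumptions of mine, not the published DIQM network): a compact CNN takes a reference/distorted image pair and regresses a scalar quality score in a single forward pass, replacing an expensive HVS simulation.

# Minimal PyTorch sketch of a learned image quality metric; the architecture
# here is a placeholder, not the published DIQM design.
import torch
import torch.nn as nn

class QualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(  # small convolutional trunk
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)    # scalar quality score (e.g. MOS)

    def forward(self, reference, distorted):
        x = torch.cat([reference, distorted], dim=1)  # stack RGB pairs -> 6 channels
        return self.head(self.features(x).flatten(1))

# One forward pass on a dummy 256x256 image pair.
net = QualityNet()
ref = torch.rand(1, 3, 256, 256)
dist = torch.rand(1, 3, 256, 256)
print(net(ref, dist).item())

Once trained against scores produced by a slow HVS-based metric, such a network answers in a single forward pass, which is where the order-of-magnitude speed-up comes from.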
