Recent internal seminars

More information on internal seminars can be requested from Claudia Raviolo.

Mozart's Laptop: Implications for Creativity in Multimedia Digital Libraries and Beyond

03 October 2016, 15:00 - Location: C-29

Speakers:
David Bainbridge (Professor of Computer Science at the University of Waikato, New Zealand)
Referent:
Vittore Casarosa

If Mozart were alive today, what sorts of musical apps would such an innovative composer use on his laptop? In this seminar I will attempt to answer - at least in part - this question. We will metaphorically drop in on Wolfgang composing at home in the morning, at an orchestra rehearsal in the afternoon, and find him unwinding in the evening playing a spot of the new game Piano Hero which is (in my fictional narrative) all the rage in the Viennese coffee shops! From a pedagogical perspective, these three scenarios are chosen because they cover the main forms of digital music representation: audio, sheet music, and symbolic notation. In each case I will demonstrate software prototypes that combine digital music library and music information retrieval research to provide novel forms of access and management of musical digital content. I will then broaden the discussion and relate the work to other forms of media, and (going beyond this) contemplate whether the presented research fits the established definition of a digital library, or if it is perhaps time to repurpose traditional ideas about the structure and capabilities of digital libraries, or even revisit what we define as a digital library.
Professor Bainbridge is Director of the New Zealand Digital Library Research Project.

Probabilistic PCTL*: The deductive way

03 October 2016, 10:00 - Location: C-29

Speakers:
Luis Ferrer Fioriti (Dependable Systems and Software group, Saarland University, Saarbrücken, Germany)
Referent:
Mieke Massink

Complex probabilistic temporal behaviours need to be guaranteed in robotics and various other control domains, as well as in the context of families of randomized protocols. At its core, this entails checking infinite-state probabilistic systems with respect to quantitative properties specified in probabilistic temporal logics. Model checking methods are not directly applicable to infinite-state systems, and techniques for infinite-state probabilistic systems are limited in terms of the specifications they can handle.

We present a deductive approach to the verification of countable-state systems against properties specified in probabilistic CTL*, on models featuring both nondeterministic and probabilistic choices. The deductive proof system we propose lifts the classical proof system by Kesten and Pnueli to the probabilistic setting. However, the soundness arguments are completely distinct and go via the theory of martingales. Completeness results for the weakly finite case illustrate the theoretical effectiveness of our approach.

This is joint work with Rayna Dimitrova, Holger Hermanns and Rupak Majumdar.

A Framework for Evaluating and Validating Energy-saving Cyber-Physical Systems in the Railway Domain through Stochastic Activity Network and Stochastic Hybrid Automata

03 October 2016, 11:00 - Location: C-29

Speakers:
Davide Basile
Referent:
Mieke Massink

Cautious usage of energy resources is gaining great attention nowadays, from both an environmental and an economic point of view. Therefore, studies devoted to analysing and predicting energy consumption in a variety of application sectors are becoming increasingly important, especially in combination with other non-functional properties, such as reliability, safety and availability.

The talk will focus on energy consumption strategies in the railway sector, addressing in particular rail road switches through which trains are guided from one track to another. Given the criticality of their task, the temperature of these devices needs to be kept above certain levels to assure their correct functioning.

By applying a stochastic model-based approach, we analyse a family of energy consumption strategies based on thresholds that trigger the activation/deactivation of the energy supply. The goal is to offer an assessment framework through which appropriate tuning of threshold-based energy supply solutions can be achieved, so as to select the most appropriate one, resulting in a good compromise between energy consumption and reliability level.

In particular, Stochastic Activity Network and Stochastic Hybrid Automata models of rail road switch heaters will be evaluated with the Möbius and Uppaal SMC tools, respectively.
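To make the threshold mechanism concrete, here is a minimal toy simulation of a two-threshold (hysteresis) heating policy. All numbers (ambient range, heating rate, thresholds) are invented for illustration, and this sketch has nothing to do with the SAN/SHA models actually evaluated in the talk:

```python
import random

def simulate_heater(t_on, t_off, hours=24, t_start=5.0, seed=0):
    """Toy simulation of a rail-switch heater driven by a two-threshold
    (hysteresis) policy: heating turns on when the temperature drops
    below t_on and off once it rises above t_off.
    Returns (hours_of_energy_use, hours_below_freezing)."""
    rng = random.Random(seed)               # fixed seed -> reproducible run
    temp, heating = t_start, False
    energy = failures = 0
    for _ in range(hours):
        ambient = rng.uniform(-10.0, 2.0)   # invented winter ambient range
        if heating:
            temp += 3.0                     # invented heating rate (deg/hour)
            energy += 1
        temp += 0.2 * (ambient - temp)      # drift toward ambient temperature
        if temp <= 0.0:
            failures += 1                   # switch risks freezing this hour
        if temp < t_on:
            heating = True
        elif temp > t_off:
            heating = False
    return energy, failures

# Compare two candidate threshold pairs under the same weather trace.
frugal = simulate_heater(t_on=1.0, t_off=4.0)
cautious = simulate_heater(t_on=3.0, t_off=8.0)
```

Tuning `t_on`/`t_off` trades the energy count against the freezing count, which is exactly the compromise the assessment framework is meant to quantify.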

Loving Vincent: Guiding Painters through 64,000 frames... and other unrelated work

20 September 2016, 14:00 - Location: C-40

Speakers:
Francho Melendez (University of Wroclaw)
Referent:
Francesco Banterle

In this presentation, Prof. Melendez will present some of his earlier work in photogrammetry and the 3D reconstruction of buildings. He will then present a recent collaboration on the movie "Loving Vincent", the first feature-length oil-painted animation film.

On the processing of facial blood concentration and saturation maps from hyperspectral data (ISTI Grants for Young Mobility seminar series)

07 September 2016, 11:00 - Location: C-29

Referent:
Andrea Esuli

Hyperspectral data, i.e. images collected over tens of narrow, contiguous intervals of the light spectrum, are a natural choice for expanding face recognition image analysis, especially since they may provide information beyond normal human sensing. Non-invasive and real-time facial mapping of blood concentration and saturation levels could provide useful indicators of the health status of individuals with respect to cardio-metabolic risk factors (e.g., hypercholesterolemia, hyperglycaemia, endothelial dysfunction), which are among the leading causes of mortality worldwide. In this seminar, we show how concentration and saturation maps can be computed from the smallest meaningful subset of spectral bands. This would reduce cost and power consumption for hyperspectral imaging hardware platforms, which otherwise require fast computers and large data storage capacities to analyze hyperspectral data. We also show an application of blood concentration maps, demonstrating their effectiveness. Finally, we discuss problems concerning the generation of facial saturation maps and provide a starting point for post-processing and validating such noisy images.
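The idea of picking a small but meaningful subset of bands can be illustrated with a generic greedy relevance-minus-redundancy selector. This is a standard feature-selection sketch under invented toy data, not the method used in the seminar:

```python
from statistics import mean

def corr(x, y):
    """Pearson correlation between two equal-length band vectors."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    dx = sum((a - mx) ** 2 for a in x) ** 0.5
    dy = sum((b - my) ** 2 for b in y) ** 0.5
    return num / (dx * dy) if dx and dy else 0.0

def select_bands(bands, target, k):
    """Greedily pick k bands: favour those correlated with the target
    quantity while penalising redundancy with bands already chosen."""
    chosen = []
    while len(chosen) < k:
        best, best_score = None, float("-inf")
        for i, band in enumerate(bands):
            if i in chosen:
                continue
            relevance = abs(corr(band, target))
            redundancy = max((abs(corr(band, bands[j])) for j in chosen),
                             default=0.0)
            if relevance - redundancy > best_score:
                best, best_score = i, relevance - redundancy
        chosen.append(best)
    return chosen
```

A duplicate of an already-selected band scores zero marginal gain, so the selector skips it in favour of a band carrying complementary information, which is the intuition behind shrinking the band set without losing the signal.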

Large Scale Data Analytics: Challenges, and the role of Stratified Data Placement

16 June 2016, 15:00 - Location: C-29

Speakers:
Srinivasan Parthasarathy (The Ohio State University)
Referent:
Raffaele Perego

With the increasing popularity of XML data stores, social networks and Web 2.0 and 3.0 applications, complex data formats, such as trees and graphs, are becoming ubiquitous. Managing and processing such large and complex data stores on modern computational ecosystems, to realize actionable information efficiently, is daunting. In this talk I will begin by discussing some of these challenges. Subsequently I will discuss a critical element at the heart of this challenge: the placement, storage and access of such tera- and peta-scale data. In this work we develop a novel distributed framework to ease the burden on the programmer and propose an agile and intelligent placement service layer as a flexible yet unified means to address this challenge. Central to our framework is the notion of stratification, which seeks to initially group structurally (or semantically) similar entities into strata. Subsequently, strata are partitioned within this ecosystem according to the needs of the application to maximize locality, balance load, minimize data skew or even take into account energy consumption. Results on several real-world applications validate the efficacy and efficiency of our approach.

Bio: Srinivasan Parthasarathy is a Professor of Computer Science at Ohio State. He directs the data mining research lab and co-directs the undergraduate major in data analytics -- a first-of-its-kind effort in the US. His work has received eight best paper awards or similar honors from leading conferences in Data Mining, Database Systems and Network Science. He has received the Ameritech Faculty Fellowship; an NSF CAREER award; a DOE ECPI award; and numerous research awards from industry (e.g. Google, Microsoft, IBM) for his work in these areas. An active area of interest currently is in the role of network analysis and data mining in modern emergency response systems that couple both social (citizen) sensing and physical sensing modalities. He also serves as the steering committee chair for the SIAM Data Mining conference series and sits on the editorial board of several leading journals in the field.

Sound and Music Computing for Cultural Heritage

26 May 2016, 14:30 - Location: A-32

Speakers:
Federico Avanzini, Giovanni De Poli (Sound and Music Computing Group - Università di Padova)
Referent:
Matteo Dellepiane

On 26 May the Sound and Music Computing Group of the Università di Padova will visit us. As part of the visit (during which the research activities of the Visual Computing Lab will be presented to the visiting students), there will be a short seminar on applications of sound and music computing to Cultural Heritage.

The topic of the seminar will be the following: Sound and Music Computing (SMC) is a research field with an intrinsic vocation for multidisciplinarity and applications to cultural heritage. In this talk we present related research directions currently pursued by the CSC-SMC group at the University of Padova. We first discuss the problem of digital philology applied to the preservation of audio archives, focusing on automatic metadata extraction (e.g., video information of audio tapes) and on the re-creation of interactive multimedia installations. Secondly, we discuss new means of interaction with cultural heritage, specifically through interactive museum installations aimed at increasing the engagement of visitors, as well as enabling new forms of multimodal interactive learning. Ultimately, applying new technologies to interactive museum installations can create stronger consensus and interest for the preservation of cultural heritage.

Dealing with incompleteness in automata-based model checking

26 May 2016, 15:00 - Location: C-29

Speakers:
Paola Spoletini (Professor at Kennesaw State University, GA, USA)
Referent:
Alessio Ferrari

A software specification is often the result of an iterative process that transforms an initial incomplete model through refinement decisions. A model is incomplete because decisions about certain functionalities are postponed to a later stage and perhaps even delegated to third parties. A delayed functionality may be later detailed, and alternative solutions are often explored to evaluate tradeoffs.
Model checking has been proposed as a technique to verify that the model of the system under development is compliant with its requirements. However, most classical model checking algorithms assume that a complete model and fully specified requirements are given: they do not support incompleteness. Furthermore, as changes are made, by either adding a part that was previously not specified or by revisiting a previous design decision, the whole model of the system must be verified from scratch. A verification-driven design process would instead benefit from the ability to apply formal verification at any stage, hence also to incomplete models. After any change, it is desirable that only a minimal part of the model (the portion affected by the change, called the replacement) is analyzed, thus reducing the verification effort.
In our work we propose an approach that extends the classical automata-based model checking procedure for LTL properties to deal with incompleteness. The proposed model checking approach evaluates whether a property holds for an incomplete model. In addition, when satisfaction of the property depends on the incomplete parts (indicating delayed functionalities), it computes a constraint that must be satisfied by the replacement to ensure satisfaction of the property. The constraint is then verified on the replacement.

YRA 2015 seminars, part two

25 May 2016, 14:15 - Location: C-29

Referent:
Matteo Dellepiane

Second and final session of the seminars by the winners of the Young Researcher Award 2015, who will present their current and future research topics. The three seminars will be given by:

Michele Girolami - Social-awareness reshapes the context: WiFi probing, Bluetooth beaconing and protocol handshaking are all networking activities issued regularly by most of the pocket devices we bring with us. Such activities disseminate digital crumbs in the network that are often discarded. Nevertheless, such crumbs hold great potential for revealing interesting aspects of human behavior, such as mobility, and useful insights into the social context. I will first introduce Mobile Social Networks (MSN), which represent the background of my current research. Then, I will discuss how understanding the social context can benefit three application problems we studied: detecting ties through sensing information, discovering resources in MSN, and combining participatory with opportunistic crowd sensing.

Riccardo Guidotti - Carpooling: Challenges & Overview: Carpooling, i.e. the sharing of vehicles to reach common destinations, is often performed to reduce costs and pollution. Experience shows that it is difficult to boost the adoption of carpooling to significant levels, because it is not seen as an everyday means of transport and also because of people's reticence about sharing the car with strangers. We use network analytics to analyze the potential impact of carpooling as a collective phenomenon emerging from people's mobility to reduce the number of people driving alone. We then improve our approach by means of constraint programming and machine learning, through the introduction of demographic features and user feedback. Moreover, we show how, by integrating private transport routines into a public transit network, it is possible, in principle, to derive better recommendations. Finally, we develop a system that optimizes carpooling not only by minimizing the number of cars needed at city level, but also by maximizing the enjoyability of people sharing a trip.

Maria Antonietta Pascali - 3D face morphology and body fat: preliminary results from the SEMEOTICONS project: The central idea of the SEMEOTICONS project is to analyse the face automatically and unobtrusively as a major indicator of an individual's well-being status, for the prevention of cardio-metabolic disease. To this aim, heterogeneous data are acquired through a smart mirror (Wize Mirror) and processed. Among the acquired data, here we focus on the geometric analysis of 3D facial data, extracted through a low-cost scanner, in order to infer as much information as possible about an individual's body-fat-related parameters (such as body weight, body mass index, waist circumference, neck circumference) and their variations, since excess body fat is one of the most important factors of cardio-metabolic risk.

YRA 2015 seminars, part one

12 May 2016, 14:15 - Location: C-29

Referent:
Matteo Dellepiane

First session of the seminars by the winners of the Young Researcher Award 2015, who will present their current and future research topics. The three seminars will be given by:

Luigi Malomo - Elastic textures for additive fabrication: One of the greatest limitations of 3D printing is that the objects we print are made of a single material, usually cold hard plastic. In this talk we will show how, using a single-material 3D printer, it is possible to fabricate structures with custom elasticity, and how to exploit this feature to design objects with a prescribed mechanical behavior.

Filippo Palumbo - Ambient Assisted Living: The general goal of Ambient Assisted Living (AAL) solutions is to apply ambient intelligence technologies to enable people with specific demands, e.g. people with disabilities or the elderly, to live longer in their preferred environment. The seminar will present the key results achieved in the last year as a researcher at ISTI-CNR, in particular the progress made on the two major research problems that still prevent the spread of AAL solutions in real environments: (i) the need for a common medium to transmit the sensory data and the information produced by algorithms, and (ii) the unobtrusiveness of context-aware applications in terms of both placement of devices and period of observation.

Luca Pappalardo - The harsh rule of the goals: data-driven performance indicators for football teams: Sports analytics in general, and football (soccer in the USA) analytics in particular, have evolved in recent years in an impressive way, thanks to automated or semi-automated sensing technologies that provide high-fidelity data streams extracted from every game. In this seminar we propose a data-driven approach and show that there is a large potential to boost the understanding of football team performance. From observational data of football games we extracted a set of pass-based performance indicators and summarized them in the H indicator. We observed a strong correlation between the proposed indicator and the success of a team, and therefore performed a simulation on the four major European championships. We found that the final rankings in the simulated championships are very close to the actual rankings in the real championships, and that teams with a high ranking error exhibit extreme values of a defense/attack efficiency measure, the Pezzali score. Our results are surprising given the simplicity of the proposed indicators, suggesting that a complex-systems view of football data has the potential to reveal hidden patterns and behavior of superior quality.

Exploring the Critical and Cultural Dimensions of SoBigData: From Empowering Data Citizens to Persona Non Data

05 April 2016, 14:00 - Location: C-29

Speakers:
Mark Coté (Programme Director, Digital Culture and Society, King's College London)
Referent:
Fosca Giannotti

This talk will present insights and innovations from a series of recent and ongoing research projects at King's College London. Beyond the analysis and modelling of SoBigData, our highly interdisciplinary research focuses on the processes of datafication, particularly their social, cultural, political and economic dimensions. Our projects hypothesise SoBigData as the new key matrix of power-knowledge relations and explore how we can increase collective agency through greater access to, and critical and creative use of, the data we generate.
Short bio:
Dr. Mark Coté is a leading researcher in the social, cultural, political and economic dimensions of big data. He has received numerous research grants from UK and European research councils as both PI and Co-I, partnering with the Open Data Institute, the British Library, the University of Aarhus, and many others. He has presented his research at the Royal Society, the British Academy, Transmediale, the International Communication Association, and the Berlin Institut für Auslandsbeziehungen, among others. His work has been published widely across leading journals including Big Data & Society.

The illusion that comes with a price: privacy on the web

25 February 2016, 09:30 - Location: C-29

Speakers:
Gábor György Gulyás (Privatics team, Inria)
Referent:
Salvatore Ruggieri

We are used to the fact that things are free on the web. Since the Snowden case, more people know that even free services have their price. In spite of this, most of us still don't take any countermeasures to protect our privacy. But is it hard to protect our online activities? Or is this battle already lost? The first part of this talk will give an overview of the history of web privacy, and of how and why our online presence is exploited in exchange for free services. In the second part, we will discuss fingerprinting, a tracking method, in detail, and see how the underlying problem of identification relates to other platforms, such as smartphones, and to location privacy.
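As a rough illustration of why fingerprinting works, the sketch below hashes a set of observable attributes into a stable identifier and quantifies how identifying a single attribute value is via its surprisal. The attribute names are hypothetical, not a real browser API; real fingerprinters combine dozens of browser and device properties:

```python
import hashlib
import json
import math

def fingerprint(attrs):
    """Hash a dict of observable attributes into a stable short identifier."""
    canonical = json.dumps(attrs, sort_keys=True)  # order-independent encoding
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

def surprisal_bits(value_frequency):
    """Identifying power of one attribute value: a value shared by 1 user
    in 1024 contributes 10 bits toward a unique fingerprint."""
    return -math.log2(value_frequency)

# Hypothetical observable configuration: any single change yields a new ID.
browser = {"user_agent": "UA-X", "screen": "1920x1080", "fonts_detected": 312}
fp = fingerprint(browser)
```

No cookie is stored: the identifier is recomputed from the device's own characteristics on every visit, which is what makes this form of tracking hard to opt out of.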

Bio: Gábor György Gulyás is a postdoctoral researcher in the Privatics team at Inria. He currently works at the intersection of data privacy and machine learning, but he is also interested in specific areas such as web privacy, de-anonymization and privacy in social networks. He received his Ph.D. in 2015 from the Budapest University of Technology and Economics (BME), Hungary, where he worked on re-identification in social networks in the CrySyS Lab. He received his MSc degree from BME with a specialization in computer security.

Spatial, statistical and morphological 3D shape analysis for automatic and interactive computer graphics applications

24 February 2016, 14:30 - Location: C-29

Speakers:
Tamy Boubekeur (LTCI, CNRS, Telecom ParisTech, Paris-Saclay University)
Referent:
Paolo Cignoni

Shape analysis takes many forms in computer graphics, and one can easily think there are as many analysis primitives as applications needing them. However, for certain classes of shape analysis methods, a single analysis primitive may be used for a large number of different application scenarios. During this presentation, I will summarize our recent work on shape analysis for both automatic processing and interactive editing of 3D shapes in computer graphics. In particular, I will focus on three classes of analysis methods and present the Sphere-Mesh approximation model with its application to freeform deformation, the SimSelect system, which exploits self-similarity in shapes to reduce repetitive editing tasks such as selection, and the Point Morphology framework for automatic shape and topology filtering of point clouds.

Relative scale estimation and 3D registration of multi-modal geometry using Growing Least Squares

27 January 2016, 11:00 - Location: C-29

Speakers:
Matteo Dellepiane
Referent:
Matteo Dellepiane

The advent of low-cost scanning devices and the improvement of multi-view stereo techniques have made the acquisition of 3D geometry ubiquitous. Data gathered from different devices, however, exhibit large variations in detail, scale, and coverage. Registration of such data is essential before visualizing, comparing and archiving them. However, state-of-the-art methods for geometry registration cannot be directly applied due to intrinsic differences between the models, e.g. sampling, scale, noise. In this talk we present a method for the automatic registration of multi-modal geometric data, i.e. data acquired by devices with different properties (e.g. resolution, noise, data scaling). The method uses a descriptor based on Growing Least Squares; it is robust to noise, variation in sampling density, and details, and enables scale-invariant matching. It allows not only the measurement of the similarity between the geometry surrounding two points, but also the estimation of their relative scale. As it is computed locally, it can be used to analyze large point clouds composed of millions of points. We implemented our approach in two registration procedures (assisted and automatic) and applied them successfully to a number of synthetic and real cases. We show that using our method, multi-modal models can be automatically registered, regardless of their differences in noise, detail, scale, and unknown relative coverage.

Possible uses of Bluetooth Low Energy in AmI applications (YRA seminar)

18 December 2015, 17:30 - Location: C-29

Speakers:
Filippo Palumbo
Referent:
Alberto Gotta

Ambient Intelligence is the vision of a future in which environments support the people inhabiting them. In order to realise it, environments should be pervasively but unobtrusively equipped with sensors offering data to intelligent systems that provide useful services such as localization and navigation, environmental personalization, and health and well-being monitoring. Each of these services is built upon different technologies and uses different signal types, usually based on specialized and non-interoperable hardware. The presence of Bluetooth as a common wireless technology standard for exchanging data could be a possible solution. However, in past years, practical issues, mostly related to its lengthy scan procedure and its power consumption, have limited the use of Bluetooth in such applications. The recent introduction of the Bluetooth 4.0 specification has potentially addressed these problems by means of the Bluetooth Low Energy (BLE, also known as Bluetooth Smart) subsystem. BLE devices are small, inexpensive and designed to run on batteries for many months. It is expected that many buildings will contain a high density of BLE devices in the near future. In this talk we describe the basics and the new features of BLE, some application scenarios, and a possible solution to the indoor localization issue.

Big Data Analytics: systems and algorithmic challenges

03 December 2015, 15:00 - Location: C-29

Speakers:
prof. Pietro Michiardi (EURECOM)
Referent:
Patrizio Dazzi

In this talk, I will overview a selection of topics covering both systems and algorithmic aspects of large-scale data analysis. At the system level, we will overview current research on topics ranging from resource management (e.g. scheduling and elasticity) to multi-query optimization (also known as work-sharing), and indicate current trends in large-scale distributed systems design and architectures. At the algorithmic level, we will focus on machine learning and the intricacies of designing and analyzing scalable algorithms for both bounded and unbounded data, and discuss real-life applications and lessons learned.

Why Open Access? A researcher's perspective

23 October 2015, 14:30 - Location: A-27

Speakers:
Jean-Claude Guédon (Université de Montréal, Canada)
Referent:
Paolo Manghi

Open Access, as its name indicates, is about access to (and re-use of) validated scientific results (and, more recently, data as well); but it is also about regaining a clear perspective on the needs scientists have in terms of scientific communication and its control. By following some of the salient events in the recent history of scientific communication and of the Open Access movement, this talk will try to demonstrate the strategic importance of Open Access to the very dynamics of science and the quality of the knowledge produced.

Designing structures for additive fabrication

07 October 2015, 11:00 - Location: C-29

Speakers:
Denis Zorin (New York University)
Referent:
Nico Pietroni

Additive fabrication (3D printing) presents a range of unique challenges and opportunities for computational design. One of the distinctive features of additive fabrication is effectively free complexity, making it possible to use complex small-scale structures to achieve various design goals. However, designing such structures manually is difficult or impossible, and automated methods are needed. Another feature of additive fabrication is a short design-to-fabrication pipeline, enabling many people without professional modeling and engineering experience to create unique and customized products. Yet most design software does not offer accessible and intuitive tools that help users produce designs that are manufacturable and have the expected physical behavior. In this talk, I will describe our work on methods addressing some aspects of these problems.

Bio: Denis Zorin is a Professor of Computer Science and Mathematics and the Chair of the Computer Science Department at the Courant Institute of Mathematical Sciences at New York University. His areas of research include geometric modeling and processing, physically-based simulation and numerical methods for scientific computing. He received a PhD in Computer Science from the California Institute of Technology; before joining the faculty at NYU, he was a postdoctoral researcher at Stanford University. He was a Sloan Foundation Fellow, and received the NSF CAREER award and several IBM Faculty Partnership Awards. He is a co-recipient of the ACM Gordon Bell Prize. His former students and postdocs went on to become faculty members at a number of leading universities.

Data-Driven Interactive Quadrangulation

22 July 2015, 15:30 - Location: C-40

Speakers:
Giorgio Marcias
Referent:
Nico Pietroni

We propose an interactive quadrangulation method based on a large collection of patterns that are learned from models manually designed by artists. The patterns are distilled into compact quadrangulation rules and stored in a database. At run-time, the user draws strokes to define patches and desired edge flows, and the system queries the database to extract fitting patterns to tessellate the interiors of the sketched patches. The quadrangulation patterns are general and can be applied to tessellate large regions while controlling the positions of the singularities and the edge flow. We demonstrate the effectiveness of our algorithm through a series of live retopology sessions and an informal user study with three professional artists.

An overview of the NeMIS IT facilities

07 July 2015, 14:00 - Location: A-27

Speakers:
Andrea Dell'Amico
Referent:
Leonardo Candela

This seminar gives an overview of the NeMIS IT facilities. In particular, it gives a detailed description of the core systems exploited to operate the NeMIS services. Approaches aiming at scalability, redundancy and automated provisioning will be presented. As for provisioning, Ansible will be presented in detail through usage examples describing how it can be exploited to automate most of our infrastructure.

Real-time Hand Motion Tracking from a Single RGBD Camera

29 June 2015, 11:00 - Location: C-29

Speakers:
Andrea Tagliasacchi (École Polytechnique Fédérale de Lausanne)
Referent:
Paolo Cignoni

We present a robust method for capturing articulated hand motions in realtime using a single depth camera. Our system is based on a realtime registration process that accurately reconstructs hand poses by fitting a 3D articulated hand model to depth images. We register the hand model using depth, silhouette, and temporal information. To effectively map low-quality depth maps to realistic hand poses, we regularize the registration with kinematic and temporal priors, as well as a data-driven prior built from a database of realistic hand poses. We present a principled way of integrating such priors into our registration optimization to enable robust tracking without severely restricting the freedom of motion. A core technical contribution is a new method for computing tracking correspondences that directly models occlusions typical of single-camera setups. To ensure reproducibility of our results and facilitate future research, we fully disclose the source code of our implementation.

Biography: Andrea Tagliasacchi is an assistant professor at University of Victoria (BC, Canada). Before joining UVic, he was a postdoctoral researcher in the Graphics and Geometry Laboratory at EPFL. Andrea obtained his M.Sc. (cum laude, gold medal in ENCS) in digital signal processing from Politecnico di Milano. He completed his Ph.D. in 2013 at Simon Fraser University as an NSERC Alexander Graham Bell scholar. His doctoral research at SFU focused on digital geometry processing (skeletal representations and surface reconstruction). Recently his interests are real-time registration and modeling, with applications to augmented reality and human-machine interaction.

Identifying interaction workflows with complex artefacts

21 May 2015, 11:30 - Location: C-29

Speakers:
Markel Vigo (The University of Manchester, UK)
Referent:
Fabio Paternò

I argue that interaction with many forms of data constitutes 'complex' interaction, characterised by multitasking situations, highly specialised domains, high cognitive demand, and large data size - and it is often also inhibited by poorly designed user interfaces that fail to support effective understanding and manipulation. Conventional usability methods are of limited applicability to the study of these complex interactions. In this talk I demonstrate novel approaches to modelling complex interactions using event and eye-movement data in the context of ontology authoring. I will discuss how the lessons learned can be applied to model-driven evaluation and generation of user interfaces.

HCI in a World of Data Science

21 May 2015, 12:00 - Location: C-29

Speakers:
Simon Harper (The University of Manchester, UK)
Referent:
Fabio Paternò

This talk will try to set out where Human-Computer Interaction sits in a world rapidly moving to Data Science. Data Science is seen as a home for statisticians, machine learners, data warehousers, and the like, but it seems to have nowhere for the human component. I suggest HCI is much more useful than many computer scientists creating tools and analytics would think. How we understand big and broad data can be augmented by rich HCI techniques; how we understand scientist users and how we communicate complex data outcomes to the public (allowing them some form of involvement) are all directly within the HCI domain. To have impact, data needs a narrative, and HCI can give it one.
Slides at: http://sharpic.github.io/hci-ds

Metric Indexes based on Recursive Voronoi Partitioning

19 May 2015, 15:00 - Location: C-29

Speakers:
David Novak (Faculty of Informatics, Masaryk University, Brno, Czech Republic )
Referent:
Fabrizio Falchi

In this talk, we target the problem of search efficiency vs. answer quality of approximate metric-based similarity search. We especially focus on techniques based on recursive Voronoi-like partitioning or, from another perspective, on pivot permutations. These techniques use sets of reference objects (anchors/pivots) to partition the metric space into cells of close data items. Instead of refining the search space by enlarging the anchor set of a single index, we propose to divide a large pivot set into several subsets and build multiple indexes with independent space partitioning; at query time, the overall search costs are also divided among the separate indexes. Our thorough experimental study on three different real datasets uncovers drawbacks of excessive increase of a single pivot set size—such partitioning refinement can be counterproductive beyond a certain number of pivots. Our approach overcomes the root causes of this limitation and increases the answer quality while preserving the search costs. Further, we address the question of robustness of the answer quality, which can be significantly improved by utilization of independent anchor spaces.
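The pivot-permutation partitioning described above can be illustrated with a minimal sketch (this is an illustrative toy, not the speaker's actual index; all function names and parameter values are hypothetical): each object falls into the cell identified by the prefix of its pivot ordering, and a large pivot set is split into disjoint subsets, one independent index per subset.

```python
def pivot_permutation(x, pivots, dist, prefix=2):
    """Order pivot indices by distance from x; the prefix identifies the cell."""
    order = sorted(range(len(pivots)), key=lambda i: dist(x, pivots[i]))
    return tuple(order[:prefix])

def build_indexes(data, pivots, dist, n_indexes=4, prefix=2):
    """Split one large pivot set into disjoint subsets and build one
    permutation-based inverted index (cell -> objects) per subset."""
    k = len(pivots) // n_indexes
    subsets = [pivots[i * k:(i + 1) * k] for i in range(n_indexes)]
    indexes = []
    for sub in subsets:
        cells = {}
        for x in data:
            cells.setdefault(pivot_permutation(x, sub, dist, prefix), []).append(x)
        indexes.append((sub, cells))
    return indexes

def search(q, indexes, dist, prefix=2, k=3):
    """Approximate k-NN: probe the query's cell in every index and rank
    the union of candidates by true distance."""
    cand = set()
    for sub, cells in indexes:
        cand.update(cells.get(pivot_permutation(q, sub, dist, prefix), []))
    return sorted(cand, key=lambda x: dist(q, x))[:k]
```

At query time the search cost is split across the independent indexes, mirroring the talk's proposal of several small anchor spaces instead of one ever-growing pivot set.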

ISTI Young Researcher Award 2015 - Young Researchers Day

29 April 2015, 11:00 - Location: C-29

Referent:
Leonardo Candela

This is a set of 3 seminars from the recipients of the ISTI YRA 2015 - Category: Young Researchers

Title: Unsupervised Induction of Relations within a Reconstruction Framework
Speaker: Diego Marcheggiani (ISTI-NeMIS)
Abstract: Relation extraction is the task that aims to extract structured information between elements from unstructured sources such as natural language text. This helps when one wants to create a knowledge graph in which the nodes are entities (person, organization, event) and the edges that connect the nodes are the relations between entities (works_for, attend, CEO_of). In this work we approached the task of relation extraction with a novel, unsupervised model that induces relations starting from raw text. The method primarily consists of two components: (i) an encoding component which predicts the relation between two found entities given a rich set of syntactic and semantic features, and (ii) a reconstruction component, expressed as a tensor factorization model, which relies on the relations predicted by (i) to predict back the argument fillers (as word embeddings). The entire model can be seen as a neural autoencoder that, instead of learning the model parameters from labeled data, learns the best label that allows the reconstruction component to reconstruct the input.

Title: “It’s a long way to the top" Predicting Success via Innovators
Speaker: Giulio Rossetti (ISTI-KDD)
Abstract: Can we predict whether a new product, in the early stage of its life cycle, will later become a success, i.e., a widely adopted innovation? Based on large real datasets of adoption logs, we discovered that successful innovations, the hits, are often adopted, during their early stages, by a peculiar category of innovators, the hitters, that exhibit a propensity to adopt goods that will become hits. This intuition is the basis of a predictive model, Hits&Flops, able to identify the hitters and use them as signals to forecast success. The method, evaluated on three large datasets from real markets, exhibits high accuracy and highlights the precious segment of hitters, the customers/users that anticipate success.

Title: Data Revolution: Unlock the Power of Big Data
Speaker: Lorenzo Gabrielli (ISTI-KDD)
Abstract: Big Data allow us to understand and describe individual and collective behavior at an unprecedented speed, scale and detail. An intriguing open question is whether measurements made on large datasets recording human activities can be used for official statistics. Can we identify residents, commuters and visitors moving in a particular territory? Can we monitor and predict the economic development of a territory just by observing the behavior of its inhabitants? In my research, I am designing a data-driven analytical framework that uses nation-wide data to build a sort of Sociometer: an analytical tool to monitor and predict the economic situation of a territory. In the presentation I will show two applications: 1) how to improve the statistical production of presences within a territory, and 2) how to measure the well-being of society through mobile phone data.

Current Research in Human-generated Data at the Big Data Analytics Institute

21 April 2015, 14:00 - Location: C-29

Speakers:
Stan Matwin (Dalhousie University, Halifax, Canada)
Referent:
Fosca Giannotti

In this presentation we will review current research projects in progress at the Institute for Big Data Analytics at Dalhousie University, Halifax, Canada. While we are involved in a variety of applied projects, their common denominator is that the data is derived, directly or indirectly, from different human activities: shopping, transporting goods, moving around cities, and discussing issues. Projects include work with the movement of ships on oceans, with human mobility data from the use of mobile phones and WiFi hotspots, with the analysis of postings in on-line discussions, and with Point of Sale data for specific types of goods. We will look at the algorithmic issues related to ship tracking, data privacy issues related to human mobility, and data integration issues related to shopping data.

Stan Matwin is a Professor and Canada Research Chair in the Faculty of Computer Science at Dalhousie University, where he directs the Institute for Big Data Analytics. He is also a Professor at the Institute of Computer Science, Polish Academy of Sciences. His research interests are in text analytics, data mining, as well as in data privacy. Author and co-author of more than 250 research papers and articles, Stan is a former President of the Canadian Artificial Intelligence Society, a member of the Scientific Council of the Polish Artificial Intelligence Society, and a member of Association Francaise pour l’Intelligence Artificielle.

Ambiguity as a Resource to Disclose Tacit Knowledge (seminario GYM)

20 April 2015, 11:00 - Location: C-40

Speakers:
Alessio Ferrari
Referent:
Andrea Esuli

Interviews are the most common and effective means to perform requirements elicitation and support knowledge transfer between a customer and a requirements analyst. Ambiguity in communication is often perceived as a major obstacle for knowledge transfer, which could lead to unclear and incomplete requirements documents. In this paper, we analyse the role of ambiguity in requirements elicitation interviews. To this end, we have performed a set of customer-analyst interviews to observe how ambiguity occurs during requirements elicitation. From this direct experience, we have observed that ambiguity is a multi-dimensional cognitive phenomenon with a dominant pragmatic facet, and we have defined a phenomenological framework to describe the different types of ambiguity in interviews. We have also discovered that, rather than an obstacle, the occurrence of an ambiguity is often a resource for discovering tacit knowledge. Starting from this observation, we have envisioned the further steps needed to exploit these findings.

The seminar is held at the conclusion of a research period abroad funded by a GYM grant.

On the use of TCP on Random Access Satellite Links (seminario GYM)

01 April 2015, 15:00 - Location: C-29

Speakers:
Felice Bacco
Referent:
Felice Bacco

Grants for Young Mobility (GYM) seminar.

Applied geometry processing: results and future research

06 March 2015, 10:00 - Location: I-53

Speakers:
Nico Pietroni
Referent:
Nico Pietroni

The main purpose of industrial prototyping is to create a tangible representation of an arbitrarily complex object. This activity usually starts with the design of a digital 3D model, often using Computer Aided Design tools. The ability to manipulate, analyse and optimize a digital shape is fundamental. This can be addressed either interactively, by providing semi-assisted intuitive instruments, or automatically. A new generation of geometry processing algorithms has been designed to meet the requirements of specific application domains. Thanks to these new capabilities, significant innovations have been achieved in industrial design, architecture and manufacturing.

In this talk I'll show recent results we achieved in three research areas where geometry processing has proven to be particularly effective: remeshing for CAD and entertainment industries, digital fabrication, and architectural geometry.

IceSL: a GPU slicer for 3D printing

04 March 2015, 14:00 - Location: I-53

Speakers:
Jonas Martinez Bayona ( INRIA, Nancy - ALICE team)
Referent:
Paolo Cignoni

IceSL is a GPU-accelerated modeler and slicer for 3D printing developed by our group.

IceSL uses recent GPU algorithms to speed up the visualization and slicing process, and avoids the expensive mesh computations that most other software performs. IceSL can be freely downloaded from http://www.loria.fr/~slefebvr/icesl/.

I will also present a recent approach to offsetting solids in the context of fabrication. We define the offset solid as a sequence of morphological operations along line segments and propose two complementary implementations.

Grade, a sustainable repository of Linked Data

25 February 2015, 11:00 - Location: A-32

Speakers:
Fabio Simeoni ( FAO)
Referent:
Pasquale Pagano

Come and meet Grade, our latest and coolest tool at FAO to integrate data from local and remote sources into a sustainable repository of Linked Data. Why sustainable? Grade puts data management back into the hands of Data Managers:

  • Linked Data Managers can throw away their Java books and Linux how-tos; with Grade, management tasks and dissemination services can be defined and tested interactively;
  • General Data Managers need no longer depend on the experts; with Grade, they can navigate the data, monitor its usage, and even run maintenance tasks with little or no knowledge of Semantic Technologies.

We show how we use Grade to turn the Fishery Linked Open Data (FLOD) into a production-class repository, where generating, updating, and disseminating reference data alignments and mappings is just a few clicks away. Join us to hear the full story behind Grade, to see it in action, to learn about its short-term applications, and to discuss our vision for its longer-term future (or just come along if you're a Tech Head who has heard about Dart and Polymer and wants to see a full-fledged application built on HTML5 and Web Components).

Analysis Strategies for Configurable Systems

19 February 2015, 10:30 - Location: C-40

Speakers:
Alexander von Rhein (University of Passau, Germania)
Referent:
Stefania Gnesi

The advent of variability management and generator technology enables users to derive individual system variants from a given configurable system just based on a selection of desired configuration options. To cope with the possibly huge configuration space, researchers are developing analysis approaches that follow different strategies to incorporate (static) variability.

One popular strategy, often applied in practice, is to use sampling (i.e., analyzing just a few variants). While sampling reduces the analysis effort significantly, the information obtained is necessarily incomplete.

A second strategy is to look at the variable parts of the system and analyze each part separately (feature-based strategy). From these partial results one can often infer information on all variants of the entire system.

As a third strategy, researchers have begun to develop variability-aware analyses that analyze the code base of the configurable system directly (rather than the system variants or parts of the system), exploiting the similarities among individual variants to reduce analysis effort.

We designed a framework that uses these three strategies as dimensions. We analyze single apps (corresponding to features) and extract inter-app control-flow information. Based on the condensed information of multiple apps, we implement variability-aware taint propagation to determine which app combinations can leak private data to untrusted sites.

Cause learning from examples--challenges and opportunities

12 February 2015, 10:30 - Location: I-07a

Speakers:
Junying Zhang (Xidian University, Dept of Computer Science, Xidian, China)
Referent:
Ercan Kuruoglu

Cause learning from examples is an urgent call and a pathway towards understanding the deepest insights of some scientific problems. A central question is to learn substantial causes which must be a good match to ground truth. Although there is no agreed definition of what to learn - causes - we notice that causes objectively exist, independent of datasets and techniques. The former is a vital clue for learning, while the latter requires that the techniques adopted be hypothesis-free. Learning causes poses great challenges in what to learn, how to learn, and how to evaluate what is learnt. These challenges are wonderful opportunities to promote learning from an engineering practice to a science.

Studying properties of a random variable in dual domains

10 February 2015, 11:00 - Location: C-40

Speakers:
Junying Zhang (Xidian University, Dept of Computer Science, Xidian, China)
Referent:
Ercan Kuruoglu

Studying the fundamental properties of a random variable is a key step towards its applications. Here we provide a paradigm of study in the density domain, in the characteristic domain, and in the transfer between the domains, where density-domain study is based on the probability density function (pdf) and characteristic-domain study is based on the characteristic function amplitude (cfa) of the variable.

Novel quantities are introduced: dentropy and centropy. The maximum entropy principle, with unknowns embraced in constraints, yields striking results when applied to either domain with these quantities: the stable/GGD variable is the maximum dentropy/centropy solution. Based on these results, stable and GGD references of a variable are defined, and non-stability and non-GGD measures are presented.

Property duality is found for variables with the same representation form, for one in the density domain and for the other in the characteristic domain. The dualities concern statistics, references, measures, projections/directions, mixture models and so forth. This makes it promising to study a variable from its dual. The paradigm stipulates a new kind of language in probability theory and practice for gaining insights into a variable.

Distributed Word Embeddings on Data Streams

03 February 2015, 10:30 - Location: A-27

Speakers:
Giacomo Berardi
Referent:
Andrea Esuli

Word representation via numerical vectors (word embeddings) is a central topic in natural language processing. The recent approaches for computing word embeddings from text corpora (e.g., word2vec) have gained popularity for their efficiency in handling huge data sets and for the quality of the word representations. The concept of representing items according to the context in which they appear can be extended to scenarios beyond natural language. In other applications the data can be very different from text, as can the shape and the number of items to represent. In this work we develop a word embedding application with two goals in mind: (i) we want to learn the embeddings from a data stream, thus we have to tackle the time dimension and the possibly infinite size of the data; (ii) we want to scale and distribute the whole process on multiple machines. We show the architecture and some preliminary results of a word2vec implementation following our constraints. Results are promising in terms of efficacy and future developments of the application.
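Goal (i), learning from a stream, can be illustrated with a minimal skip-gram-with-negative-sampling sketch that creates and updates vectors as items arrive, never assuming a fixed vocabulary. This is an illustrative toy under simplifying assumptions, not the distributed implementation the talk presents; all names and hyperparameters are hypothetical.

```python
import numpy as np

class StreamingSGNS:
    """Toy streaming skip-gram with negative sampling: a vector is created
    the first time an item is seen and updated on every incoming window."""

    def __init__(self, dim=32, lr=0.05, negatives=5, seed=0):
        self.dim, self.lr, self.neg = dim, lr, negatives
        self.rng = np.random.default_rng(seed)
        self.vec, self.ctx = {}, {}  # target and context embeddings

    def _ensure(self, w):
        if w not in self.vec:
            self.vec[w] = (self.rng.random(self.dim) - 0.5) / self.dim
            self.ctx[w] = np.zeros(self.dim)

    def update(self, target, context):
        """SGD step on one positive pair plus sampled negatives."""
        self._ensure(target); self._ensure(context)
        seen = list(self.vec)
        negs = [w for w in self.rng.choice(seen, size=min(self.neg, len(seen)))
                if w != context]
        for w, label in [(context, 1.0)] + [(n, 0.0) for n in negs]:
            v, c = self.vec[target], self.ctx[w]
            z = 1.0 / (1.0 + np.exp(-(v @ c)))        # sigmoid score
            g = self.lr * (label - z)                  # gradient scale
            self.vec[target] = v + g * c               # use old values for both
            self.ctx[w] = c + g * v

    def consume(self, stream, window=2):
        """Process an (unbounded) stream, pairing each item with a sliding
        window of recent items."""
        buf = []
        for item in stream:
            for prev in buf[-window:]:
                self.update(item, prev)
                self.update(prev, item)
            buf.append(item)
```

Distributing this process (goal ii) would additionally require sharding the embedding tables and synchronising updates across machines, which the sketch deliberately leaves out.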

Explaining mobile application recommendation

03 February 2015, 11:00 - Location: A-27

Speakers:
Cristina Muntean
Referent:
Andrea Esuli

As previously demonstrated, given the right incentives and explanations, users are more likely to adopt a recommended product (application, item, etc.). In this work we wish to determine the most influential factors in mobile application adoption when a user is faced with several recommendations. To do so, we exploit the mobile log retrieved by Aviate over a period of 3 months to confirm or refute hypotheses regarding application adoption. We investigate features like brand popularity, the frequency of use of an application, the popularity of an application in a certain area (near the user's workplace, home or city), whether it belongs to the categories the user is most interested in, and whether the application is trending. When explaining a certain recommendation we face certain limitations: we cannot present the user with all available motivations, so deriving the most influential ones is essential and can directly influence adoption.

Unsupervised Induction of Relations within a Reconstruction Minimization Framework

03 February 2015, 11:30 - Location: A-27

Speakers:
Diego Marcheggiani
Referent:
Andrea Esuli

Relation extraction is the task that aims to extract structured information between elements from unstructured sources such as natural language text. This helps when one wants to create a knowledge graph in which the nodes are entities (person, organization, event) and the edges that connect the nodes are the relations between entities (works_for, attend, CEO_of). In this work we approached the task of relation extraction with a novel, unsupervised model that induces relations starting from raw text. The method primarily consists of two components: (i) an encoding component which predicts the relation between two found entities given a rich set of syntactic and semantic features, and (ii) a reconstruction component, expressed as a tensor factorization model, which relies on the relations predicted by (i) to predict back the argument fillers (as word embeddings). The entire model can be seen as an autoencoder that, instead of learning the model parameters from labeled data, learns the best label that allows the reconstruction component to reconstruct the input.
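The encode-then-reconstruct pairing can be sketched as a toy forward pass: an encoder scores relation labels from the argument embeddings, and a per-relation matrix (a slice of the reconstruction tensor) tries to rebuild each argument from the other; the chosen label is the one that reconstructs best. This is a strong simplification (the actual model learns both components jointly, and all names and shapes here are hypothetical).

```python
import numpy as np

def induce_relation(e1, e2, n_relations, enc_w, dec_t):
    """Toy inference for encoder/reconstruction relation induction.

    e1, e2   : argument word embeddings, shape (d,)
    enc_w    : encoder weights, shape (n_relations, 2d)
    dec_t    : reconstruction tensor, shape (n_relations, d, d),
               one matrix per relation label
    Returns the label whose reconstruction error, penalised by the
    encoder's (negative log) posterior, is lowest.
    """
    feats = np.concatenate([e1, e2])
    scores = enc_w @ feats                              # encoder scores
    post = np.exp(scores - scores.max())
    post /= post.sum()                                  # softmax posterior
    # Reconstruct each argument from the other through the relation slice.
    errs = np.array([np.linalg.norm(e1 - dec_t[r] @ e2) +
                     np.linalg.norm(e2 - dec_t[r].T @ e1)
                     for r in range(n_relations)])
    # Mixing raw error with -log posterior is a toy objective for the demo.
    return int(np.argmin(errs - np.log(post + 1e-9)))
```

Training would backpropagate the reconstruction loss through both the tensor slices and the encoder, which is exactly the autoencoder view described in the abstract.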

Conquering the Combinatorial Explosion: Analyzing Variable Software

11 December 2014, 11:00 - Location: C-40

Speakers:
Sven Apel (Chair of Software Product Lines, Department of Informatics and Mathematics, University of Passau, Germany )
Referent:
Maurice ter Beek

Variability is everywhere, and software is no exception: It is difficult to imagine any kind of non-trivial software system that is not variable or configurable. Beside immediate benefits, such as mass customization, variability introduces an additional dimension of complexity that poses new challenges for software engineering.

I will provide an overview of recent work on efficiently analyzing and understanding variable software systems. In particular, I will categorize and compare different strategies to incorporate variability during analysis and to conquer the combinatorial explosion of the configuration space. Furthermore, I will report on success stories of applying different kinds of analyses to variable software as well as on potential pitfalls. At the heart of the problem, I will relate issues of developing and analyzing variable software systems to the infamous feature-interaction problem and its importance for further developments in this area.

From the Theory to the Product: Building a Music Locker

09 December 2014, 15:00 - Location: A-30

Speakers:
Edgar Chavez (Centro de Investigación Científica y de Educación Superior de Ensenada, CICESE (Mexico))
Referent:
Giuseppe Amato

In this talk we explain the concept of a music locker, the services offered, and the challenges of its construction. We show how a simple concept requires substantial advances in algorithms and data structures to compete, with great advantage, against giant companies with practically infinite resources.

Data driven small community media and related services for community self-organization

12 November 2014, 10:30 - Location: C-29

Speakers:
Lukács Gergely (Faculty of Information Technology and Bionics Pázmány Péter, Catholic University, Budapest, Hungary)
Referent:
Mirco Nanni

The primary goal of this ongoing project is to strengthen communities, especially local communities, through utilizing the potential of smartphones. One key element of the system is audio based community media (low power radio station/small community media framework). Other modules include social carpooling and easy-to-access markets for agricultural and second-hand products.
Data processing is a key element of the system and of the related research. For playlist generation, audio and textual data processing and mining are required. For the information services part, spatio-temporal data is of primary importance. The final goal is engineering social networks, where network data plays an important role.

A Quantified Modal Logic for the Specification and Verification of Software Systems

22 October 2014, 11:00 - Location: B-76

Speakers:
Fabio Gadducci (Dipartimento di Informatica - Università di Pisa)
Referent:
Vincenzo Ciancia

Quantified modal logics (possibly extended with fixpoint operators) combine the modal operators with (existential) quantifiers: their intertwining allows for reasoning about the possible behavior of individual components within a software system. We present an extended Kripke semantics for such logics, based on labeled transition systems whose states are algebras and transitions are partial homomorphisms. Open formulae are interpreted over sets of state assignments (families of partial substitutions, associating formula variables to state components). Our proposal allows us to model and reason about the creation and deletion of components, as well as the merging of components, and it avoids the limitations of existing approaches. The talk is rounded up with some considerations about model checking and approximation techniques.

Automatic Scene Understanding in the Underwater Environment

17 October 2014, 11:00 - Location: C-29

Speakers:
Marco Reggiannini
Referent:
Andrea Esuli

A marine survey is typically performed by multi-sensor platforms that capture data (e.g. optical and acoustic) during experimental missions and implement suitable data analysis algorithms. These procedures endow the platform with the ability to understand the environment without human supervision.

In this framework, a main goal of the R&D activity carried out is the recognition, possibly in real time, of objects located in the sensed environment.

To this aim, the different signals coming from the sensors typically installed on board oceanographic devices, such as autonomous underwater vehicles or remotely operated vehicles, need to be integrated with high-level machine learning processes in order to carry out data understanding. The most relevant issues involve i) the recognition of meaningful patterns in the signals, ii) the use of computer vision algorithms to reconstruct the 3D morphology of the scene, iii) the creation of maps of the scene, and iv) the improvement of recognition performance by integrating all the available information.

The described topics are borrowed from the oceanic engineering field but also represent an appealing tool for other marine operators. For example, underwater archaeologists are interested in exploiting multi-sensor platforms to map and safeguard wrecks and man-made objects scattered on the oceans' seabeds.

Discovering and Disambiguating Named Entities in Text

06 October 2014, 16:00 - Location: C-29

Speakers:
Johannes Hoffart (Max-Planck-Institut für Informatik)
Referent:
Diego Ceccarelli

Disambiguating named entities in natural language texts maps ambiguous names to canonical entities registered in a knowledge base such as DBpedia, Freebase, or YAGO. Knowing the specific entity is an important asset for several other tasks, e.g. entity-based information retrieval or higher-level information extraction.

In this talk I will cover three aspects of entity disambiguation:

1. Entity disambiguation to Wikipedia-derived knowledge bases. The approach to this problem uses several ingredients: the prior probability of an entity being mentioned, the similarity between the context of the mention in the text and an entity, as well as the coherence among the entities. Using a fast graph algorithm, the disambiguation is solved by joint inference.

2. Semantic relatedness for entity disambiguation, or how to go beyond Wikipedia.
Extending the disambiguation method, we present a novel and highly efficient measure to compute the semantic coherence between entities based on keyphrases. This measure is especially powerful for long-tail entities in Wikipedia or for knowledge bases that do not interlink their entities the way Wikipedia does.

3. Discovering emerging entities, or how to go to the real world. Wikipedia and knowledge bases can never be complete due to the dynamics of the ever-changing world: new companies are formed every day, new songs are composed every minute. To keep up with the real world’s entities, we introduce a method to explicitly model the out-of-knowledge-base entities, enabling a more robust discovery of previously unseen entities.
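The three ingredients of point 1 (mention-entity prior, context similarity, and entity coherence) can be combined into a single joint objective. The sketch below does this by exhaustive search for illustration only; the actual system uses a fast graph algorithm for joint inference, and all names and weights here are hypothetical.

```python
from itertools import product

def disambiguate(mentions, candidates, prior, ctx_sim, coherence,
                 w=(0.3, 0.3, 0.4)):
    """Choose one entity per mention by maximising a weighted sum of the
    mention-entity prior, the mention-context similarity, and the pairwise
    coherence between the chosen entities. Exhaustive over the joint space;
    real systems replace this with fast graph algorithms."""
    best, best_score = None, float("-inf")
    for assign in product(*(candidates[m] for m in mentions)):
        score = sum(w[0] * prior[(m, e)] + w[1] * ctx_sim[(m, e)]
                    for m, e in zip(mentions, assign))
        score += w[2] * sum(coherence[frozenset((a, b))]
                            for i, a in enumerate(assign)
                            for b in assign[i + 1:])
        if score > best_score:
            best, best_score = dict(zip(mentions, assign)), score
    return best
```

The coherence term is what makes the inference joint: a mention's locally second-best candidate can win when it fits better with the other mentions' entities.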

Speaker bio:
Johannes Hoffart is a PhD student at the Databases and Information Systems group at the Max Planck Institute for Informatics. His current research focus is the linking of unstructured text to structured knowledge bases by disambiguating named entities, as well as the use of entities and knowledge bases in information retrieval tasks.

3D Shape Retrieval with a tree graph representation based on the Autodiffusion function topology

12 September 2014, 11:00 - Location: C-29

Speakers:
Valeria Garro PhD (Università degli Studi di Verona )
Referent:
Fabio Ganovelli

In this seminar I will give a summary of my past and current research and in particular I will focus on my recent work on a shape descriptor for 3D shape retrieval with texture information. I will present a method for shape matching based on a tree representation built upon the scale space analysis of maxima of the Autodiffusion function (ADF). By coupling maxima of the ADF with the related basins of attraction, it is possible to link the information at different scales encoding spatial relationships in a tree structure. Furthermore, texture information can be easily included in the descriptor by adding regional color histograms to the node attributes of the tree. Dedicated graph kernels have been designed to evaluate shape dissimilarity from the obtained representations using both geometric and color information. Experiments performed on the SHREC 2013 non-rigid textured dataset showed promising results.

Adaptive clipping of splats to models with sharp features

23 July 2014, 14:00 - Location: I-07a

Speakers:
Rafael Ivo (Dottorando Universidade Federal do Ceará (Brasil) )
Referent:
Roberto Scopigno

Splat-based models are a good representation because of their absence of topology, which makes complex modeling operations easier while keeping the same approximation ratio as triangular meshes. However, corners cannot be properly represented by splats without clipping them. We present a new method for clipping splats in models with sharp features. Each splat is an ellipse equipped with a few parameters that define how the ellipse can be clipped against a two-dimensional rational Bezier curve, so it can be used for all surfaces that show a large number of edge features and different sampling rates around them. The simple and uniform data used to define the clipping curve makes the GPU implementation easy. We designed and implemented an automatic computation of the clipping curves and a pipeline for sampling a generic surface with splats and rendering it. We show how this technique outperforms previous clipping techniques in precision for objects such as mechanical parts and CAD-like models while keeping the rendering speed.

Short Bio: Rafael Ivo is a PhD student at the Universidade Federal do Ceará (Fortaleza, Brazil), currently a visiting student at CNR-ISTI (Visual Computing Lab).

The Role of User Location in Personalisation of Information Retrieval Systems

23 July 2014, 15:00 - Location: C-29

Speakers:
Andrey Rikitianskly (Università della Svizzera Italiana)
Referent:
Raffaele Perego

Information Retrieval (IR) systems aim at assisting users in finding information among the huge variety of resources available on the Web. While traditional IR systems, characterized by a "one size fits all" approach, provide the same list of results for the same query submitted by different users, personalised IR systems tailor search results to a particular user based on his/her interests. However, users' preferences are heterogeneous and change dramatically in different situations. Information about the environment surrounding the user, or the user's context, can be usefully exploited to improve search effectiveness. The goal of my research is to investigate the benefit of using contextual information for the personalisation of IR systems. In particular, I studied how to leverage the user's location to personalise recommender systems and mobile search. In the area of recommender systems, I investigated the problem of suggesting contextually relevant places to a user visiting a new city, based on his/her preferences and the location of the city. Based on TREC Contextual Suggestion track evaluations, I demonstrated that my system not only significantly outperforms a baseline method but also performs very well in comparison to other runs submitted to the track, managing to achieve the best results in nearly half of all test contexts.
In the mobile search area, I proposed a novel approach for location-based personalization of mobile query auto-completion (QAC). Experiments on large-scale mobile query logs showed significant improvements of the proposed technique compared to the state-of-the-art QAC approach based on each user's short- and long-term search histories.
The results for both proposed IR systems demonstrated that the location of the user plays an important role in the personalization of QAC as well as of place recommender systems. The next step of my research might be devoted to further aspects of geographical context for mobile search. In particular, it would be useful to take into account metadata about organizations/places near the user's location. Another possible direction is to investigate a mechanism for predicting the possible gain from location-based personalization before applying it.

Towards Green Information Retrieval

18 July 2014, 11:00 - Location: C-40

Speakers:
Ana Freire (Yahoo Research Lab Barcelona )
Referent:
Raffaele Perego

Web search engines have to deal with a rapid increase of information, demanded by high incoming query traffic. This situation has driven companies to build geographically distributed data centres housing thousands of computers, consuming enormous amounts of electricity and requiring a huge infrastructure around. At this scale, even minor efficiency improvements result in large financial savings.
This seminar will introduce the problem of Green Information Technologies (Green IT) and, more concretely, Green Information Retrieval (Green IR), by exploring the several (still few) existing approaches for developing sustainable search engine data centres. The talk mainly focuses on recent models that propose using only the necessary number of machines for solving queries, depending on the incoming query traffic. These models reduce the environmental impact and, consequently, increase the financial savings.

Francesco Banterle: Acquisition, processing and display of High Dynamic Range Data in the Rendering Pipeline
Luca Pappalardo: Human Mobility, Social Networks, and Economic Development
Rossano Venturini: Succinct Data Structures

02 July 2014, 10:30 - Location: A-27

Referent:
Andrea Esuli

In the "Young Research(ers) Workshop 2/2", three of the six winners of the ISTI Young Researcher Award 2014 ( http://www.isti.cnr.it/news/yawards.php ) will present a selection of the results of their research activity.
All staff are invited, with particular regard to the young researchers of the institute.

--------------------------------

Programme of the Young Research(ers) Workshop 2/2:

Speaker: Francesco Banterle
Title: Acquisition, processing and display of High Dynamic Range Data in the Rendering Pipeline
Abstract: In this talk, I will give an overview of how to capture, process, and display HDR images within a rendering pipeline, through recent works from my research. HDR data are important because they are becoming available to consumers in an efficient and easy way. For example, modern mobile phones (e.g. iPhone, Nexus, etc.) can shoot in HDR, producing compelling pictures. HDR imaging is not only about having nice pictures: it is about capturing real-world colors and luminance values without overexposed and underexposed pixels. This information is very valuable for better processing and for increasing realism in applications such as relighting.

-----

Speaker: Luca Pappalardo
Title: Human Mobility, Social Networks, and Economic Development
Abstract: Previous investigations have proposed the existence of a positive association between social relations and economic opportunities at the individual level. Thanks to the availability of new (big) data sources, it has recently become possible to empirically verify the persistence of this relationship (i) at a more general community level and (ii) within larger territories such as, for instance, nations. In this work we expand this line of research to the country of France. Based on widely gathered CDR data and the smallest possible census-level data, we investigate the specificities of the relationship between social network diversity and several socio-economic indicators. In a second part, we challenge the idea that social network diversity is the best, let alone the only, structural correlate of socio-economic development. We introduce metrics for individual mobility diversity based on the spatial traces of the CDR data and evaluate their relationship with the same socio-economic indicators. Our findings show that individual mobility diversity exhibits a stronger association with several socio-economic indicators than social network diversity does. This challenges us to question the influence of social network diversity on economic development from a more multi-dimensional perspective.

-----

Speaker: Rossano Venturini
Title: Succinct Data Structures
Abstract: The last few years have seen intensive research on so-called succinct and compressed data structures. These data structures mimic the operations of their classical counterparts within a comparable time complexity while requiring much less space. This goal is achieved by carefully combining ideas from both the algorithmic and the information theory fields. Even though this area of research is relatively new, a large fraction of classical data structures have succinct counterparts, and, since most of them are also of practical interest, many authors are successfully applying the underlying ideas to solve problems in other, more applicative, fields of research. In this talk we will introduce this field of research by presenting succinct solutions to represent sets of integers, sets of points, graphs and strings, together with some practical problems in which they play a central role.

Renewal intermittency in complex systems

02 July 2014, 11:00 - Location: C-29

Speakers:
Paolo Paradisi
Referent:
Umberto Barcaro

The interest towards self-organized and cooperative systems has rapidly increased in recent years. This is related to the increasing interest in the evolution of complex networks, such as the Web and, more generally, social dynamics (not only associated with web applications), but also biological applications such as neural networks, human brain dynamics and metabolic networks in the cell. This hot research field is sometimes referred to as complexity science.
A fully accepted definition of "complex system" does not yet exist, but there is some agreement on a few general features. A complex system is typically made of many individual units with strong non-linear interactions, and the global dynamics of this multi-component system is characterized by emergent properties, i.e., the emergence of cooperative, self-organized, coherent structures that are hardly explained in terms of the microscopic dynamics. Thus, the typical approach focuses on emergent properties and their dynamical evolution.
Along this line, a crucial property that has been observed is the meta-stability of the emerging self-organized structures; that is, the dynamics of a complex system is characterized by a birth-death process of cooperation [Allegrini et al., 2009]. In time series analysis, this is mathematically described in the framework of point processes and, in particular, of renewal theory [Cox, 1962]. A renewal process is a sequence of critical short-time events with abrupt memory decays, thus dividing the time series into separate segments with long-range memory [Paradisi et al., 2013; Allegrini et al., 2013; Fingelkurts et al., 2008]. As a consequence, the inter-event times are statistically distributed according to an inverse power-law, a condition also denoted as fractal intermittency. A sequence of renewal events with a fractal distribution of inter-event times is associated with anomalous diffusion processes.
Here we will show how simple random walks driven by complex (i.e., fractal intermittent) events generate anomalous diffusion and long-range correlation despite the presence of memory-erasing events in the time series. Then, we discuss different approaches for the statistical characterization of diffusion scaling in time series with intermittent events. We discuss the robustness of the diffusion scaling generated by event-driven random walks with respect to the presence of noisy events, here modelled as events with Poisson statistics, as such events are not able to generate anomalous diffusion, but only normal diffusion (i.e., variance increasing linearly in time).

[1] Allegrini et al., Physical Review E 80, 061914 (2009).
[2] D.R. Cox, Renewal Theory, Methuen, London (1962).
[3] Paradisi et al., AIP Conf. Proc. 1510, 151-161 (2013).
[4] Allegrini et al., Chaos Solit. Fract. 55, 32-43 (2013).
[5] Fingelkurts and Fingelkurts, Open Neuroimaging J. 2, 73-93 (2008).
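As a toy illustration of the ideas above, the following sketch simulates an event-driven random walk with inverse power-law inter-event times (fractal intermittency): the walker takes a step only at renewal events and stays put in between. It is a minimal illustrative model, not the analysis methods of the seminar.

```python
import random

def pareto_waiting_time(mu, t0=1.0):
    """Inter-event time with survival P(tau > t) = (t0/t)**(mu-1) for t >= t0,
    i.e. density ~ tau**(-mu); heavy-tailed (fractal) for mu < 3."""
    u = random.random()
    return t0 * u ** (-1.0 / (mu - 1.0))

def event_driven_walk(n_events, mu):
    """Random walk that takes a +/-1 step only at renewal events;
    between events the walker stays put (intermittency)."""
    t, x = 0.0, 0.0
    trajectory = [(t, x)]
    for _ in range(n_events):
        t += pareto_waiting_time(mu)
        x += random.choice((-1.0, 1.0))
        trajectory.append((t, x))
    return trajectory
```

For mu < 3 the inter-event times have diverging moments, which is what drives the anomalous diffusion discussed in the seminar; replacing the power-law waiting times with exponential (Poisson) ones recovers normal diffusion.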

Gastrin releasing peptide attenuates reconsolidation of conditioned fear memory in rats

02 July 2014, 15:45 - Location: C-29

Speakers:
Anthony Murkar (Trent University, Canada)
Referent:
Umberto Barcaro

Anthony Murkar is one of the four Canadian graduate students who are performing a two-week internship activity at the Signals and Images Lab. He graduated in psychology and is carrying out research in the field of cognitive sciences. In this short seminar, he will present recent results obtained by applying quantitative methods in experimental neurosciences.

Assessing the effects of meditation and sleep mentation therapy in recovering alcohol and drug addicts

02 July 2014, 15:00 - Location: C-29

Speakers:
Nicolle Miller (Trent University, Canada)
Referent:
Umberto Barcaro

Nicolle Miller is one of the four Canadian graduate students who are performing a two-week internship activity at the Signals and Images Lab. She graduated in psychology and is carrying out research in the field of cognitive sciences. In this short seminar, she will present recent results obtained by means of textual statistical analysis of mentation reports, in particular dream reports.

The effects of gratitude journaling on the vicissitudes of mood in recovering addicts and the role of dreams as predictive variables

01 July 2014, 15:45 - Location: C-29

Speakers:
Patrick Fox (Trent University, Canada)
Referent:
Umberto Barcaro

Patrick Fox is one of the four Canadian graduate students who are performing a two-week internship activity at the Signals and Images Lab. He graduated in psychology and is carrying out research in the field of cognitive sciences. In this short seminar, he will present recent results obtained by means of textual statistical analysis of mentation reports, in particular dream reports.

Ontogenetic patterns and gender dimensions in the dreams of Canadians

01 July 2014, 15:00 - Location: C-29

Speakers:
Allyson Dale (Ottawa University, Canada)
Referent:
Umberto Barcaro

Allyson Dale is one of the four Canadian graduate students who are performing a two-week internship activity at the Signals and Images Lab. She graduated in psychology and is carrying out research in the field of cognitive sciences. In this short seminar, she will present recent results obtained by means of textual statistical analysis of mentation reports, in particular dream reports.

Requirements Engineering for Cloud Applications

25 June 2014, 11:00 - Location: C-40

Speakers:
Ana Sofia Zalazar (Universidad Tecnologica Nacional, Argentina)
Referent:
Alessio Ferrari

Cloud computing is a new business paradigm managed through the Internet, where different providers offer their services using scalable virtualization. Service selection depends on the service level agreement, a type of contract signed between providers and consumers that identifies functional and quality parameters of the services. Because of the stochastic and dynamic nature of cloud contexts, there is no simple procedure for requirements specification in the software-as-a-service (SaaS) model. Analysts and designers must consider new dimensions (such as legal and financial aspects) in cloud contexts. The thesis objective is to find requirements engineering artefacts and tools to represent each dimension of cloud services. The objective of the short stay is to study how the requirements models for desktop applications have to be transformed and enriched when the applications are moved to the cloud, considering the different dimensions of cloud services.

Mean-Field approximation and Quasi-Equilibrium reduction of Markov Population Models

19 June 2014, 10:15 - Location: C-29

Speakers:
Rytis Paskauskas
Referent:
Mieke Massink

Markov Population Models (MPMs) are a commonly used framework to describe stochastic systems. Their exact analysis is not feasible in most cases because of the so-called state space explosion problem. Approximations are usually sought, often with the goal of reducing the number of variables; among them, the mean-field limit and the quasi-equilibrium reduction stand out. The basic principle of the quasi-equilibrium reduction is the separation of temporal scales, a property found in MPMs that describe concurrent processes whose turnover times vary over a wide range of scales. The quasi-equilibrium reduction eliminates the fast scales by averaging. Understanding this averaging property is important when a model is nonlinear: sometimes, multiscale effects and nonlinearities add up in unpredictable ways, with possibly damaging outcomes for the underlying system. My goal in this talk is to present these effects in some detail. An informal discussion will be given, supplemented with examples from real life and from client-server modelling, and an emphasis will be placed on the potential pitfalls of using the mean-field reduction, and of mixing it with the quasi-equilibrium reduction, in the presence of events on multiple time scales.
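As a toy illustration of the mean-field limit (not one of the models discussed in the talk), the sketch below integrates the deterministic ODE dx/dt = beta * x * (1 - x) that arises from a simple SI-type population model, where x is the fraction of "active" individuals; the approximation becomes exact as the population size tends to infinity.

```python
def mean_field_si(beta, x0, dt, steps):
    """Euler integration of the mean-field limit dx/dt = beta * x * (1 - x)
    for the fraction x of 'active' individuals in an SI-type Markov
    population model (valid as the population size N -> infinity)."""
    x = x0
    trace = [x]
    for _ in range(steps):
        x += dt * beta * x * (1.0 - x)
        trace.append(x)
    return trace
```

The deterministic trace grows monotonically towards the fixed point x = 1; a stochastic simulation of the finite-N model would fluctuate around it, and it is precisely such discrepancies, amplified by nonlinearity and multiple time scales, that the talk examines.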

Transforming Big Data into Smart Data: Deriving Value via harnessing Volume, Variety, and Velocity using semantic techniques and technologies

09 June 2014, 11:00 - Location: A-27

Speakers:
Amit Sheth (Ohio Center of Excellence in Knowledge-enabled Computing - Wright state University – Dayton OH USA)
Referent:
Fausto Rabitti

Big Data has captured a lot of interest in industry, with the anticipation of better decisions, efficient organizations, and many new jobs. Much of the emphasis is on the challenges of the four Vs of Big Data: Volume, Variety, Velocity, and Veracity, and on technologies that handle volume, including storage and computational techniques to support analysis (Hadoop, NoSQL, MapReduce, etc.). However, the most important feature of Big Data, its raison d'etre, is none of these four Vs -- but Value. In this talk, I will put forward the concept of Smart Data, realized by extracting value from a variety of data, and show how Smart Data for the growing variety of Big Data (e.g., social, sensor/IoT, health care) enables a much larger class of applications that can benefit not just large companies but each individual. This requires organized ways to harness and overcome the four V-challenges. In particular, we will need to utilize metadata, employ semantics and intelligent processing, and go beyond the traditional reliance on ML and NLP.

For harnessing Volume, I will discuss the concept of Semantic Perception, that is, how to convert massive amounts of data into information, meaning, and insight useful for human decision-making. For dealing with Variety, I will discuss our experience in using agreement, represented in the form of ontologies, domain models, or vocabularies, to support semantic interoperability and integration. Lastly, for Velocity, I will discuss more recent work on Continuous Semantics, which seeks to use dynamically created models of new objects, concepts, and relationships and use them to better understand new cues in the data that capture rapidly evolving events and situations.

Smart Data applications in development at Kno.e.sis come from the domains of personalized health, energy, disaster response and smart city. I will present examples from a couple of these.

Specifying and Verifying Properties of Space

30 May 2014, 10:15 - Location: A-32

Speakers:
Vincenzo Ciancia
Referent:
Mieke Massink

The interplay between process behaviour and spatial aspects of computation has become more and more relevant in Computer Science, especially in the field of collective adaptive systems, but also, more generally, when dealing with systems distributed in physical space. Traditional verification techniques are well suited to analyse the temporal evolution of programs; properties of space are typically not explicitly taken into account. We propose a methodology to verify properties depending upon physical space. We define an appropriate logic, stemming from the tradition of topological interpretations of modal logics, dating back to earlier logicians such as Tarski, where modalities describe neighbourhood. We lift the topological definitions to a more general setting, also encompassing discrete, graph-based structures. We further extend the framework with a spatial until operator, and define an efficient model checking procedure, implemented in a proof-of-concept tool.

Young Research(ers) Workshop 1/2

21 May 2014, 10:30 - Location: A-27

Referent:
Andrea Esuli

In the "Young Research(ers) Workshop 1/2", three of the six winners of the ISTI Young Researcher Award 2014 ( http://www.isti.cnr.it/news/yawards.php ) will present a selection of the results of their research activity.

All staff are invited, with particular regard to the young researchers of the institute.

The Young Research(ers) Workshop 2/2, with the other three winners, will be held on 2 July.

Programme:

Speaker: Gianpaolo Coro

Title: Providing Statistical Analysis Algorithms as-a-Service by means of a distributed e-Infrastructure

Abstract:

In computational statistics, interest is growing towards modular and pluggable solutions that enable the repetition and validation of experiments and allow the exploitation of statistical algorithms in several contexts. Furthermore, such procedures are required to be remotely hosted and to "hide" the complexity of the calculations, especially in the case of cloud computations. For these reasons, the usual solution of supplying modular software libraries containing implementations of algorithms is giving way to Web Services, accessible through standard protocols, that host such implementations. E-Infrastructures make it possible to provide algorithms as-a-Service, to hide calculation complexity, and to enable the sharing of experimental results and parameters. This talk will present the so-called "gCube Statistical Manager" system, a distributed network of web services that supports distributed as well as Cloud processing. The system is meant to be used, and enriched with new algorithms, by several communities of practice of an e-Infrastructure. It hides the complexity of managing algorithm deployment and adaptation, and facilitates access to and sharing of results. For example, an algorithm developed by a biologist can be imported "as-is" into the platform, which will automatically enable multi-user access, distributed and Cloud processing, a user interface, and results sharing. The talk will demonstrate the system's effectiveness by means of practical examples from Computational Biology.

Speaker: Anna Monreale

Title: Privacy-by-design in Data Analytics and Social Mining

Abstract:

Privacy is an ever-growing concern in our society: the lack of reliable privacy safeguards in many current services and devices is one reason why their diffusion is often more limited than expected. Moreover, people are reluctant to provide true personal data unless it is absolutely necessary. Thus, privacy is becoming a fundamental aspect to take into account when one wants to use, publish and analyze data involving sensitive information. Unfortunately, it is increasingly hard to transform the data in a way that protects sensitive information: we live in the era of big data, characterized by unprecedented opportunities to sense, store and analyze social data describing human activities in great detail and resolution. As a result, privacy preservation simply cannot be accomplished by de-identification alone. In this talk we propose the privacy-by-design paradigm to develop technological frameworks for countering the threats of undesirable, unlawful effects of privacy violation, without obstructing the knowledge discovery opportunities of social mining and data analytics technologies. Our main idea is to inscribe privacy protection into the knowledge discovery technology by design, so that the analysis incorporates the relevant privacy requirements from the start.

Speaker: Giuseppe Ottaviano

Title: Inverted indexes compression with Elias-Fano

Abstract:

Inverted indexes are the core of every modern information retrieval system; to cope with steadily growing document corpora, such as the Web, it is crucial to represent them as efficiently as possible with respect to space and decompression speed. Recently a data structure traditionally used in the field of succinct data structures, namely the Elias-Fano representation of monotone sequences, has been applied to the compression of indexes, showing surprisingly excellent performance and good compression. This talk will start by briefly reviewing the Elias-Fano representation and its application to inverted indexes. We will then move on to a new representation based on a two-level Elias-Fano data structure, which significantly improves compression efficiency by (almost-)optimally partitioning the index, with only a negligible slow-down in decompression.

The digital journey for audiovisual contents

20 May 2014, 11:00 - Location: A-27

Speakers:
Daniel Teruggi (Institut National de l'Audiovisuel (Ina), Parigi, Francia )
Referent:
Carlo Meghini

The seminar will present the specific challenges of audiovisual content in terms of digital preservation. The focus will mainly be on digitization, migration, content research, and the many research aspects found in this area, through an analysis of the situation at Ina and in Europe.

Descriptive and Visual Approaches in the Computer-Based Representation of Text in the Humanities

16 April 2014, 14:30 - Location: C-29

Speakers:
Tobias Schweizer (Digital Humanities Lab University of Basel)
Referent:
Vittore Casarosa

In the Humanities, the de facto standard to represent texts with the help of a computer is a markup language called TEI (Text Encoding Initiative) based on XML (previously SGML). TEI/XML favors a single abstract perspective on text by focusing on categories like chapters and paragraphs, due to its roots in descriptive markup (designed by Goldfarb in the 70s at IBM).

However, today an enormous amount of digital images of handwritten and printed documents is available on the Internet, representing these sources with respect to their visual appearance. Philological disciplines are getting more and more interested in describing what these source documents look like, rather than just explicitly representing their textual content.

In the seminar a web-based tool for creating transcriptions will be introduced. It does not primarily rely on the principles of descriptive markup, but tries to take into consideration the spatial relations of manuscripts. Once an explicit interpretation of the text is made (separating textual information from the manuscript), it can be exported as TEI/XML.

Short CV of Tobias Schweizer: Tobias Schweizer studied history, German literature and computer science at the University of Basel. Since 2010 he has been an assistant and PhD student at the Digital Humanities Lab working on the development of SALSAH - a web-based virtual research environment for the humanities - and on the subject of digital editing. He will defend his PhD in April 2014.

Modeling for Entertainment industry: semi-automatic, patch-based quadrangulation

17 March 2014, 15:00 - Location: I-53

Speakers:
Giorgio Marcias
Referent:
Roberto Scopigno

We give an overview of ongoing research aimed at the design of a semi-automatic method for quad mesh generation. Quad meshes are mandatory in many applications in computer graphics: animation, FEM analysis, CAD modeling. In particular, the entertainment industry requires the geometry to satisfy specific constraints. The need for semi-automatic tools to convert triangle meshes (typically acquired from the real world) into quad-dominant meshes has been shown by recent works. The approach we propose is based on a two-phase process: 1) acquisition and classification of synthesized patches from a set of manually well-designed, artist-made quad meshes; 2) selection and stitching of the most appropriate patches, based on boundary constraints defined by the user.

Short Bio: Giorgio Marcias is a PhD student in the Visual Computing Laboratory, currently in his second year.

Management of a bike sharing system - an operations research perspective

26 February 2014, 11:00 - Location: C-40

Speakers:
Fabio Shoen (Dipartimento di Ingegneria dell'Informazione - Univ. degli Studi di Firenze)
Referent:
Stefania Gnesi

Modern bike sharing systems make it possible to monitor in real time the availability of bikes at the parking stations. Among the many management aspects to be considered, we will focus on the repositioning problem: the problem faced by the manager of the system, who would like to relocate bikes so that bikes are available where demand is likely, and empty places are available at the stations where users will deliver their bikes.
In order to solve the problem, two aspects need to be considered: first, a statistical analysis has to be performed in order to predict future needs for bikes and places; second, given a prediction, optimal tours for a set of relocation vehicles have to be planned.
In this talk I will introduce these topics and outline some algorithmic proposals.

Computational Analysis in Cultural Heritage Applications

17 February 2014, 11:30 - Location: C-40

Speakers:
Tim Weyrich (Department of Computer Science, University College London )
Referent:
Roberto Scopigno

Through the increasing availability of high-quality consumer hardware for advanced imaging tasks, digital imaging and scanning are gradually pervading general practice in cultural heritage preservation and archaeology. In most cases, however, imaging and scanning are predominantly means of documentation and archival, and digital processing ends with the creation of a digital image or 3D model. Using three projects as examples, the speaker will demonstrate how careful analysis of the underlying cultural-heritage questions allows for bespoke solutions that, through the joint development of imaging procedures, data analysis and visualisations, directly support conservators and humanities researchers in their work. Tim Weyrich will report on his experiences with fresco reconstruction at the Akrotiri Excavation, Santorini, on the reconstruction of fire-damaged parchment with London Metropolitan Archives, and on the analysis of Egyptian papyri with the Petrie Museum in London.

Short Bio:

Tim Weyrich is an Associate Professor in the Virtual Environments and Computer Graphics group in the Department of Computer Science, University College London, and co-founder and Associate Director of the UCL Centre for Digital Humanities. Prior to coming to UCL, Tim was a Postdoctoral Teaching Fellow at Princeton University, working in the Princeton Computer Graphics Group, a post he took after receiving his PhD from ETH Zurich, Switzerland, in 2006. His research interests are appearance modelling and fabrication, point-based graphics, 3D reconstruction, cultural heritage computing and digital humanities.

Single-material elasticity approximation

14 February 2014, 11:00 - Location: C-29

Speakers:
Luigi Malomo (Dipartimento d'Informatica - Università di Pisa)
Referent:
Andrea Esuli

3D printing is becoming a pervasive technology in a variety of applications. The main advantage of these solutions is that the structural complexity of the printed objects does not affect the time and cost of the printing process. 3D printing has democratized the concept of industrial production: indeed, today it is possible to use 3D printing to manufacture arbitrary shapes. One limitation of the existing solutions is that they allow manufacturing with a single material only (e.g. ABS, PLA, resin, etc.). This considerably reduces the application domains, because the creation of 3D-printed pieces with continuously varying physical properties is also desirable. Although multi-material 3D printers are increasingly available, they are extremely expensive and rely on a limited set of materials. After an overview of 3D printing technologies, we will propose a novel method that, using geometric patterns, aims to approximate the physical behavior of a given material with single-material 3D printing.

Evaluation of conceptual models for digital preservation

13 February 2014, 10:00 - Location: I-07a

Speakers:
Kumar Naresh (DILL (International Master in Digital Libraries Learning) )
Referent:
Carlo Meghini

Keeping audio-visual (AV) scientific content alive for the next generations is our responsibility and the need of the hour. With this in mind, various preservation initiatives are under way in Europe and around the world. One such project is Presto4U, which aims at automatically matching preservation tools with the needs of audio-visual applications for different communities of practice. It is based on a conceptual model called the "CoP Knowledge Schema", which needs to be evaluated in order to successfully achieve this aim. This seminar will focus on how this conceptual model will be evaluated.

Personalization and Contextualization in Query Auto Complete

17 December 2013, 11:00 - Location: C-29

Speakers:
Francesco Nidito (Microsoft BING)
Referent:
Patrizio Dazzi

Personalization and Contextualization are two somewhat different, but related, very interesting topics in research, and two of the most valuable "weapons" in the arsenal of any modern search engine. Personalization focuses on the user's demographics and past queries, and is meant to tailor the results to a particular user. On the other hand, Contextualization focuses on augmenting the query with pieces of information related to the actual context in which the query is typed. In this talk we will explore in more depth the differences between the two concepts and see some of their applications in modern search engines, with particular reference to the Query Auto Complete feature of a search engine.
DISCLAIMER: This talk is entirely based on publicly available research studies and publications made by Microsoft Research.

Process Ordering in a Process Calculus for Spatially-Explicit Ecological Models

12 November 2013, 11:00 - Location: A-32

Speakers:
Mauricio Toro Bermudez (ERCIM- Postdoctoral Researcher at the Computer Science Department, University of Cyprus.)
Referent:
Stefania Gnesi

During my postdoc, we extended PALPS, a process calculus proposed for the spatially-explicit, individual-based modeling of ecological systems, with the notion of a policy. A policy is an entity for specifying orderings between the different activities within a system. It is defined externally to a PALPS model as a partial order that prescribes the precedence order between the activities of the individuals of which the model is comprised. The motivation for introducing policies is twofold: on the one hand, policies can help to reduce the state space of a model; on the other hand, they are useful for exploring the behavior of an ecosystem under different assumptions on the ordering of events within the system.
To take account of policies, we refine the semantics of PALPS via a transition relation which prunes away executions that do not respect the defined policy. Furthermore, we propose a translation of PALPS into the probabilistic model checker PRISM.
We illustrate our framework by applying PRISM on PALPS models with policies for conducting simulation and reachability analysis.

ZigBee and practical applications of the protocol

08 November 2013, 14:00 - Location: C-29

Speakers:
Felice Bacco
Referent:
Erina Ferro

The seminar will focus on the ZigBee protocol and will be divided into two parts: an initial theoretical part describing the protocol, and a second, more practical part including a small hands-on programming session. The stack architecture will be discussed in detail in the first part, together with the protocol's core concepts. The functionality of each layer will be described, with particular attention to the security model. The Texas Instruments Z-STACK implementation is the subject of the second part of the seminar, with details on the programming environment and the APIs available to programmers. A small example application will be set up, with attention to the basic functionality of a ZigBee node and to debugging methods. A small live demo using a TI CC2531 dongle will be shown. Finally, preliminary field results on the use of ZigBee sensors in mobile settings will be shown and discussed.

The origin of heterogeneity in human mobility

25 October 2013, 11:00 - Location: C-29

Speakers:
Luca Pappalardo (Dipartimento d'Informatica - Università di Pisa)
Referent:
Andrea Esuli

Researchers recently discovered that traditional mobility models adapted from the observation of particles or animals (such as Brownian motion and Lévy flights), and more recently from the observation of dollar bills, are not suitable to describe people's movements. Indeed, at a global scale humans are characterized by a huge heterogeneity in the way they move, since a Pareto-like curve was observed in the distribution of the characteristic distance traveled by users, the so-called radius of gyration. Although these discoveries have doubtless shed light on interesting and fascinating aspects of human mobility, the origin of the observed patterns still remains unclear: Why do we move so differently? What are the factors that shape our mobility? What are the movements or the locations that mainly determine the mobility of an individual? Can we divide the population into categories of users with similar mobility characteristics? These issues need to be addressed if we want to understand key aspects of the complexity behind our society. In the current work, we exploit access to two big mobility datasets to present a data-driven study aimed at detecting and understanding the main features that shape the mobility of individuals. By using a dualistic approach that combines the strengths and weaknesses of network science and data mining, we show how the mobility of a significant portion of the population is characterized by patterns of commuting between different and distant mobility "hearts".
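For illustration, the radius of gyration of a user can be computed as the root-mean-square distance of the visited locations from their centre of mass; a minimal sketch with made-up coordinates:

```python
import math

def radius_of_gyration(points):
    """Root-mean-square distance of visited locations (x, y) from their centre of mass."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / n)

# A commuter oscillating between two locations 10 km apart
print(radius_of_gyration([(0, 0), (10, 0), (0, 0), (10, 0)]))  # 5.0
```

The heterogeneity discussed above refers to how this single per-user number is distributed over a whole population.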

Functional and Non-functional model-based test design

08 October 2013, 11:00 - Location: I-07a

Speakers:
Federico Toledo (Universidad de Castilla-La Mancha (UCLM), Spagna)
Referent:
Antonia Bertolino

The talk will report on a proposal for a black-box, model-based system testing approach, integrating functional and non-functional aspects.
The approach extends the UML-TP profile with concepts for designing non-functional tests, so that by transformation it is possible to generate test cases dealing with both functional and non-functional requirements.

Prefab: What If Anybody Could Modify Any Interface?

24 September 2013, 11:00 - Location: C-29

Speakers:
James Fogarty (University of Washington )
Referent:
Fabio Paternò

Widely-used interface toolkits currently stifle the progress and impact of user interface research. Advances are limited by both the rigidity of current interfaces and the fragmentation of applications among many different underlying toolkits. Many promising innovations therefore remain difficult or impossible to deploy.
Prefab examines pixels as a universal representation of the desktop. By reverse engineering the pixel-level appearance of interface elements, Prefab enables runtime interface modification without access to source and without cooperation from the underlying application. I will motivate this approach, present Prefab's pixel-based methods, and show several examples of runtime interface enhancements. This will include our implementation of the first general-purpose target-aware pointing enhancement, an idea proposed more than 15 years ago that has previously been considered impractical to actually deploy. I will conclude by discussing the potential of this work for catalyzing future interaction research and beginning to democratize our everyday interfaces.

Unsolved problems in search engines for electronic commerce

05 September 2013, 14:30 - Location: C-29

Speakers:
Alex Cozzi (eBay)
Referent:
Fabrizio Silvestri

The popularity and economic importance of web search engines have led to the development of several solutions that have found wide practical use. In my talk I will describe how these same techniques are often inadequate when applied to search problems in electronic commerce.

Short Bio:
Alex Cozzi has been part of the applied research group at eBay, in California, for three years. Previously he worked on machine learning applied to document relevance estimation at Yahoo!, and before that at IBM. His research interests are the application of statistical and machine learning techniques, and advanced programming languages.

Exploring the Detection of Geographic Communities, their Evolution and Applications

22 July 2013, 10:30 - Location: C-29

Speakers:
Cecilia Mascolo (Cambridge University)
Referent:
Fabrizio Silvestri

It is well-known that space plays an important role in social networks, with pairs of geographically close individuals being more likely to have a connection than those far apart. However, lack of available data has meant that the spatial properties of larger social groups are as yet less well-understood. Do social communities tend to be geographically tight, even in the age of easy long-distance communication? How is the mobility of individuals and the places they frequent related to the existence of these groups?

Thanks to information about users' fine-grained location from increasingly geographically-aware online social services, we can start to investigate these questions. We will present some results from our initial analysis of the spatial properties of social groups in location based online social networks, and discuss how the answers could not only be interesting from a sociological perspective, but also have potential applications in online social services as mobile computing continues to expand.

Biography: Dr. Cecilia Mascolo is a Reader in Mobile Systems in the Computer Laboratory, University of Cambridge. Her research interests are in human mobility modeling. She has published in the areas of mobile computing, mobility and social modeling, and sensor networking. Cecilia's research is funded by research councils as well as by industry, including Alcatel, Google, Intel, Microsoft and Samsung. Dr Mascolo has served as an organizing and programme committee member of many mobile, sensor systems and networking conferences and workshops, including Conext, COSN, KDD, Mobihoc, Mobicom, HotMobile, Percom, Sensys and Ubicomp. She is part of the editorial boards of IEEE Internet Computing, IEEE Pervasive Computing and IEEE Transactions on Mobile Computing. She teaches courses on mobile and sensor systems and an MPhil course on social and technological network analysis. More details are available at www.cl.cam.ac.uk/users/cm542.

Energy Efficient networking

28 June 2013, 14:00 - Location: C-29

Speakers:
Franco Davoli (DITEN-University of Genoa and CNIT - University of Genoa Research Unit - Coordinator of the Master Program in Multimedia Signal Processing and Telecommunication Networks)
Referent:
Erina Ferro

Great attention is being dedicated to energy efficiency in all Information and Communication Technology (ICT) sectors. In particular, besides data centers and mobile networks, where the concept has already gained a firm stand, fixed networks are starting to be structured and operated with energy-saving goals in mind. Network operators and Internet Service Providers (ISPs) are motivated by the need to reduce both CO2 emissions and operational costs (OPEX) in terms of energy. Among other initiatives, the ECONET project, a 3-year large-scale Integrating Project (IP) funded by the European Commission in the 7th Framework Programme, is investigating, developing and testing new capabilities for Future Internet devices that can enable the efficient management of power consumption, so as to strongly reduce the current network energy waste. By considering the current network structure and trends, traffic profiles and service forecasts, an evolutionary approach can be envisaged, with the goal of introducing novel green network-specific paradigms and concepts that would gradually lead to a reduction of the energy requirements of wired network equipment by 50% in the short to mid term (and by 80% in the long run) with respect to the business-as-usual (BAU) scenario. To this end, the main challenge will be to design, develop and test novel technologies, integrated control criteria and mechanisms for network equipment capable of enabling energy saving. While dynamically adapting network capacities and resources to current traffic loads and user needs, the control framework should at the same time ensure the satisfaction of end-to-end Quality of Service (QoS) requirements.
The talk will outline the vision of the ECONET project in this respect, with particular emphasis on dynamic adaptation (adaptive rate and low power idle) and smart sleeping techniques applied at the device and network level, and on the concept of Green Abstraction Layer (GAL), a standard interface to provide a common vision of heterogeneous data plane energy-aware hardware to the control plane.

DEMON: Uncovering Overlapping Communities with a Local-First Approach

24 June 2013, 11:00 - Location: C-29

Speakers:
Giulio Rossetti
Referent:
Andrea Esuli

Community discovery in complex networks is an interesting problem with a number of applications, especially in the knowledge extraction task in social and information networks. However, many large networks often lack a particular community organization at a global level. In these cases, traditional graph partitioning algorithms fail to let the latent knowledge embedded in modular structure emerge, because they impose a top-down global view of a network. DEMON is a simple local-first approach to community discovery, able to unveil the modular organization of real complex networks. This is achieved by democratically letting each node vote for the communities it sees surrounding it in its limited view of the global system, i.e. its ego neighborhood, using a label propagation algorithm; finally, the local communities are merged into a global collection.
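A toy sketch of the local-first idea (greatly simplified: here each node's ego neighbourhood is taken as one local community, whereas DEMON actually runs label propagation inside the "ego-minus-ego" network), on a hypothetical five-node graph:

```python
def ego_minus_ego(graph, node):
    """Induced subgraph on the neighbours of `node`, with `node` itself removed.
    `graph` is a dict mapping each node to the set of its neighbours."""
    return {n: graph[n] & graph[node] for n in graph[node]}

def merge(communities, epsilon=1.0):
    """Merge a local community into an existing one when a fraction >= epsilon
    of its members is already contained there (epsilon=1 -> full containment)."""
    result = []
    for c in sorted(communities, key=len, reverse=True):
        for r in result:
            if len(c & r) >= epsilon * len(c):
                r |= c
                break
        else:
            result.append(set(c))
    return result

g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
locals_ = [set(g[v]) for v in g]   # each ego neighbourhood as one local community
print(merge(locals_))
```

The final merge step is the "democratic" aggregation: each node votes locally, and overlapping local communities are combined into the global collection.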

NOTE: This seminar belongs to the series of seminars presented by the winners of the prize "Young researchers ISTI 2013". Giulio Rossetti placed second in the PhD student category.

Beyond Visual Computing: Interactive Auditory Display

14 June 2013, 11:00 - Location: A-32

Speakers:
Ming C. Lin (Department of Computer Science, University of North Carolina at Chapel Hill)
Referent:
Paolo Cignoni

Extending the frontier of visual computing, an interactive auditory display utilizes sound to communicate information to a user and offers an alternative means of visualization. By harnessing the sense of hearing, sound or audio rendering can further enhance a user’s experience in a multimodal virtual world. In addition to immersive environments, audio feedback can provide a natural and intuitive human-computer interface for many desktop applications such as computer games, online virtual worlds, visualization, simulation, and training.

In this talk, I will give an overview of our recent work on sound synthesis and sound propagation. These include generating realistic physically-based sounds from rigid body dynamic simulation and liquid sounds based on bubble resonance coupled with fluid simulators. I will also describe new and fast algorithms for sound propagation based on improved numerical techniques and fast geometric sound propagation based on extension of ray-tracing techniques. These algorithms improve the state of the art in sound rendering by at least one to two orders of magnitude and I will demonstrate their performance on interactive sound synthesis and propagation in complex, dynamic environments by utilizing the parallelism of many-core processors and GPUs commonly available on desktop systems.

Compressed Static Functions with Applications

13 June 2013, 14:30 - Location: C-29

Speakers:
Rossano Venturini
Referent:
Andrea Esuli

Given a set of integer keys from a bounded universe along with associated data, the dictionary problem asks to build a data structure able to answer efficiently two queries: membership and retrieval.

Membership has to tell whether a given element is in the dictionary or not; Retrieval has to return the data associated with the searched key.

This seminar will describe a recent result presented at ACM-SIAM SODA 2013 which provides time and space optimal solutions for three well-established relaxations of this basic problem: (Compressed) Static functions, Approximate membership and Relative membership.
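The two queries can be illustrated on Python's ordinary hash-based dict; note that the SODA 2013 result concerns time- and space-optimal compressed static structures for relaxations of this problem, so the snippet only fixes the interface, not the technique:

```python
# The dictionary problem's two queries, shown on a built-in dict.
data = {42: "foo", 7: "bar"}

def membership(key):
    return key in data       # is the key in the dictionary?

def retrieval(key):
    return data.get(key)     # data associated with the key, or None

print(membership(42), retrieval(42))  # True foo
```

The relaxations trade parts of this interface for space: for example, approximate membership may answer "yes" for a few keys outside the set.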

NOTE: This seminar belongs to the series of seminars presented by the winners of the prize "Young researchers ISTI 2013". Rossano Venturini placed third in the Young researcher category.

Growing Least Squares for Multi-scale surface analysis and editing

12 June 2013, 15:00 - Location: C-29

Speakers:
Nicolas Mellado (Manao Research Team, INRIA Bordeaux)

In the first part of the talk, a generalization of Algebraic Point Set Surfaces to scale-space analysis, called Growing Least Squares, will be presented. This new analysis framework provides tools to detect and characterize features of arbitrary shapes defined at multiple scales on point-set data. In the second part, work-in-progress applications to 3D model inspection, enhanced shading effects and multi-scale surface editing will be shown.

Bayes Theorem and Stochastic Process Prediction

11 June 2013, 14:30 - Location: C-29

Speakers:
Pietro Cassarà
Referent:
Francesco Potortì

Bayesian inference is recognized as a general framework for performing optimal stochastic process prediction. Fundamentally, it assumes that the uncertainty in our knowledge of the state of the observed system may be well represented by probabilities. Bayes’ theorem then provides the basic mechanism whereby measurements update these probabilities and, hence, our knowledge of the system state. For computer implementation of a Bayesian scheme, a representation of the probabilities must be selected. Various approaches have been developed, including Kalman filters, grid-based models, and particle filters.

This theory finds application in many technological fields, especially when a process must be predicted from a set of measurements. The theorem provides a convenient way to predict a process given a set of measurements, exploiting techniques such as the Kalman filter and hidden Markov models.

For this reason, over the years many efforts have been made to optimize implementation techniques for Bayes' theorem, through methods such as grid-based and particle filtering.

The aim of this talk is to clarify the theory behind Bayes' theorem and to show some use cases of the theorem in the field of networked systems.
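A minimal example of the Bayesian measurement update underlying all of these schemes, using a discrete (grid-like) representation of the probabilities and made-up numbers:

```python
def bayes_update(prior, likelihood):
    """One Bayesian measurement update over a discrete grid of states."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    evidence = sum(unnorm)                 # normalising constant P(observation)
    return [u / evidence for u in unnorm]

prior = [0.5, 0.5]          # P(state): two equally likely states
likelihood = [0.9, 0.2]     # P(observation | state), illustrative numbers
print(bayes_update(prior, likelihood))    # ≈ [0.82, 0.18]
```

Kalman filters apply the same update in closed form for Gaussian densities; grid-based and particle methods apply it to the representations sketched here.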

A Classification for Community Discovery Methods in Complex Networks

10 June 2013, 11:00 - Location: C-29

Speakers:
Michele Coscia
Referent:
Andrea Esuli

In the last few years many real-world networks have been found to show a so-called community structure organization. Much effort has been devoted in the literature to develop methods and algorithms that can efficiently highlight this hidden structure of the network, traditionally by partitioning the graph. Since network representation can be very complex and can contain different variants in the traditional graph model, each algorithm in the literature focuses on some of these properties and establishes, explicitly or implicitly, its own definition of community. According to this definition it then extracts the communities that are able to reflect only some of the features of real communities. The aim of this survey is to provide a manual for the community discovery problem. Given a meta definition of what a community in a social network is, our aim is to organize the main categories of community discovery based on their own definition of community. Given a desired definition of community and the features of a problem (size of network, direction of edges, multidimensionality, and so on) this review paper is designed to provide a set of approaches that researchers could focus on.

NOTE: This seminar is the sixth one of the series of six seminars presented by the winners of the prize "Young researchers ISTI 2013". Michele Coscia placed second in the Young Researcher category.

Perspectives of Human Mobility in Location-based Social Networks: Models and Applications

07 June 2013, 11:00 - Location: C-29

Speakers:
Anastasios Noulas (Cambridge University)
Referent:
Fabrizio Silvestri

In this talk I will be sharing my experience of mining, analysing and modeling user mobility in location-based social networks over the past couple of years. We will initially present an agent-based modeling approach that captures fundamental properties of human urban movement in 34 metropolitan cities. The work, based on a dataset comprised of millions of Foursquare user check-ins, will shed light on the role of geography and the spatial density of settlements in human mobility. Subsequently, we will consider two different mobile application scenarios. First we will formulate a mobile recommendation task where the goal is to accurately predict new places a user will visit in future time periods. We shall see how a random walk on a graph that connects users with places can offer good recommendations in an extremely sparse data context. Next, we shall formulate an even more challenging problem where we try to predict the next check-in of a user in real time. Exploiting a supervised learning approach that synchronously employs multiple layers of mobility data, we will show that accurate predictions of user whereabouts can be attained if models are trained appropriately. Finally, we will discuss the feedback loop between abstract models and applications and how mutual benefits emerge in two seemingly distant areas of research in human mobility.

Biography: Anastasios Noulas is a PhD candidate in the Computer Laboratory at Cambridge University. He holds an MEng in Computer Science (2009) from University College London. His research interests are focused on the analysis and modelling of human movement and geographic social networks, with techniques that span the areas of Complex Systems, Data Mining and applied Machine Learning. In 2012 he joined Telefonica Research in Spain for a research fellowship, participating in a project on Smart Cities that exploited Call Detail Records and data sourced from location-based services to infer activities in urban environments. As of 2013 he has joined the EPSRC project GALE, which aims at developing the next generation of mobile recommender systems and urban neighbourhood modelling for the provision of local knowledge to remote visitors of an area. More details are available at http://www.cl.cam.ac.uk/~an346/

Learning to Predict Tourists Movements

03 June 2013, 11:00 - Location: C-29

Speakers:
Cristina Muntean
Referent:
Andrea Esuli

Tourism applications have stirred increasing interest in recent years due to the intense use of mobile devices and location-based applications. Our outlook on this matter is directed towards the next point-of-interest (PoI) prediction task. We tackle the problem of predicting the “next” geographical position of a tourist, given her history (i.e., the prediction is made according to the tourist’s current trail), by means of supervised learning techniques.
We test our methods on three datasets built using geo-tagged pictures downloaded from Flickr, each collection corresponding to a popular touristic area. We adopt two popular Machine Learning methods, namely Gradient Boosted Regression Trees and Ranking SVM for learning to rank the next PoI, on the basis of an object space represented by a multi-dimensional feature vector, specifically designed for tourism related data. We define a set of 68 different features, broadly classified into two main categories, namely “Session” and “PoI”. Session features are meant to model the tourist behavior and capture concepts like groups of PoIs visited, distances among PoIs and other characteristics of a user session (trail). On the other hand, PoI features model the characteristics of a candidate PoI, also taking into account the past activities of the tourist.
We propose a thorough comparison with several methods considered state-of-the-art in touristic recommender and trail-prediction systems (WhereNext, Random Walk with Restart), as well as with a strong popularity baseline. As the experiments show, the methods we propose consistently outperform our baselines, by up to 300% in terms of prediction accuracy, and provide strong evidence of the performance and robustness of our solutions.

NOTE: This seminar is the fifth one of the series of six seminars presented by the winners of the prize "Young researchers ISTI 2013". Cristina Muntean placed third in the PhD student category.

Central Limit Approximation for Stochastic Model Checking

28 May 2013, 11:00 - Location: C-40

Speakers:
Roberta Lanziani (IMT Lucca)
Referent:
Mieke Massink

In this talk we investigate the use of Central Limit Approximation of Continuous Time Markov Chains to verify collective properties of large population models, describing the interaction of many similar individual agents. More precisely, we specify properties in terms of individual agents by means of deterministic timed automata with a single global clock (which cannot be reset), and then use the Central Limit Approximation to estimate the probability that a given fraction of agents satisfies the local specification.

A temporal logic approach to modular design of synthetic biological circuits

28 May 2013, 11:30 - Location: C-40

Speakers:
Laura Nenzi (IMT Lucca)
Referent:
Mieke Massink

We present a new approach for the design of a synthetic biological circuit whose behavior is specified in terms of signal temporal logic (STL) formulae. We first show how to characterize with STL formulae the input/output behavior of biological modules mimicking the classical logic gates (AND, NOT, OR). Hence, we provide the regions of the parameter space for which these specifications are satisfied. Given an STL specification of the target circuit to be designed and the networks of its constituent components, we propose a methodology to constrain the behavior of each module, then identifying the subset of the parameter space in which those constraints are satisfied, providing also a measure of the robustness of the target circuit design. This approach, which leverages recent results on the quantitative semantics of Signal Temporal Logic, is illustrated by synthesizing a biological implementation of a half-adder.

Using collective intelligence to detect natural language pragmatic ambiguities

27 May 2013, 11:00 - Location: C-29

Speakers:
Alessio Ferrari
Referent:
Andrea Esuli

We present a novel approach for pragmatic ambiguity detection in natural language (NL) requirements specifications defined for a specific application domain. Starting from a requirements specification, we use a Web-search engine to retrieve a set of documents focused on the same domain of the specification. From these domain-related documents, we extract different knowledge graphs, which are employed to analyse each requirement sentence looking for potential ambiguities. To this end, an algorithm has been developed that takes the concepts expressed in the sentence and searches for corresponding "concept paths" within each graph. The paths resulting from the traversal of each graph are compared and, if their overall similarity score is lower than a given threshold, the requirements specification sentence is considered ambiguous from the pragmatic point of view. A proof of concept is given throughout the presentation to illustrate the soundness of the proposed strategy.
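A hypothetical sketch of the concept-path comparison: find a path between two concepts of the sentence in each domain knowledge graph, then flag the sentence when the paths overlap too little (graphs, concepts and threshold are invented for illustration):

```python
from collections import deque

def path(graph, src, dst):
    """Shortest concept path from src to dst via BFS; graph maps node -> set of nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        p = queue.popleft()
        if p[-1] == dst:
            return p
        for n in graph.get(p[-1], set()) - seen:
            seen.add(n)
            queue.append(p + [n])
    return []

def similarity(p1, p2):
    """Jaccard overlap of the concepts traversed by two paths."""
    if not p1 or not p2:
        return 0.0
    return len(set(p1) & set(p2)) / len(set(p1) | set(p2))

# Two domain graphs connecting the same sentence concepts differently
g1 = {"valve": {"pressure"}, "pressure": {"alarm"}}
g2 = {"valve": {"flow"}, "flow": {"alarm"}}
p1, p2 = path(g1, "valve", "alarm"), path(g2, "valve", "alarm")
ambiguous = similarity(p1, p2) < 0.6   # the threshold is a free parameter
print(ambiguous)  # True
```

Here the two graphs connect "valve" to "alarm" through different intermediate concepts, so the sentence would be flagged as potentially ambiguous.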

NOTE: This seminar is the fourth one of the series of six seminars presented by the winners of the prize "Young researchers ISTI 2013". Alessio Ferrari placed first in the Young researcher category.

Europe's new data protection framework and what it means to IT engineering

22 May 2013, 10:00 - Location: C-29

Speakers:
Simon Davies ( LSE Enterprise, a wholly owned subsidiary of The London School of Economics and Political Science, London, UK)
Referent:
Diego Latella

Europe - along with much of the developing world - is ready to implement a new and robust regime of data protection. The IT industry is carefully considering the challenges and the opportunities that these new legal regulations may create. Data protection provides a pillar of trust necessary to nurture emerging services and products. Even so, concern has been expressed that the new rules could hinder innovation and create barriers to design and engineering. Some companies believe the regulatory bar has now been set too high and that data protection will create substantial problems. In this talk veteran privacy expert Simon Davies discusses whether the proposed rules have struck the right formula.

Simon Davies is the Founder of Privacy International and Associate Director of LSE Enterprise. He has been a Visiting Fellow in Law at both the University of Greenwich and the University of Essex, and spent 13 years at LSE, where he taught the groundbreaking MSc Masters course in "Privacy & Data Protection". He is also co-director of LSE’s Policy Engagement Network. Simon Davies is widely acknowledged as one of the most influential data protection and internet rights experts in the world and is a pioneer of the international privacy arena. His work in consumer rights and technology policy has spanned over 25 years and has directly influenced the development of law and public policy in more than 40 countries. He has advised a wide range of corporate, government and professional bodies, including the United Nations High Commissioner for Refugees. Recently, Simon Davies has been tasked by cross-party rapporteurs of the European Parliament to conduct a wide-ranging external assessment of the European Commission's proposed reforms to the EU data protection framework. He brings a unique interface with global stakeholders, from major international corporations to government and civil society.

Note: On Tuesday 21 May, the day before the seminar, at 3 pm in Aula Faedo, a recording of an episode of CNN's Cyborg City programme will be screened, including, among other things, an interview with Simon Davies.

Utility-Theoretic Ranking for Semi-Automated Text Classification

20 May 2013, 11:00 - Location: C-29

Speakers:
Giacomo Berardi
Referent:
Andrea Esuli

Suppose an organization needs to classify a set D of textual documents, and suppose that D is too large to be classified manually, so that resorting to some form of automated text classification (TC) is the only viable option. Suppose also that the organization has strict accuracy standards, so that the level of effectiveness obtainable via state-of-the-art TC technology is not sufficient. In this case, the most plausible strategy to follow is to classify D by means of an automatic classifier F, and then to have a human editor inspect the results of the automatic classification, correcting misclassifications where appropriate. The human annotator will obviously inspect only a subset D' of D (since it would not otherwise make sense to have an initial automated classification phase). We call this scenario Semi-Automated Text Classification (SATC). An automated system can support this process by ranking the automatically labelled documents in a way that maximizes the expected increase in effectiveness that derives from inspecting D'. An obvious strategy is to rank D so that the documents that F has classified with the lowest confidence are top-ranked. In this work we show that this strategy is suboptimal. We develop a new utility-theoretic ranking method based on the notion of inspection gain, defined as the improvement in classification effectiveness that would derive by inspecting and correcting a given automatically labelled document. We also propose a new effectiveness measure for SATC-oriented ranking methods, based on the expected reduction in classification error brought about by partially inspecting a list generated by a given ranking method.
We report the results of experiments showing that, with respect to the baseline method above, and according to the proposed measure, our ranking method can achieve substantially higher expected reductions in classification error.
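The contrast between the confidence-only baseline and a utility-theoretic ranking can be sketched as follows; the probabilities and error costs are illustrative, not taken from the talk:

```python
def rank_for_inspection(docs):
    """Rank documents by expected inspection gain: the probability the automatic
    label is wrong times the cost of leaving that error uncorrected."""
    return sorted(docs, key=lambda d: (1 - d[1]) * d[2], reverse=True)

docs = [("d1", 0.60, 1.0),   # low confidence, cheap error
        ("d2", 0.95, 1.0),   # high confidence, cheap error
        ("d3", 0.80, 5.0)]   # fairly confident, but costly if wrong
print([d[0] for d in rank_for_inspection(docs)])  # ['d3', 'd1', 'd2']
```

Note that the lowest-confidence baseline would inspect d1 first; weighting the probability of error by its cost moves d3 to the top, which is the intuition behind inspection gain.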

NOTE: This seminar is the third one of the series of six seminars presented by the winners of the prize "Young researchers ISTI 2013". Giacomo Berardi placed first in the PhD student category.

ASSERT Enabled Marketplace

15 May 2013, 11:00 - Location: C-29

Speakers:
Midhat Ali (University of Camerino)
Referent:
Andrea Polini

Platform providers establish marketplace ecosystems to sell end users services and applications running on the platform. Very prominent examples of these marketplaces are the Google Play Store and the Apple AppStore, among others. Application developers can use these marketplaces to offer services and applications to end users.
Advanced Security Service cERTificate (ASSERT) for SOA (ASSERT4SOA) is a framework designed to associate certificates of security properties with applications and services. The framework offers services such as the verification of ASSERTs and the matching of ASSERTs against security properties specified in an appropriate query language. The ASSERT Enabled Marketplace is a prototype built to qualitatively evaluate and illustrate the use of the ASSERT4SOA framework in a marketplace selling business applications that hold certificates for certain security properties. The marketplace is designed around a typical user story: buying business applications on the basis of an organization’s security requirements, then comparing the candidates and choosing the option that best matches those requirements.

The effects of meditation on dream imagery, depression and anxiety in Canadian students

14 May 2013, 16:30 - Location: C-29

Speakers:
Nicolle Miller (Masters level student, Department of Psychology, Trent University, Ontario, Canada)
Referent:
Umberto Barcaro

The current study examined the effects of meditation on waking day depression levels (BDI), waking day trait anxiety levels (BAI-T) and dream imagery in University students. The Storytelling Method was used to conduct dream interpretations, using word associations and story narrative. This seminar will address the results, which are consistent with past research, as well as the implications and applications of dream work and meditation in clinical and applied practice.

The neuropsychology of dreams, memories, and learning

14 May 2013, 16:00 - Location: C-29

Speakers:
Anthony Murkar (Masters student, Department of Psychology, Trent University, Ontario, Canada)
Referent:
Umberto Barcaro

The seminar presentation will discuss how dreams relate to memory. It is not fully understood how dreams are generated by the brain, but recent research has uncovered relationships between sleep, dreams, and memory function (Peigneux et al., 2004; Wamsley et al., 2010). Implications for understanding how brain activity relates to dreaming are discussed.

Dream work with Soldiers and Gamers

14 May 2013, 15:30 - Location: C-29

Speakers:
Allyson Dale (PhD student, Department of Psychology, Trent University, Ontario, Canada)
Referent:
Umberto Barcaro

Dream content and discovery have been studied by comparing soldiers with operational experience in Afghanistan to an age-matched sample of male civilians. Soldiers' dreams contained a higher frequency of aggression, threat, and military imagery, consistent with the continuity hypothesis. Dream interpretation for soldiers led to discovery about specific events from tours overseas and about aggressive behaviours. Further research compared the dream content of students, soldiers, and heavy gamers; the results revealed significant differences in dream imagery for soldiers when compared to both students and gamers.

Compressed Static Functions with Applications

13 May 2013, 11:00 - Location: C-29

Speakers:
Rossano Venturini
Referent:
Andrea Esuli

Given a set of integer keys from a bounded universe along with associated data, the dictionary problem asks us to build a data structure able to answer two queries efficiently: membership and retrieval.
Membership tells whether a given element is in the dictionary or not; retrieval returns the data associated with the searched key.
This seminar will describe a recent result presented at ACM-SIAM SODA 2013 which provides time- and space-optimal solutions for three well-established relaxations of this basic problem: (compressed) static functions, approximate membership and relative membership.
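As a point of reference, the two queries can be sketched with a toy structure. The class below (the name and the binary-search layout are illustrative assumptions, not the SODA 2013 construction, which achieves compressed space and constant-time queries) simply answers membership and retrieval over a static set of integer keys:

```python
from bisect import bisect_left

class StaticDictionary:
    """Toy static dictionary over integer keys from a bounded universe.

    Illustrative only: it answers the two queries of the dictionary
    problem (membership and retrieval) via binary search, without the
    compressed-space guarantees of the structures discussed in the talk.
    """

    def __init__(self, pairs):
        # Sort once at build time; the structure is static afterwards.
        items = sorted(pairs)
        self._keys = [k for k, _ in items]
        self._data = [v for _, v in items]

    def membership(self, key):
        i = bisect_left(self._keys, key)
        return i < len(self._keys) and self._keys[i] == key

    def retrieval(self, key):
        i = bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._data[i]
        return None  # key not in the dictionary

d = StaticDictionary([(42, "a"), (7, "b"), (99, "c")])
print(d.membership(42), d.retrieval(7))  # → True b
```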

NOTE: This seminar is the second in the series of six seminars presented by the winners of the prize "Young Researchers ISTI 2013". Rossano Venturini placed third in the Young Researcher category.

A new probabilistic model for IR

07 May 2013, 11:00 - Location: C-29

Speakers:
Richard Connor (University of Strathclyde, Glasgow, UK)
Referent:
Fausto Rabitti

Over the years a number of competing models have been introduced attempting to solve the central IR problem of ranking documents given textual queries. These models, however, tend to require the inclusion of heuristics and the estimation of collection-specific parameter values in order to be effective. We define a new model that we believe has not yet been explored. In terms of the categorisation of IR models, it is a probabilistic model with no term inter-dependencies, thus allowing calculation from inverted indices. It is based upon a simple core hypothesis, directly calculates a ranking score in terms of probability theory, and does not require the estimation of any parameters. We report initial tests against a number of standard baseline IR models, and show that the new model is at least credible, often outperforming the Language Model with Dirichlet smoothing.

Our contributions are twofold: first, we believe the new model is worthy of further investigation and that its performance could be improved significantly; and secondly, we believe the observation that the Jensen-Shannon metric can be evaluated over inverted indices in a sparse space is also more generally applicable.
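The sparse-evaluation observation can be illustrated with a small sketch (the function name and the dict-based representation are illustrative, not the authors' implementation): the Jensen-Shannon divergence between two term distributions only receives contributions from terms that occur in at least one of them, which is exactly the access pattern an inverted index supports.

```python
from math import log2

def js_divergence(p, q):
    """Jensen-Shannon divergence between two sparse distributions,
    given as {term: probability} dicts with zero entries omitted.
    Only terms with nonzero mass in p or q contribute, which is what
    makes evaluation over inverted-index-style sparse data feasible."""
    js = 0.0
    for term in set(p) | set(q):
        pi, qi = p.get(term, 0.0), q.get(term, 0.0)
        m = (pi + qi) / 2.0  # the mixture distribution
        if pi > 0:
            js += 0.5 * pi * log2(pi / m)
        if qi > 0:
            js += 0.5 * qi * log2(qi / m)
    return js  # 0 for identical distributions, 1 for disjoint ones
```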

DEMON: Uncovering Overlapping Communities with a Local-First Approach

23 April 2013, 15:00 - Location: C-29

Speakers:
Giulio Rossetti
Referent:
Giulio Rossetti

Community discovery in complex networks is an interesting problem with a number of applications, especially in the knowledge extraction task in social and information networks. However, many large networks often lack a particular community organization at a global level. In these cases, traditional graph partitioning algorithms fail to let the latent knowledge embedded in modular structure emerge, because they impose a top-down global view of a network. DEMON is a simple local-first approach to community discovery, able to unveil the modular organization of real complex networks. This is achieved by democratically letting each node vote for the communities it sees surrounding it in its limited view of the global system, i.e. its ego neighborhood, using a label propagation algorithm; finally, the local communities are merged into a global collection.
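A minimal sketch of the local-first idea follows. This is a simplified reading of DEMON, not the authors' code: the round count, the merge rule and the containment threshold are illustrative choices.

```python
from collections import Counter, defaultdict

def label_propagation(adj, nodes):
    """A few rounds of label propagation restricted to `nodes`."""
    labels = {v: v for v in nodes}
    for _ in range(10):
        changed = False
        for v in nodes:
            neigh = [labels[u] for u in adj.get(v, ()) if u in labels]
            if not neigh:
                continue
            best = Counter(neigh).most_common(1)[0][0]
            if best != labels[v]:
                labels[v], changed = best, True
        if not changed:
            break
    groups = defaultdict(set)
    for v, l in labels.items():
        groups[l].add(v)
    return list(groups.values())

def demon_like(adj, merge_threshold=1.0):
    """Local-first community discovery in the spirit of DEMON:
    each node's ego neighbourhood (with the ego removed) votes via
    label propagation; overlapping local communities are merged."""
    communities = []
    for ego in adj:
        ego_net = set(adj[ego])           # the ego's limited local view
        for local in label_propagation(adj, ego_net):
            local = local | {ego}         # the ego re-joins its communities
            for c in communities:
                # merge if `local` is (almost) contained in `c`
                if len(local & c) >= merge_threshold * len(local):
                    c |= local
                    break
            else:
                communities.append(set(local))
    return communities
```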

NOTE: This seminar is the first in the series of six seminars presented by the winners of the prize "Young Researchers ISTI 2013". Giulio Rossetti placed second in the PhD student category.

Designing conceptual database models for innovative evaluation of quality

22 April 2013, 12:00 - Location: C-29

Speakers:
Elvira Locuratolo
Referent:
Elvira Locuratolo

The approach for introducing innovative evaluations of quality is based on two different methods: the former is a vertical mapping which starts from the formal specification of a database application and arrives at the conceptual model of the ASSO methodology; the latter is a horizontal mapping which starts from a graph of conceptual classes and arrives at the resulting model of the previous approach. The innovative evaluation is based on the following points:

  • Achievement of the quality desiderata.
  • Numerical evaluation of the consistency costs.
  • Conceptual evaluation of the resulting model.

The numerical evaluation of the consistency costs expresses the cost of what has been explicitly specified/proved, in terms of variable and constant cardinality. The conceptual evaluation of the model is a set of hidden classes which are implicitly specified within graphs of conceptual classes; this evaluation expresses the saving obtained for what has been implicitly specified/proved. The quantitative evaluation of the consistency costs can be determined starting from the conceptual evaluation of the model; the converse is not possible.

Analysis and Test of the Effects of Single Event Upsets in the Configuration Memory of SRAM-based FPGAs

05 April 2013, 10:00 - Location: C-29

Speakers:
Luca Maria Cassano
Referent:
Luca Maria Cassano

SRAM-based FPGAs are increasingly relevant in a growing number of safety-critical application fields, ranging from automotive to aerospace. These application fields are characterized by a harsh radiation environment that can cause the occurrence of Single Event Upsets (SEUs) in digital devices. Designing safety-critical applications requires accurate methodologies to evaluate the system's sensitivity to SEUs as early as possible during the design process. Moreover, it is necessary to detect the occurrence of SEUs during the system's lifetime. In this talk we present a set of software tools that can be used by designers of SRAM-based FPGA safety-critical applications to assess the system's sensitivity to SEUs and to generate test patterns for in-service testing. In particular, three tools have been designed and developed: (i) ASSESS: Accurate Simulator of SEuS affecting the configuration memory of SRAM-based FPGAs; (ii) UA2TPG: Untestability Analyzer and Automatic Test Pattern Generator for SEUs Affecting the Configuration Memory of SRAM-based FPGAs; and (iii) GABES: Genetic Algorithm Based Environment for SEU Testing in SRAM-FPGAs.

Simulation and Hydroinformatics Systems for Water Resources Management (SID&GRID)

15 March 2013, 14:30 - Location: C-29

Speakers:
Claudio Schifani, Rudy Rossetto (Scuola Superiore Sant'Anna)
Referent:
Paolo Mogorovich

The management of water resources, currently subject to growing anthropic pressure and to recurring crises linked to climate change, is one of the environmental problems that deserves the greatest attention. Numerous recommendations on the need for a new approach to water resource management methodologies and procedures have recently been issued, including by the European Environment Agency. The absence of shared tools enabling integrated management of water resources leads to a mostly qualitative approach that often generates conflicts among the various users, conflicts which are difficult for the competent authorities to resolve.

Moreover, the traditional separation, in the quantitative analysis of water budgets, between surface water and groundwater often leads to contested assessments of water resource availability. It is thus impossible to evaluate the impacts of land-use changes and climate change on the water cycle, and their consequences for socio-economic and natural systems. This is the setting of the research project SID&GRID (Simulazione e sistemi IDroinformatici per la Gestione delle Risorse Idriche), whose main objective is to design and develop a framework integrating the GIS world with hydrological and hydrogeological modelling, through a DSS (Decision Support System) for the shared planning and management of water uses by public bodies and the companies appointed to this role.

Abstract Modelling and On-the-fly model checking

13 March 2013, 15:00 - Location: C-29

Speakers:
Franco Mazzanti
Referent:
Franco Mazzanti

The seminar will present the underlying basic ideas and the current status of the UMC/FMC/CMC/VMC modelling and verification framework developed at ISTI. A railway scenario will be used to illustrate the features of UMC in more detail.

The use of Petri Nets for modelling railway traffic

20 February 2013, 14:30 - Location: C-29

Speakers:
Stefano Ricci (Università degli Studi di Roma "La Sapienza")

An overview of Petri Net theory: algebra and graphical formalism of Petri Nets, analytical matrix formulation, fundamental situations in a Petri Net, types of Petri Nets, extensions of Petri Net theory and standardization issues. Petri Nets usable in railway applications: extended Petri Nets, extended properties of tokens, extended properties of transitions, extended properties of places, Petri Nets and hierarchical models, coloured Petri Nets. Architecture of models for analysing railway operations: objectives and limits of simulation models, object-oriented structure, nets extended through programming, coloured nets and the simulation window, limits on representation and simulation, represented elements and the constituent classes of the models, model operation. Example of a model for the Rome-Formia line: description of the line for the purposes of its representation in the model, overview of command and control systems, representation in the model, simulation of the theoretical timetable at peak hours, simulations under perturbed conditions at peak hours, effect of train kinematics on the simulations. Other possible applications: analysis of level crossings, evaluation of traffic capacity at nodes, conflict resolution at nodes. Conclusions: objectives achieved, gaps to be filled, possible applications, future developments.

Extended Visual Tracking for Video Analytics under the Bayesian Probabilistic Graphical Framework

15 February 2013, 11:30 - Location: C-29

Speakers:
Mauricio Soto Alvarez (Technical Research Centre of Finland)
Referent:
Fabrizio Falchi

Visual tracking represents the basic processing step for most Video Analytics applications, where the aim is to automatically understand the actions occurring in a monitored scene. Consequently, the performance of these applications depends significantly on the accuracy and robustness of the tracking algorithm. Bayesian state estimation and Probabilistic Graphical Models (PGMs) have proved to be very powerful and appropriate mathematical tools to efficiently solve the inference problem of motion estimation by combining object dynamics and observations. In this seminar, an efficient algorithm for Extended Visual Object Tracking (EVOT) is shown. The word extended refers to the fact that the tracker exploits multiple measurements yielded by each target. In particular, the presented technique integrates local interest points with global color in order to provide a rich representation of the target and a robust tracker developed under the Bayesian Probabilistic Graphical Model framework.

Stochastic modelling of Signalling Pathways

11 February 2013, 15:00 - Location: C-29

Speakers:
Davide Chiarugi
Referent:
Ercan Kuruoglu

Signalling pathways are sets of interconnected biochemical reactions that, in living organisms, are dedicated to the transduction of signals coming from the environment. They represent fundamental structures for all living entities, ranging from prokaryotes to eukaryotic cells. Due to their pivotal role, signalling pathways have been intensively studied in recent decades. Particular attention has been devoted to eukaryotic pathways because of their role in cancer etiology and pathogenesis. In this context computational modelling is becoming increasingly important. Indeed, "in silico" models can provide useful insights into the dynamics of signalling pathways, complementing and organising the information obtained "in vitro".

Recently there has been significant interest in stochastic models of biochemical systems, mainly because of experimental evidence that stochasticity at the molecular level plays an important role in determining the overall behaviour of living organisms. In this talk we will report on the methods and algorithms that we are using, or plan to use, for developing stochastic models of signalling pathways. In particular, we will describe some of the results obtained, as well as problems and possible solutions emerging in this challenging research topic.

BAT-Framework: a framework to compare text-annotators

01 February 2013, 11:00 - Location: C-29

Speakers:
Marco Cornolti (Dipartimento di Informatica, Università di Pisa )

We have implemented a benchmarking framework for fair and exhaustive comparison of entity-annotation systems. The framework is based upon the definition of a set of problems related to the entity-annotation task (which formalize the primary issues arising in this task), a set of measures to evaluate system performance, and the inclusion of all publicly available datasets, containing texts of various types: news, tweets and Web pages. Our framework is easily extensible with novel entity annotators, datasets and evaluation measures for comparing systems, and it will be released to the public. The intent of this framework is to set the ground for further developments in this challenging task, being a basis on which new annotators can be developed.

Using Collective Intelligence to Detect Natural Language Pragmatic Ambiguities

19 December 2012, 10:30 - Location: C-29

Speakers:
Alessio Ferrari
Referent:
Alessio Ferrari

We present a novel approach for pragmatic ambiguity detection in natural language (NL) requirements specifications defined for a specific application domain. Starting from a requirements specification, we use a Web-search engine to retrieve a set of documents focused on the same domain of the specification. From these domain-related documents, we extract different knowledge graphs, which are employed to analyse each requirement sentence looking for potential ambiguities. To this end, an algorithm has been developed that takes the concepts expressed in the sentence and searches for corresponding "concept paths" within each graph. The paths resulting from the traversal of each graph are compared and, if their overall similarity score is lower than a given threshold, the requirements specification sentence is considered ambiguous from the pragmatic point of view. A proof of concept is given throughout the presentation to illustrate the soundness of the proposed strategy.
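The path-comparison step can be sketched as follows. All names, the BFS shortest-path choice and the Jaccard similarity are illustrative assumptions; the actual algorithm may extract and compare concept paths differently.

```python
from collections import deque

def shortest_path(graph, src, dst):
    """BFS shortest 'concept path' in a knowledge graph given as
    {node: set(neighbours)}; returns None if dst is unreachable."""
    prev, frontier = {src: None}, deque([src])
    while frontier:
        u = frontier.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in graph.get(u, ()):
            if v not in prev:
                prev[v] = u
                frontier.append(v)
    return None

def path_similarity(p1, p2):
    """Jaccard overlap of the concepts traversed by two paths."""
    s1, s2 = set(p1 or ()), set(p2 or ())
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0

def looks_ambiguous(graphs, concept_a, concept_b, threshold=0.5):
    """Flag a sentence pairing two concepts as potentially ambiguous
    when the concept paths found in different domain graphs disagree,
    i.e. when some pairwise similarity falls below the threshold."""
    paths = [shortest_path(g, concept_a, concept_b) for g in graphs]
    sims = [path_similarity(paths[i], paths[j])
            for i in range(len(paths)) for j in range(i + 1, len(paths))]
    return bool(sims) and min(sims) < threshold
```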

Run-time Failure Forecasting for Dynamically Evolving Systems

05 December 2012, 15:00 - Location: C-40

Speakers:
Andrea Polini
Referent:
Andrea Polini

no abstract.

Analysing Robot Swarm Decision-making with Bio-PEPA

21 November 2012, 10:30 - Location: C-29

Speakers:
Mieke Massink
Referent:
Mieke Massink

A formal approach to analyse swarm robotics systems is presented based on Bio-PEPA. Bio-PEPA is a process algebraic language originally developed to analyse biochemical systems. Its main advantage is that it allows different kinds of analyses of a swarm robotics system starting from a single description. In general, to carry out different kinds of analysis, it is necessary to develop multiple models, raising issues of mutual consistency. With Bio-PEPA, instead, it is possible to perform stochastic simulation, fluid flow analysis and statistical model checking based on the same system specification. This reduces the complexity of the analysis and ensures consistency between analysis results. Bio-PEPA is well suited for swarm robotics systems, because it lends itself well to modelling distributed scalable systems and their space-time characteristics. We demonstrate the validity of Bio-PEPA by modelling collective decision-making as an emergent phenomenon in a swarm robotics system and we evaluate the result of different analyses.

RSS-based indoor localization

20 November 2012, 14:30 - Location: C-29

Referent:
Francesco Potortì

The seminar will be an overview of the past and present research activities of the Wnlab group in the field of indoor localization and human activity recognition. Both activities exploit the received signal strength (RSS) measured by small, inexpensive sensors equipped with IEEE 802.15.4 (ZigBee) radios.

Future activities are currently concentrating on the study of passive systems, which do not require any cooperation from the person being localized.

Product Line Engineering Applied to CBTC Systems Development

07 November 2012, 10:30 - Location: C-40

Speakers:
Oronzo Spagnolo
Referent:
Oronzo Spagnolo

Communications-based Train Control (CBTC) systems are the new frontier of automated train control and operation. Currently developed CBTC platforms are in fact very complex systems including several functionalities, and every installed system, developed by a different company, varies in their extent, scope and number. International standards have emerged, but they remain at a rather abstract level, mostly settling terminology. The paper presented in the seminar reports intermediate results of an effort aimed at defining a global model of CBTC by mixing semi-formal modelling and product line engineering. The effort is based on an in-depth market analysis, not limited to particular aspects but considering the whole picture as far as possible. The adopted methodology is discussed.

The Elegant Search Universe: superstrings and hidden dimensions

17 October 2012, 11:30 - Location: C-29

Speakers:
Domenico Saccà (Dipartimento DEIS, Università della Calabria)
Referent:
Fosca Giannotti

Real-time believable stereo and virtual view synthesis engine for autostereoscopic display

17 October 2012, 09:30 - Location: I-53

Speakers:
Jiang Wang (Ph.D. from the Chinese Academy of Sciences, 2006; currently ERCIM fellow at NTNU, Norway)
Referent:
Roberto Scopigno

An FPGA- and VLSI-oriented stereo and virtual view synthesis engine is designed for autostereoscopic display in distributed multimedia playback applications. To acquire believable rendering not only for stereo but also for distant virtual view synthesis, depth-image-based rendering (DIBR) is applied to the referenced depth and texture images. The homogeneous depth continuity is statistically analysed and the corresponding texture is compensated for hole filling. To meet real-time constraints, computation- and memory-access-intensive modules are hardware-accelerated, and processing operates on a raster-scan pixel line stream rather than a frame-buffer bulk or macro-block line. Simulation and implementation show that, based on the correlation of depth in the referenced images, believable rendering is obtained in real time. Challenges remain for rendering complicated scenarios.

The Renaissance of Logic-based Languages and Systems--Data Streams

16 October 2012, 16:30 - Location: C-29

Speakers:
Carlo Zaniolo (UCLA Computer Science Department, Los Angeles)
Referent:
Fosca Giannotti

Social computing: exploiting the human factor in social media data management.

12 October 2012, 11:00 - Location: C-29

Speakers:
Matteo Magnani (Data Intensive Systems group, Dept. of Computer Science, Aarhus University, Denmark)
Referent:
Fosca Giannotti

One of the reasons behind the tremendous success of Social Network Analysis (SNA) methods in various research disciplines is a very general and simple graph model that enables the representation and study of extremely heterogeneous scenarios, ranging from workplace dynamics to the spreading of diseases or hyper-text documents in the World Wide Web. While this generality still constitutes a great value, it has recently become apparent that to model specific contexts and to enable accurate analyses it may be important to enrich simple network models with additional modeling constructs representing the complex nature of human relationships. The talk will be organized in three parts. First, I will briefly illustrate some case studies showing how cultural parameters may influence social media data with respect to, e.g., information propagation. Then, I will show how these aspects may influence information retrieval activities, leading to the definition of "conversation retrieval" queries. Finally, I will conclude this presentation of human-influenced data modeling and querying discussing the hypothesis that a single concept of social connection or social network is not sufficient to satisfy the sociability requirements of human beings. Decades before the advent of Social Network Sites this had already been theorized by Goffman and other researchers in social sciences, for which individuals (or actors) perform on multiple stages, creating a sort of sociologically fragmented personality whose different components relate to different audiences (and thus networks). This has recently led to the definition of multi-layer (or multi-modal) network models for the analysis of social media data.

Biography of the speaker: Matteo Magnani graduated in Computer Science at the University of Bologna in 2002 (110/110 with mention). He studied at the University of Marne la Vallée (undergraduate level) and the Imperial College London (postgraduate research level). In 2006 he obtained a PhD in Computer Science (Bologna) where in 2011 he also graduated in Violin (110/110 with mention). He has received a Rotary Prize for the best student of the Science Faculty (UniBO), a Best Paper Award at ASONAM 2011, a Funniest Presentation award at SBP 2010, the French qualification for Maître de Conférence positions, the Italian "idoneità" for CNR researcher positions and his mother is very proud of him (or at least this is what she officially says). Until May 2012 he was a researcher (RTD) at the Dept. of Computer Science, University of Bologna and he currently holds a position at research assistant professor level at the Data Intensive Systems group, Dept. of Computer Science, Aarhus University, Denmark.

His main research interests span Database and Information Management systems, specifically uncertain information management and multidimensional database queries, and Social Computing. He has written around 1.5 Kg of papers on these topics (when printed on heavy A4 size sheets). He is currently the joint coordinator of the #sigsna research group on social network analysis, and has successfully attracted funding from Working Capital (Telecom Italia), PRIN and FIRB (MIUR - Italian Ministry for education, University and Research) schemes.

Selecting stars: fast and accurate computation of representative skylines for finding interesting records under multiple criteria.

11 October 2012, 11:00 - Location: C-29

Speakers:
Matteo Magnani (Data Intensive Systems group, Dept. of Computer Science, Aarhus University, Denmark )
Referent:
Fosca Giannotti

Which basketball player has been the Most Valuable Player in the last NBA season? Which materials should we use to build a product in a short time, that is cheap but is still appreciated by our customers? Which hotels are cheap but also not too far from the beach? These are all examples of queries involving multiple criteria, e.g., (1) cost and (2) distance to the beach. Only if the importance of each criterion is known (how far am I willing to walk for each saved euro?) can we compute a ranking of the best records (hotels). Otherwise, the "skyline" (aka Pareto front) consists of all the best records for all possible weightings of the criteria, thereby giving an overview of all the best options in the data.

In this talk I will introduce the skyline database operator for the selection of records under multiple criteria. Then I will discuss two important limitations of this operator: skyline queries can produce a very large result and require high computation time. I will thus present the main status of the research on the fast selection of "stars", i.e., a small set of records providing a good representation of the whole skyline.


Using Collective Intelligence to Detect Pragmatic Ambiguities

14 September 2012, 11:30 - Location: C-40

Speakers:
Alessio Ferrari
Referent:
Alessio Ferrari

We present a novel approach for pragmatic ambiguity detection in natural language (NL) requirements specifications defined for a specific application domain. Starting from a requirements specification, we use a Web-search engine to retrieve a set of documents focused on the same domain of the specification. From these domain-related documents, we extract different knowledge graphs, which are employed to analyse each requirement sentence looking for potential ambiguities. To this end, an algorithm has been developed that takes the concepts expressed in the sentence and searches for corresponding "concept paths" within each graph. The paths resulting from the traversal of each graph are compared and, if their overall similarity score is lower than a given threshold, the requirements specification sentence is considered ambiguous from the pragmatic point of view. A proof of concept is given throughout the presentation to illustrate the soundness of the proposed strategy.

High-quality parametrization and quadrangulation

10 September 2012, 14:00 - Location: C-29

Speakers:
Denis Zorin (New York University )
Referent:
Nico Pietroni

In this talk I will review recent progress in global parametrization and quadrangulation algorithms, focusing on our recent work emphasizing optimization of parametrization quality. We have pursued several directions, the two main ones being the choice of the topological structure of the global parametrization to minimize distortion, and the local control of conformal and, more generally, isometric distortion. Global parametrization of surfaces requires singularities (cones) to keep distortion minimal. In general, the problem of choosing cone locations and associated cone angles minimizing distortion is a special case of a general nonlinear mixed-integer problem which does not admit efficient computational solutions. We have designed an approximate method based on the idea of evolving the metric of the surface, starting with the original metric, so that a growing fraction of the area of the surface is constrained to have zero Gaussian curvature; the curvature gradually becomes concentrated at a small set of vertices, which become cones. We demonstrate that the resulting parametrizations have significantly lower metric distortion compared to previously proposed methods. The second direction aims to design efficient algorithms for controlling local distortion, specifically the deviation of the map from conformality. Our approach is based on the theory of extremal quasiconformal maps, i.e. maps that, for a given set of constraints (e.g. fixed boundaries or points in the interior), minimize the worst-case distortion. Remarkably, such maps are unique in many relevant cases, and have a number of properties that allow us to compute them efficiently.

Security Testing of Web Applications from a Secure Model

06 July 2012, 11:00 - Location: C-29

Speakers:
Matthias Buechler (Technische Universität München Institute of Informatics)
Referent:
Eda Marchetti

Web applications are a major target of attackers. The increasing complexity of such applications and the subtlety of today’s attacks make it very hard for developers to manually secure their web applications. Penetration testing is considered an art; the success of a penetration tester in detecting vulnerabilities mainly depends on his skills. Recently, model-checkers dedicated to security analysis have proved their ability to identify complex attacks on web-based security protocols. However, bridging the gap between an abstract attack trace output by a model-checker and a penetration test on the real web application is still an open issue. We present here a methodology for testing web applications starting from a secure model. First, we mutate the model to introduce specific vulnerabilities present in web applications. Then, a model-checker outputs attack traces that exploit those vulnerabilities. Next, the attack traces are translated into concrete test cases by using a 2-step mapping. Finally, the tests are executed on the real system using an automatic procedure that may request the help of a test expert from time to time. A prototype has been implemented and evaluated on WebGoat, an insecure web application maintained by OWASP. It successfully reproduced Role-Based Access Control (RBAC) and Cross-Site Scripting (XSS) attacks.

Chisel: A Cultural Heritage Information System based on Extended Layers

13 June 2012, 10:00 - Location: I-53

Speakers:
Francisco Soler, Maria Victoria Luzón (ETS Ingenierías Informática y de Telecomunicación, Univ. of Granada, Spain)
Referent:
Roberto Scopigno

Dealing with cultural heritage sites or artifacts usually involves a large volume of heterogeneous information to be managed, accessed and analyzed by specialists. To this end, we present Chisel: a cultural heritage information system based on information layers. To organize information, Chisel makes use of traditional GIS technology, but applies it to 3D meshes of cultural heritage artifacts instead of 2D maps.

Shape Analysis with Correspondences

08 June 2012, 10:00 - Location: C-29

Speakers:
Michael Wand (MPI - Saarbrücken)
Referent:
Nico Pietroni

The talk will discuss recent work that has been done within the "Statistical Geometry Processing" group at MPI Informatics and Saarland University. We are working on "shape understanding" algorithms, i.e., algorithms that aim at discovering structure in geometric data sets and utilize it for analysis and modeling. Humans understand shapes already at an intuitive level. However, finding a formal model that explains "structure" in shapes is a major scientific challenge. In addition to being able to capture meaningful aspects, the models also need to be simple enough to permit efficient and robust algorithms for discovering such structure in data.

The talk will focus on correspondence analysis as one low-level approach to this problem: First, we establish correspondences between shapes, i.e., detect pieces of geometry that are essentially similar and relate these to each other. I will discuss various techniques to efficiently and robustly compute correspondences between shapes, allowing for different types of invariance. Second, we can go up one level of abstraction and look at the structure of the obtained correspondences: Assuming we have discovered multiple, potentially overlapping pairs of regions of equivalence within a piece of geometry, what does this tell us about the shape? This question is addressed by "inverse procedural modeling" techniques that characterize families of shapes that are similar to an example piece. We use correspondence information to derive shape docking rules and, alternatively, algebraic invariants to describe such shape spaces in a constructive manner.

PolarityRank: Finding an equilibrium between followers and contraries in a network

06 June 2012, 11:00 - Location: C-29

Speakers:
Fermín Cruz Mata (Departamento de Lenguajes y Sistemas Informáticos - Universidad de Sevilla (visiting ISTI-CNR throughout June 2012))

In this talk I will present a random-walk ranking algorithm named PolarityRank. The algorithm is inspired by PageRank and generalizes it to deal with graphs having not only positively but also negatively weighted arcs. Besides the definition of the algorithm, I will show two problems we have addressed with it: the automatic expansion of opinion lexicons, and trust and reputation computation in social networks. I intend to convey which kinds of problems are suitable to be solved using PolarityRank; this can be summarized in three prerequisites: the problem has to be representable as a graph, where nodes are entities and arcs represent relations between them; some a-priori information must be available for a subset of the entities, as well as information about the "similarities" and "differences" between them (in terms of what that information represents); and the solution of the problem can be seen as a propagation of the a-priori information, yielding a set of induced values for the remaining entities.
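
As a rough illustration of the kind of propagation just described, the sketch below iterates PageRank-style scores over a signed graph. It is not the published PolarityRank algorithm: the damping factor, the normalisation and the rule that negative arcs flip polarity are my assumptions.

```python
def polarity_rank(nodes, arcs, e_pos, e_neg, d=0.85, iters=100):
    """nodes: node ids; arcs: (src, dst, weight) with weight > 0 or < 0;
    e_pos, e_neg: a-priori positive/negative scores for each node."""
    out = {n: 0.0 for n in nodes}           # total outgoing weight mass
    for s, t, w in arcs:
        out[s] += abs(w)
    pos, neg = dict(e_pos), dict(e_neg)
    for _ in range(iters):
        new_pos = {n: (1 - d) * e_pos[n] for n in nodes}
        new_neg = {n: (1 - d) * e_neg[n] for n in nodes}
        for s, t, w in arcs:
            share = abs(w) / out[s]
            if w > 0:   # positive arcs propagate the same polarity
                new_pos[t] += d * share * pos[s]
                new_neg[t] += d * share * neg[s]
            else:       # negative arcs invert polarity
                new_pos[t] += d * share * neg[s]
                new_neg[t] += d * share * pos[s]
        pos, neg = new_pos, new_neg
    return pos, neg
```

On a toy graph with a positive arc a→b and a negative arc a→c, seeding only a with a positive a-priori score leaves b with an induced positive score and c with an induced negative one.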

The inductive print: An example from physical modeling for digital synthesis

23 May 2012, 11:00 - Location: C-29

Speakers:
John Granzow (Center for Computer Research in Music and Acoustics, Stanford University)
Referent:
Matteo Dellepiane

The dynamics of musical instruments are simulated to approximate their waveforms for digital synthesis. In addition to this largely deductive process, the development of a model also includes an inductive phase of listening, in which the sound of the existing instrument and the digital sound from the model are compared. When the desired outcome is known (e.g., a clarinet), the ear can fine-tune the model along perceptual dimensions such as timbre. Yet one of the primary advantages of physical modeling is the ability to generate novel sounds from virtual geometries and dynamics that do not necessarily correspond to existing instruments or performance traditions. 3D printing is discussed as a means to quickly manufacture these novel acoustic instruments, with limitations that are rapidly receding; we can now quickly hear both the digital synthesis derived from the physics and the physical object printed from those very models.

Analyzing and Simulating Fracture Patterns in Theran Wall Paintings

30 March 2012, 14:30 - Location: C-29

Speakers:
Hijung Valentina Shin (Massachusetts Institute of Technology)
Referent:
Roberto Scopigno

We analyze the fracture patterns observed in wall paintings excavated at Akrotiri, a Bronze Age Aegean settlement destroyed by a volcano on the Greek island of Thera around 1630 BC. We use interactive programs to trace detailed fragment boundaries in images of manually reconstructed wall paintings. Then, we use geometric analysis algorithms to study the shapes and contacts of those fragment boundaries, producing statistical distributions of lengths, angles, areas, and adjacencies found in assembled paintings. The result is a statistical model that suggests a hierarchical fracture pattern, where fragments break into two pieces recursively along cracks nearly orthogonal to previous ones. This model is tested by comparing with simulation results of a hierarchical fracture process. The model could be useful for predicting fracture patterns of other wall paintings and/or for guiding future computer-assisted reconstruction algorithms.
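
The hierarchical pattern the statistics suggest (fragments splitting in two recursively, each crack roughly orthogonal to the previous one) can be mimicked with a toy simulation. Axis-aligned rectangles, strictly alternating crack directions and the split-position range are my simplifications, not the simulator used in the study.

```python
import random

def hierarchical_fracture(rect, depth, rng, horizontal=True):
    """Recursively split an axis-aligned rectangle (x0, y0, x1, y1) in two,
    alternating the crack direction so that each crack is orthogonal to the
    previous one. Returns the list of leaf fragments."""
    if depth == 0:
        return [rect]
    x0, y0, x1, y1 = rect
    if horizontal:   # crack parallel to the x-axis
        y = rng.uniform(y0 + 0.3 * (y1 - y0), y0 + 0.7 * (y1 - y0))
        halves = [(x0, y0, x1, y), (x0, y, x1, y1)]
    else:            # crack parallel to the y-axis
        x = rng.uniform(x0 + 0.3 * (x1 - x0), x0 + 0.7 * (x1 - x0))
        halves = [(x0, y0, x, y1), (x, y0, x1, y1)]
    fragments = []
    for half in halves:
        fragments.extend(hierarchical_fracture(half, depth - 1, rng, not horizontal))
    return fragments

rng = random.Random(0)
fragments = hierarchical_fracture((0.0, 0.0, 1.0, 1.0), 4, rng)  # 2**4 fragments
```

Each level doubles the number of fragments, so depth 4 yields 16 pieces whose areas partition the original rectangle.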

Lightweight Approaches to Highly Adaptive Web Interfaces

02 March 2012, 11:00 - Location: C-29

Speakers:
Michael Nebeling (ETH Zurich)
Referent:
Fabio Paternò

The growing range and diversity of new devices makes it increasingly difficult for web developers to cater for the large variety of viewing and interaction contexts. While there is a great body of work on desktop-to-mobile adaptation, my research specifically looks at the adaptation to large, high-resolution display settings. Some of the contributions include a set of adaptivity metrics and an adaptive layout template to support more flexible web designs. The central idea behind the proposed solutions is to leverage and extend, rather than replace, existing web languages and to build, in particular, on new technologies such as HTML5 and CSS3. To further support developers, I have also investigated a crowdsourcing approach and designed different methods of complementing developer-specified adaptations with end-user contributions. In the talk, I would first like to give a general overview of my work and then present topics of potential interest in more detail.

The SKOSware V1.0 Product

24 February 2012, 14:00 - Location: C-40

Speakers:
Alessandra Donnini (Etcware S.r.l.)
Referent:
Roberto Trasarti

SKOSware is a system for the shared management and publication of SKOS thesauri, developed by EtcWare s.r.l. SKOS (Simple Knowledge Organization System) is a formalism for representing thesauri, in which terms are identified by URIs.

SKOSware offers an administrative interface for working on thesauri and a programmatic interface, with REST and Java APIs, for integrating thesaurus management into CMSs or knowledge-management applications. The seminar will present an application of SKOSware for the cataloguing and semantic enrichment of the content of the baseculturale.it portal. The product was developed as the outcome of a research project funded by the Lazio region (POR FESR 2008-2013 Axis I call).

EtcWare is a small, growing innovative enterprise founded by professionals with broad and diverse experience in the ICT sector.

Automated quality assessment of volunteered geographic information: engaging citizens in the next generation Digital Earth. Case study on forest fire.

17 February 2012, 10:00 - Location: C-29

Speakers:
Laura Spinsanti (Joint Research Centre, Ispra)
Referent:
Chiara Renso

This seminar presents a conceptual workflow for assessing the quality of geographic user-generated content (GUGC) or volunteered geographic information (VGI) in the context of crisis events like forest fires. We briefly recapitulate the main challenges in using VGI in the context of crisis management and our proposed approach to tackling them, and we present the proof-of-concept prototype CONAVI (CONtextual Analysis of Volunteered Information), which implements the core functionality: retrieval, geo-coding, content analysis, geographic context analysis, and spatio-temporal clustering of VGI from two sources, the micro-blogging service Twitter and the online photo-sharing service Flickr, in the Southern European context.

Cross-lingual Word Clusters for Direct Transfer of Linguistic Structure

07 February 2012, 11:00 - Location: C-29

Speakers:
Oscar Täckström (SICS / Uppsala University, Sweden)

The ability to predict the linguistic structure of sentences or documents is central to the study of natural language processing. While annotated resources for parsing and several other tasks are available in a number of languages, we cannot expect to have access to labeled resources for all tasks in all languages. In this talk I will describe how cross-lingual word clusters can be used as a way to sidestep this problem, focusing on the important tasks of syntactic dependency parsing and named-entity recognition (NER). First, I will show how monolingual word clusters can be used to improve parsing and NER for a range of different languages, across families. I will then describe an algorithm for inducing cross-lingual word clusters using large corpora and word alignments and how these clusters can significantly improve the accuracy of cross-lingual structure prediction. Specifically, I will show how an English dependency parser and NER system can be transferred to a range of other languages, without any need for target language training data.

The acquisition and viewing limits of HDR images as a key to understanding the entire high-dynamic-range imaging pipeline

05 December 2011, 15:15 - Location: C-29

Speakers:
Alessandro Rizzi (Dipartimento di Informatica e Comunicazione, Università degli Studi di Milano)
Referent:
Roberto Scopigno

The transition from traditional to high-dynamic-range imaging does not simply consist of increasing the number of bits per pixel. In moving towards images able to acquire, process and reproduce detail across a wide luminance range, one runs into a series of very severe optical limitations typical of every imaging system, including our own visual system. The seminar presents the dynamic-range limits we have measured, both of the human visual system and of various image acquisition systems, and, starting from these, proposes and discusses an alternative approach to the management of HDR images.

The brain side of Information Technology: an exploration with low cost EEG devices

05 December 2011, 14:30 - Location: C-29

Speakers:
Daniele Marini (Dipartimento di Informatica e Comunicazione, Università degli Studi di Milano)
Referent:
Roberto Scopigno

An emerging problem in IT is how human communication, mediated by computer systems, evolves in new and unexpected ways. The diffusion of social networks, virtual reality, videogames, e-learning and mobile communication is changing the way people, teenagers and young people in particular, interact and exchange experiences and emotions. Understanding and exploring the effects of the new communication paradigms on human beings is a challenge that can be tackled by studying it from multiple viewpoints, from perception up to higher cognitive and psychological levels, and by resorting to advanced signal-processing methods. The availability of low-cost electroencephalography (EEG) recording devices allows us to perform a large number of experiments, providing the possibility of sound data interpretation and of setting up both simulated and real environmental experiments. We are less interested in the study of non-elementary perceptual phenomena than in studying interpersonal or human-computer communication processes by observing brain behaviour through EEG signal analysis. As a matter of fact, visual and auditory perception still remain the primary means of such communication processes, but higher-level brain phenomena are possibly more appealing. EEG experiments have already demonstrated that not only can a visual, motor or auditory stimulus be detected, but so can the mere imagination of the stimulus itself, thus opening the road to brain-computer interaction. In this talk we will present some preliminary results from experiments on music, colour perception and mono versus stereo video viewing.

Regression Testing of Web Service: A Systematic Mapping Study

01 December 2011, 11:00 - Location: C-29

Speakers:
Bixin Li (Southeast University, Nanjing, China)
Referent:
Antonia Bertolino

Web Service is a mature and widely-used implementation technique under the new paradigm of Service-Oriented Architecture (SOA). This work aims to identify gaps in current research and suggest fruitful areas for further study on regression testing of Web Services. We performed a broad manual and automatic search of publications from many journals and conference/workshop proceedings in selected electronic databases, published from 2000 to 2010. A total of 18 papers were selected as primary studies for answering our research questions. Our results indicate that the service integrator is the most important stakeholder in regression testing. Uncontrollability of service evolution, non-observability of service source code, high testing cost and concurrency issues are the most frequently cited difficulties in regression testing. To overcome these challenges, test case prioritization and selection are the most commonly proposed techniques, with thirty-one prioritization strategies and eight selection methods published, respectively. Finally, we observed that many studies have been neither theoretically proven nor experimentally analyzed. Although we found only 18 studies that address issues of regression testing in the context of Web Services, they represent a reasonable and representative sample for understanding the state of research in this important area of software engineering. The results of this study provide a body of knowledge that clearly illustrates the gaps to be filled in improving the quality of regression testing techniques for Web Services.

Analysis of aa-tRNA competition: An application of Prism in systems biology

25 November 2011, 11:00 - Location: C-29

Speakers:
Erik de Vink (Eindhoven University of Technology & CWI Amsterdam, The Netherlands)
Referent:
Maurice ter Beek

We present a formal analysis of ribosome kinetics using probabilistic model checking with the tool Prism. We compute different parameters of the model, like probabilities of translation errors and average insertion times per codon. The model predicts a strong correlation to the quotient of the concentrations of the so-called cognate and near-cognate tRNAs, in accord with experimental findings and other studies. Using piecewise analysis of the model, we are able to give an analytical explanation of this observation.
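
The qualitative correlation between error probability and the cognate/near-cognate quotient can be illustrated with a toy race between exponential arrivals. This is a drastic simplification of the ribosome model analysed with Prism in the talk, and the per-encounter error probability below is an invented number.

```python
def insertion_outcome(rate_cognate, rate_near, p_error_near=0.05):
    """Race between cognate and near-cognate tRNA arrivals, modelled as
    competing exponential delays. Returns (probability of a translation
    error, mean time to the first arrival)."""
    total = rate_cognate + rate_near
    p_near_first = rate_near / total        # winner of the exponential race
    p_error = p_near_first * p_error_near   # error only on a near-cognate win
    mean_first_arrival = 1.0 / total        # minimum of exponentials
    return p_error, mean_first_arrival
```

Raising the near-cognate rate relative to the cognate rate raises the error probability, which is the direction of the correlation the model predicts.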

Towards a Universal Information Distance for Structured Data

15 November 2011, 11:00 - Location: C-29

Speakers:
Richard Connor (University of Strathclyde - Scotland, United Kingdom)
Referent:
Fausto Rabitti

The similarity of objects is one of the most fundamental concepts in any collection of complex information; similarity, along with techniques for storing and indexing sets of values based on it, is a concept of ever increasing importance as inherently unordered data sets become ever more common. Examples of such datasets include collections of images, multimedia, and semi-structured data. There are however two, largely separate, classes of related research. On the one hand, techniques such as clustering and similarity search give general treatments over sets of data. Results are domain-independent, typically relying only on the existence of an anonymous distance metric over the set in question.

On the other hand, results in the domain of similarity measurement are often limited to the context of pairwise comparison over individual objects, and are not typically set in a wider context. Published algorithms are scattered over various demand-led subject areas, including for example bioinformatics, library sciences, and crime detection. Few, if any, of the published algorithms have the properties of a distance metric.

We have identified a distance metric, Ensemble Distance, which we believe can help to bridge this gap. Strongly grounded in information theory, Ensemble Distance is a vector-based distance metric which we believe can be used in the treatment of many classes of structured data. For any complex type where a useful characterisation exists in the form of an ensemble, we can produce a distance metric for that type.

Dynamic Variability in Families of Clouds

14 November 2011, 11:00 - Location: C-29

Speakers:
Jorge Fox (ERCIM fellow, FMT, ISTI-CNR)
Referent:
Maurice ter Beek

The use of a cloud service requires a company to trust the vendor to comply with given Service-Level Agreements (SLAs). This work proposes a framework for modelling and maintaining Quality of Service levels, expressed as SLAs, through the definition of families of clouds as software service line organisations. By systematically exploiting the commonalities within related cloud systems while managing variations for specific customers, we relate variability to SLAs, facilitating their monitoring, control and enforcement. Coupling the abstraction capabilities of product line approaches with SLAs to ensure Quality of Service in cloud computing is a novel approach, which offers the possibility of supporting dynamic negotiation of QoS parameters.

In the long run, our work aims to provide a generic framework for non-functional requirements enforcement in the Cloud.

Modelling and Analysing Variability in Product Families

07 November 2011, 15:00 - Location: C-29

Speakers:
Maurice ter Beek
Referent:
Maurice ter Beek

We propose an emerging solution technique and a support tool for modelling and analysing variability in product families. First we illustrate how to manage variability in a formal framework consisting of a Modal Transition System (MTS) and an associated set of formulae expressed in the variability and action-based branching-time temporal logic vaCTL. We then introduce the tool VMC, which accepts as input the specification of a product family in a CCS-like process algebra, possibly with an additional set of variability constraints. We show how VMC can be used to automatically generate all valid products of this family, defined in the same process algebra, as well as to visualise the family (products) as modal (labelled) transition systems. Finally, we show how VMC can efficiently verify functional properties expressed as formulae in vaCTL, over products and families alike, by means of its on-the-fly model-checking algorithm.

This is joint work with FMT members Patrizia Asirelli, Alessandro Fantechi, Stefania Gnesi, Franco Mazzanti, and Aldi Sulova.

Model-based Engineering of Distributed Systems with SPACE and Arctis

25 October 2011, 11:00 - Location: C-40

Speakers:
Peter Herrmann (Department of Telematics (ITEM) - Norwegian University of Science and Technology (NTNU))
Referent:
Maurice ter Beek

Model-based software engineering techniques are suited to rapid and cheap service development. Our approach SPACE and its tool set Arctis use collaborative models, each describing a possibly distributed sub-service instead of a single physical component. An advantage of this approach is its high potential for reuse, since many distributed systems in a certain domain are realized by a limited number of sub-services which, however, are composed in quite different ways. In this way, we have achieved an average reuse rate of about 70% in our developments.

As a modeling technique, we use UML2 collaborations and activities, enabling service specifications in an intelligible graphical notation. Our modeling tool Arctis transforms the collaborative models into Java code running, among others, on J2EE, Sun SPOTs and Google Android phones. We can further apply model checking to analyze the models for design errors. This verification is carried out in such a way that the user does not need any knowledge of the formalism, as the errors are visualized directly on the graphical descriptions.

My presentation will introduce SPACE and Arctis and discuss model transformation and model checker-based analysis.

Adoption of SysML by a Railway Signaling Manufacturer

17 October 2011, 11:00 - Location: C-29

Speakers:
Alessio Ferrari
Referent:
Maurice ter Beek

This talk reports the experience of a railway signaling manufacturer in introducing the SysML notation into its development process by means of the TOPCASED tool. Though the tool was later replaced by MagicDraw, the experience was useful for understanding the potential of the notation for requirements formalization and analysis, together with the advantages and drawbacks of using an open-source tool in an industrial setting.

Interactive Visualization of Large Terrains on Mobile Devices, with applications

14 October 2011, 09:15 - Location: I-53

Speakers:
José María Noguera (University of Jaén (Spain), Department of Computer Sciences, Graphics and Geomatics Research Group)
Referent:
Roberto Scopigno

The recent advent of low-energy GPUs has boosted the graphics capabilities of mobile devices such as smartphones, tablets and PDAs. However, this kind of device still has several restrictions and limitations, both in performance and in the storage of 3D data. In this talk, I will describe a client-server technique for remote adaptive streaming and rendering of huge 3D terrains on mobile devices. The technique has been designed to achieve interactive rendering performance on resource-limited mobile devices connected to a low-bandwidth wireless network such as GPRS or UMTS. This technology opens new promising research lines in e-tourism. The ubiquitous nature of mobile devices turns them into an attractive platform for assisting on-the-move tourists in obtaining 3D information according to their physical location. I will end this talk by describing some applications to mobile e-tourism that we are currently working on.

Modeling Dynamic System Adaptation with Paradigm

14 October 2011, 11:00 - Location: C-29

Speakers:
Erik de Vink (Eindhoven University of Technology & CWI Amsterdam, The Netherlands)
Referent:
Maurice ter Beek

Dynamic system evolution of cooperating software components can be modeled in the Paradigm language. Paradigm provides layers of granularity, allowing component interaction to be described at a higher level of abstraction. A special component called McPal gradually changes the interaction at the abstract level, guiding the dynamics of the system from the old AsIs situation into the new ToBe situation without the need for a system shutdown.

Paradigm models can be systematically translated into the process algebra mCRL2. An extensive toolset is available for mCRL2, including LTS visualization and symbolic model checking. Thus, exploiting this connection, tool-supported verification of system migration is now within reach. The approach is illustrated with a simple producer/consumer example. If time allows, a demo of the use of mCRL2 will be included in this talk.

Joint work with Luuk Groenewegen and Suzana Andova.

FuTS - A Uniform Framework for the Definition of Stochastic Process Languages

07 October 2011, 11:00 - Location: C-29

Speakers:
Diego Latella
Referent:
Maurice ter Beek

Process algebras are among the most successful formalisms for modeling concurrent systems and proving their properties. Due to the growing interest in the analysis of shared-resource systems, stochastic process algebras have received great attention. The main aim has been the integration of qualitative descriptions with quantitative (especially performance) ones in a single mathematical framework, building on the combination of Labeled Transition Systems and Continuous-Time Markov Chains. In this talk a unifying framework is introduced for providing the semantics of some of the most representative stochastic process languages, aiming at a systematic understanding of their similarities and differences. The unifying framework is based on so-called State-to-Function Labelled Transition Systems (FuTSs for short), state-transition structures where each transition is a triple of the form (s, α, P). The first and second components are the source state and the label of the transition, while the third component, P, is the continuation function, associating a value of a suitable type with each state s'. A non-zero value may represent the cost of reaching s' from s via α. The FuTS framework is used to model representative fragments of the major stochastic process calculi, where the values of the continuation function represent the rate of the exponential distribution characterizing the execution time of the performed action. In the talk, the FuTSs are first introduced, and then the semantics of a simple language used to directly describe (unlabeled) CTMCs is presented.
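
To make the (s, α, P) triples concrete, here is a small illustrative encoding with a CTMC-style reading of the continuation values as rates. The class name and its API are mine, not part of the formal definition given in the talk.

```python
class FuTS:
    """State-to-Function Labelled Transition System: transitions are
    triples (source, label, P), where the continuation function P maps
    every target state to a value -- here, an exponential rate
    (0 meaning "not reachable via this transition")."""

    def __init__(self):
        self.trans = []

    def add(self, source, label, continuation):
        self.trans.append((source, label, dict(continuation)))

    def exit_rate(self, s, label):
        """Total rate out of s via `label` (CTMC reading)."""
        return sum(sum(P.values()) for src, a, P in self.trans
                   if src == s and a == label)

    def jump_probability(self, s, label, t):
        """Probability that the exponential race out of s via `label`
        is won by target state t."""
        total = self.exit_rate(s, label)
        rate_to_t = sum(P.get(t, 0.0) for src, a, P in self.trans
                        if src == s and a == label)
        return rate_to_t / total if total > 0 else 0.0
```

For instance, a transition (s0, τ, {s1 ↦ 2.0, s2 ↦ 1.0}) gives an exit rate of 3.0 from s0 and a 2/3 probability of jumping to s1.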

Dealing with Uncertainty to Ensure Open System Dependability

05 October 2011, 11:00 - Location: C-40

Speakers:
Anatoliy Gorbenko (Department of Computer Systems and Networks (503), National Aerospace University "KhAI", Kharkiv, Ukraine)

Dealing with the uncertainty inherent in the very nature of service-oriented architectures is one of the main challenges in building dependable service architectures. This uncertainty needs to be treated as a threat, in a way similar to, and in addition to, the faults, errors and failures traditionally dealt with by the research community. The lack of sufficient evidence about the characteristics of the communication medium, the components and their possible dependencies makes it extremely difficult to achieve and predict (composite) service dependability, which can vary over a wide range in a very random manner. This uncertainty of services running over the Internet and clouds manifests itself through the unpredictable response times of messages and data transfers, the difficulty of diagnosing the root cause of failures, unknown common-mode failures, etc.

Timed Automata as Observers of Stochastic Processes

27 September 2011, 11:00 - Location: C-40

Speakers:
Prof. Dr. Ir. Joost-Pieter Katoen (RWTH Aachen University, Aachen, Germany)
Referent:
Diego Latella

In this talk, we will argue that (deterministic) timed automata are a natural specification formalism for practically relevant measures on stochastic processes. It will be discussed how DTA specifications can be checked on continuous-time Markov chains, an important class of stochastic processes used in the performance and dependability community, by using a product construction. We'll provide encouraging empirical results for checking single-clock DTA specifications and indicate how parallelization as well as bisimulation minimization can be naturally exploited in this setting. Finally, we shortly discuss the model checking of DTA specifications on a richer class of stochastic processes: continuous-time Markov decision processes (CTMDPs).

WhatsUp: a P2P recommender

20 September 2011, 10:00 - Location: C-29

Speakers:
Anne-Marie Kermarrec (INRIA, Rennes (France))
Referent:
Ranieri Baraglia

WhatsUp is an instant news system aimed at large-scale networks with no central bottleneck, single point of failure or censorship authority. Users express their opinions about the news items they receive by operating a like-dislike button. WhatsUp's collaborative filtering scheme leverages these opinions to dynamically maintain an implicit social network and ensures that users subsequently receive news items that are likely to match their interests. Users with similar tastes are clustered using a similarity metric reflecting long-standing and emerging (dis)interests. News items are disseminated through a heterogeneous epidemic protocol that (a) biases the choice of targets towards those with similar interests and (b) amplifies dissemination based on the interest in each actual news item. The push-based and asymmetric nature of the network created by WhatsUp provides natural support for limiting privacy breaches.
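
A minimal sketch of the kind of opinion-based peer selection such a scheme relies on. Cosine similarity over raw like (+1) / dislike (-1) vectors is my assumption; WhatsUp's actual metric additionally weights long-standing against emerging interests.

```python
import math

def similarity(u, v):
    """Cosine similarity between two opinion vectors, indexed by news-item
    id with values +1 (like) or -1 (dislike); unrated items count as 0."""
    dot = sum(u.get(i, 0) * v.get(i, 0) for i in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def nearest_peers(me, others, k=2):
    """Keep the k most similar users as implicit social-network neighbours."""
    ranked = sorted(others, key=lambda o: similarity(me, others[o]), reverse=True)
    return ranked[:k]
```

A user who liked and disliked the same items as me scores 1.0 and is kept as a neighbour; one with opposite opinions scores -1.0 and is dropped.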

The evaluation, through large-scale simulations, a ModelNet emulation on a cluster and a PlanetLab deployment on real traces collected both from Digg and from a real survey, shows that WhatsUp consistently outperforms centralized and decentralized alternatives in terms of accurate and complete delivery of relevant news.

Anne-Marie Kermarrec is a senior researcher at INRIA Rennes, where she leads the distributed systems group. She is the Principal Investigator of the ERC project Gossple and the chair of the ACM System Software Award committee. She received the Monpetit award from the French Academy of Science in 2011. Before joining INRIA in February 2004, she was a researcher with Microsoft Research in Cambridge from March 2000. Before that, she obtained her Ph.D. from the University of Rennes (France) in October 1996. She also spent one year (1996-1997) in the Computer Systems group of Vrije Universiteit in Amsterdam (The Netherlands), collaborating with Maarten van Steen and Andrew S. Tanenbaum. Her research interests are in peer-to-peer distributed systems, epidemic algorithms, social networks and Web science.

Multispectral Image Classification for Urban Studies

07 July 2011, 10:30 - Location: C-29

Speakers:
Oscar Viveros (University of Veracruz, Mexico)
Referent:
Ercan Kuruoglu

Urban growth generates interesting problems in emerging economies; these problems are generally due to the lack of land planning. Remote sensing has proved to be a good analysis technique, well suited to studying urban growth.

In this talk, an information extraction methodology will be presented that is capable of extracting urban information from satellite images (SPOT and ERS-1). The implementation and some results will also be presented. The methodology is based on image classification and urban texture analysis, focusing on pixel-based probabilistic tools and taking into account the difficulties in obtaining the satellite data.
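
As an illustration of a pixel-based probabilistic classifier, the sketch below fits one Gaussian per class and spectral band and labels each pixel by maximum likelihood. This naive-Bayes simplification is my own; the methodology presented in the talk is more elaborate and also exploits urban texture analysis.

```python
import math

def train(classes):
    """classes: {label: list of pixel vectors, one value per spectral band}.
    Fits a per-band mean and variance for each class."""
    models = {}
    for label, pixels in classes.items():
        bands = list(zip(*pixels))
        means = [sum(b) / len(b) for b in bands]
        variances = [max(sum((x - m) ** 2 for x in b) / len(b), 1e-6)
                     for b, m in zip(bands, means)]
        models[label] = (means, variances)
    return models

def classify(models, pixel):
    """Maximum-likelihood label for one multispectral pixel."""
    def log_lik(means, variances):
        return sum(-0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
                   for x, m, v in zip(pixel, means, variances))
    return max(models, key=lambda label: log_lik(*models[label]))
```

Trained on a handful of labelled pixels per land-cover class, the classifier assigns each new pixel to the class whose per-band Gaussians make it most likely.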

Web Service Contracts: Specification, Selection and Composition

07 July 2011, 14:00 - Location: C-29

Speakers:
Marco Comerio (Universita degli Studi Milano-Bicocca)
Referent:
Donatella Castelli

Web services promise universal interoperability and integration of services developed by independent providers to execute business processes by discovering and composing services distributed over the Internet. This means that a key factor in building complex and valuable processes among cooperating organizations is the efficiency of discovering appropriate Web services and composing them. The increasing availability of Web services that offer similar functionalities requires mechanisms that go beyond purely functional discovery and composition of Web services.

A promising solution towards the automatic enactment of valuable processes consists in enhancing Web service discovery and composition with the evaluation of semantic contracts that define non-functional properties (NFPs) and applicability conditions associated with a Web service. Nevertheless, there is currently a lack of tools and algorithms that fully support this solution, due to several open issues. First, existing languages lack the expressivity necessary for Web service contract specifications. Second, the lack of standard languages leads to heterogeneity in Web service contract specifications, raising interoperability issues. Third, purely semantic approaches to enhancing Web service discovery allow for detailed descriptions but present performance problems, due to current limitations of semantic tools when dealing with reasoning tasks. Fourth, Web service contract compatibility evaluation is not supported by existing composition tools when combining services from different providers. This talk addresses these open issues and proposes solutions in terms of models, algorithms and tools.

Sketch-based constraints for procedural generation of 3D scenes

01 July 2011, 10:00 - Location: C-29

Speakers:
Patrick Marais (Department of Computer Science at the University of Cape Town (UCT))
Referent:
Roberto Scopigno

I will present a brief overview of the research activity on Computer Graphics of the Computer Science Department at the University of Cape Town, as well as some information on my own research interests. I will then detail the work I am doing with a colleague on sketch-based constraints for procedural generation. Specifically, I will review our previous work on "Terrain sketching" (2009) and present the current state of our work on "City Sketching". Both these techniques are intended to allow easy generation of content for VFX and computer games.

Gamification - Video Games in Everyday Life

28 June 2011, 10:00 - Location: C-29

Speakers:
Fabio Viola (Independent consultant)
Referent:
Fabrizio Silvestri

Every year, 1.1 billion euros are spent on video games in Italy alone. Every month, 3 billion hours are invested worldwide in playing video games. Every day, over 20 million people log into the social game "Farmville". The advent of new platforms, from mobile application stores to Facebook to smart TVs, has broadened the user base, familiarizing over 500 million individuals with terms such as points, levels, badges, virtual goods, currency, and so on.

Over 70% of the top 2000 corporations/organizations will adopt at least one "gamified" product (Gartner) to solve internal problems or to create engagement and loyalty. With concrete examples, we will analyze dozens of business ideas and their monetization models.

A universal model for mobility and migration patterns

24 June 2011, 15:00 - Location: C-29

Speakers:
Filippo Simini (Northeastern University)
Referent:
Fosca Giannotti

The gravity law, widely used to predict the number of commuters between two geographic locations or administrative areas, has played a central role in the past half century in epidemic forecasting, urban planning and transportation research. Despite its widespread use, it relies on adjustable parameters that vary from region to region, and some of its predictions are demonstrably at odds with reality. We introduce a stochastic process capturing local mobility decisions that helps us analytically derive an alternative to the gravity model. The parameter-free radiation model we derive predicts mobility patterns that are statistically indistinguishable from the real mobility and transport patterns in a wide range of phenomena, from hourly mobility and long-term migration patterns to commodity flows and the volume of communication between different regions. Given its parameter-free nature, the model can be applied in areas where we lack previous flux measurements, offering the potential to significantly improve the predictive accuracy of most phenomena affected by transport and mobility patterns.
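The abstract does not state the model's closed form; assuming the published formula for the radiation model (with m_i and n_j the source and destination populations, s_ij the population within the circle of radius r_ij around the source, and T_i the total commuters leaving the source), a minimal sketch is:

```python
def radiation_flux(T_i, m_i, n_j, s_ij):
    # T_ij = T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij)),
    # with m_i, n_j the source/destination populations, s_ij the
    # population inside the circle of radius r_ij around the source
    # (source and destination excluded), T_i the commuters leaving i.
    return T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij))

# With no intervening population (s_ij = 0) the flux reduces to
# T_i * n_j / (m_i + n_j):
flux = radiation_flux(T_i=1000, m_i=5000, n_j=5000, s_ij=0)   # 500.0
```

Note the parameter-free character stressed in the abstract: every input is an observable population or commuter count, with nothing fitted per region.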

WURFL, The Wireless Universal Resource FiLe

23 June 2011, 11:30 - Location: C-29

Speakers:
Luca Passani (ScientiaMobile )
Referent:
Malko Bravi

"Device fragmentation" is the expression used to capture the fact that no two mobile phones or handhelds are alike: screen width, operating system, browser, bandwidth, and codecs and containers for video content are just a few of the "dimensions" along which the mobile devices on the market differ.

While developers now know how to handle the differences between traditional WWW browsers, creating pages that look the same on every PC screen, the same cannot be said for the "mobile web", the younger (but not for long) sibling of the classic web we have known for years. For mobile developers the situation is more complex. Few have yet found the mythical "silver bullet" (http://en.wikipedia.org/wiki/Silver_bullet#Idiomatic_usage) capable of solving the problem simply and effectively. Those who have owe it largely to Luca Passani and his work in this area over the past decade.

Luca, CTO of ScientiaMobile and creator of WURFL, The Wireless Universal Resource FiLe, illustrates the problem of device fragmentation and the solutions proposed over the years (CC/PP, UAProf, transcoders, etc.). After presenting the basic concepts of DDRs (Device Description Repositories), Luca will present WURFL and some case studies showing how WURFL has solved real-world problems, including at Telecom Italia and Facebook, both of which use WURFL.

FLIR camera demonstration

15 June 2011, 10:00 - Location: C-29

Speakers:
Rene van Noort (FLIR Systems B.V.)
Referent:
Claudio Schifani

Technical discussion (particularly the combination/integration of the eligible FLIR products).

Extensive demonstration/test of the camera (TAU), in combination with the different available lens dimensions, to find out what the best solution will be for your specific application; the difference in frame rates and a demonstration of practical accessories/features, software support, and the graphical user interface. Discussion of our (future) roadmap.

Efficient Diversification of Web Search Results

07 June 2011, 11:00 - Location: C-29

Speakers:
Fabrizio Silvestri
Referent:
Fabrizio Silvestri

In this paper we analyze the efficiency of various search results diversification methods. While the efficacy of diversification approaches has been deeply investigated in the past, response time and scalability issues have rarely been addressed. A unified framework for studying the performance and feasibility of result diversification solutions is thus proposed. First we define a new methodology for detecting when, and how, query results need to be diversified. To this purpose, we rely on the concept of “query refinement” to estimate the probability of a query being ambiguous. Then, relying on this novel ambiguity detection method, we deploy and compare on a standard test set three different diversification methods: IASelect, xQuAD, and OptSelect. While the first two are recent state-of-the-art proposals, the latter is an original algorithm introduced in this paper. We evaluate both the efficiency and the effectiveness of our approach against its competitors by using the standard TREC Web diversification track testbed. Results show that OptSelect runs two orders of magnitude faster than the other two state-of-the-art approaches while obtaining comparable diversification effectiveness. This seminar will present results from the paper "Efficient Diversification of Web Search Results" that will be presented at the upcoming VLDB conference in Seattle. This is joint work with Gabriele Capannini, Franco Maria Nardini, and Raffaele Perego.
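To give a feel for what a greedy diversification re-ranker does, here is a minimal Maximal-Marginal-Relevance-style sketch. It is a generic baseline for exposition only, not the OptSelect, IASelect or xQuAD algorithms discussed in the talk; the document names, relevance scores and the lambda trade-off are hypothetical:

```python
def mmr_diversify(results, sim, rel, k, lam=0.5):
    # Greedily pick the document maximizing relevance minus its
    # maximum similarity to anything already selected.
    selected, candidates = [], list(results)
    while candidates and len(selected) < k:
        def score(d):
            max_sim = max((sim(d, s) for s in selected), default=0.0)
            return lam * rel(d) - (1 - lam) * max_sim
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy usage: documents as term sets, Jaccard similarity, fixed relevance.
docs = {"d1": {"jaguar", "car"}, "d2": {"jaguar", "car", "dealer"},
        "d3": {"jaguar", "cat"}}
rel_scores = {"d1": 1.0, "d2": 0.9, "d3": 0.8}
jaccard = lambda a, b: len(docs[a] & docs[b]) / len(docs[a] | docs[b])
ranked = mmr_diversify(["d1", "d2", "d3"], jaccard, rel_scores.get, k=2)
```

Here the slightly less relevant but dissimilar "d3" is preferred over the near-duplicate "d2", which is precisely the trade-off diversification methods make.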

Off the beaten track: using content-based multimedia similarity search for learning

19 April 2011, 11:30 - Location: C-29

Speakers:
Suzanne Little (Knowledge Media Institute, The Open University, UK )
Referent:
Davide Moroni

Electronic media is a valuable and ever increasing resource for information seekers and learners. So much information can be contained in a picture, explained by a diagram or demonstrated in a video clip. But how can you find what you are looking for if you don't understand it well enough to describe it? What can you do if you are faced with a mountain of multimedia learning material? Are there other ways of exploring open educational resources than sticking to the well defined paths of text search and hyperlinks?

This talk will present recent work applying content-based multimedia similarity search to find related educational material by using images to query a collection. It will describe the use of local features in images, 'keypoints', identified using an approach called Scale-Invariant Feature Transforms (SIFT), and the implementation of a nearest neighbour based indexing system to find visually similar images. The resulting content-based media search tool (cbms) has been applied in the context of the SocialLearn project [1] to help learners find and explore connected web pages, presentations or documents. It is also the basis of the Spot&Search [2] iPhone application that can be used to explore artwork installations on the OU Walton Hall campus.
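The descriptor-voting idea behind such a system can be sketched as a brute-force nearest-neighbour search: each keypoint descriptor of the query image votes for the indexed image holding its closest match. The toy 2-D vectors below stand in for 128-dimensional SIFT descriptors, and `nearest_images` is an illustrative function, not the cbms implementation:

```python
import numpy as np

def nearest_images(query_desc, index_desc, index_ids, top=3):
    # Each query descriptor votes for the image that owns its
    # nearest indexed descriptor; images are ranked by vote count.
    votes = {}
    for q in query_desc:
        dists = np.linalg.norm(index_desc - q, axis=1)
        img = index_ids[int(np.argmin(dists))]
        votes[img] = votes.get(img, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)[:top]

# Toy index: two descriptors from imgA, one from imgB.
index_desc = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0]])
index_ids = ["imgA", "imgA", "imgB"]
queries = np.array([[0.1, 0.0], [0.9, 0.1], [9.8, 10.1]])
ranking = nearest_images(queries, index_desc, index_ids)
```

A real system replaces the linear scan with an approximate nearest-neighbour index so the search scales to millions of descriptors.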

[1] http://www.sociallearn.org

[2] http://spotandsearch.kmi.open.ac.uk

Similarity-based financial networks: tracking investors' trading behavior

12 April 2011, 15:00 - Location: C-29

Speakers:
Fabrizio Lillo (Scuola Normale Superiore di Pisa, Dipartimento di Fisica, Università di Palermo & Santa Fe Institute, U.S.A.)
Referent:
Fosca Giannotti

Networks are a powerful tool to explore the interaction structure of many complex systems. In the vast majority of studied networks, a link between two nodes identifies a direct interaction (e.g. trading activity, a credit relation, etc.) between the two nodes. Similarity based networks are a different class of networks where the presence of a link identifies a similarity between the two nodes. Here I consider two important applications of similarity based networks to financial markets. In the first one, the nodes are stocks traded in a financial market and a link identifies a similarity in price dynamics. This network makes it possible to identify clusters of stocks with similar price comovement, and it is therefore useful in monitoring the whole market dynamics and in portfolio optimization. In the second case, I consider similarity based networks of investors in the Finnish stock market. A suitably designed methodology makes it possible to identify, in an unsupervised way, classes of investors with a common investment behavior. We study the investment behavior of these identified classes and we consider their interaction. In both cases a special emphasis is given to the behavior of similarity based financial networks in the presence of extreme market movements.
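A minimal sketch of the first kind of network, assuming a plain correlation threshold as the similarity criterion (actual studies of stock networks typically use more refined filtering, such as minimum spanning trees or statistically validated links):

```python
import numpy as np

def similarity_network(returns, threshold=0.6):
    # Nodes are assets; link two assets whose return correlation
    # exceeds the threshold. returns is a (T x N) matrix.
    corr = np.corrcoef(returns.T)     # N x N correlation matrix
    n = corr.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if corr[i, j] > threshold]

# Toy data: assets 0 and 1 comove perfectly, asset 2 is unrelated.
x = np.arange(200.0)
r0, r1, r2 = np.sin(x), 2.0 * np.sin(x), np.cos(x)
edges = similarity_network(np.column_stack([r0, r1, r2]))
```

Note the link here encodes similarity of dynamics, not a direct interaction, which is exactly the distinction the abstract draws.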

The Mattoni/Matrice project: systematizing the secondary use of administrative data from the health system.

30 March 2011, 11:00 - Location: C-29

Speakers:
Rosa Gini (ARS - Agenzia Regionale della Sanità della Toscana)
Referent:
Massimo Coppola

The seminar presents the Mattoni/Matrice project of the Agenzia Nazionale per i Servizi Sanitari Regionali (Agenas), within the framework of the Mattoni programme for building the information system of the national health service (Nuovo Sistema Informativo Sanitario, NSIS).

The Matrice project covers the design, testing, validation and application to selected case studies of open source software for integrating the administrative health data flows in order to produce secondary flows. The case studies will focus on the management of some chronic diseases.

The seminar will describe the characteristics of the administrative data available within the Italian health system, the structure of the project, and the role of the partners.

Towards an alternative to Gillespie's Algorithm for simulating biochemical reactions

15 March 2011, 11:00 - Location: C-29

Speakers:
Davide Chiarugi (Dipartimento di Scienze Matematiche e Informatiche, Università di Siena)
Referent:
Ercan Kuruoglu

Gillespie's Stochastic Simulation Algorithm (SSA) is probably the most widely used method for the stochastic simulation of biochemical reactions. The results of some recent wet-lab experiments on single enzyme molecules suggest that the theoretical framework on which the SSA is grounded may not be adequate in the case of enzymatically catalysed reactions. Some proposals aiming at finding new methods for the stochastic simulation of biochemical reactions will be presented.
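For reference, the SSA that the talk takes as its starting point fits in a few lines. This is a minimal textbook sketch of Gillespie's direct method, not the alternative approach proposed in the seminar; the decay example and its rate constant are illustrative:

```python
import random

def gillespie(propensities, update, state, t_max, seed=1):
    # Direct-method SSA: sample the waiting time from Exp(a0), then
    # pick reaction j with probability a_j / a0.
    random.seed(seed)
    t, trajectory = 0.0, [(0.0, tuple(state))]
    while t < t_max:
        a = [p(state) for p in propensities]
        a0 = sum(a)
        if a0 == 0:                      # no reaction can fire: absorbed
            break
        t += random.expovariate(a0)      # time to next reaction
        r, j, acc = random.uniform(0, a0), 0, a[0]
        while r > acc:                   # roulette-wheel reaction choice
            j += 1
            acc += a[j]
        state = update[j](state)
        trajectory.append((t, tuple(state)))
    return trajectory

# Toy usage: irreversible decay A -> 0 at rate k per molecule,
# starting from 50 molecules and running until absorption.
k = 1.0
traj = gillespie(propensities=[lambda s: k * s[0]],
                 update=[lambda s: [s[0] - 1]],
                 state=[50], t_max=float("inf"))
```

Each reaction channel is just a propensity function plus a state update, which is why the algorithm generalizes directly to enzyme kinetics schemes.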

Looking at data through the eyes of a designer

01 March 2011, 15:00 - Location: C-29

Speakers:
Isabel Meirelles (Northeastern University, Boston, MA, USA )
Referent:
Fosca Giannotti

New technologies have increased the possibilities of communicative expression and expanded the processes and procedures of accessing, organizing and communicating information. If in the past it was possible to manually structure and visualize data, nowadays computational methods are intrinsic to how we deal with very large data sets, whether in the design of exploratory analytical tools or for communication purposes. The term Big Data well expresses the state of the field and the challenges ahead of us. A central question is how to prepare not only future generations, but ourselves included, to deal with the data proliferation: from learning how to structure and analyze data to developing skills and methods for effectively visualizing information. It is critical to foster understanding of the relationships between visual thinking, visual representation, and visual communication. How can we promote informed criteria to support the design process of data visualization?

Cloudy Weather for P2P, with a Chance of Gossip

21 February 2011, 12:00 - Location: C-29

Speakers:
Alberto Montresor (Università di Trento, Dipartimento di Ingegneria e Scienza dell'Informazione )
Referent:
Massimo Coppola

Peer-to-peer (P2P) and cloud computing, two of the Internet trends of the last decade, hold similar promises: the (virtually) infinite availability of computing and storage resources. But there are important differences: the cloud provides highly-available resources, but at a cost; P2P resources are free, but their availability is shaky. Several academic and commercial projects have explored the possibility of mixing the two, creating a large number of peer-assisted applications, particularly in the field of content distribution, where the cloud provides a highly-available and persistent service, while P2P resources are exploited for free whenever possible to reduce the economic cost. While executing active servers on elastic computing facilities like Amazon EC2 and pairing them with user-provided peers is definitely one way to go, this talk proposes a novel approach that further reduces the economic cost. Here, a passive storage service like Amazon S3 is exploited not only to distribute content to clients, but also to build and manage the P2P network linking them. An effort is made to guarantee that the read/write load imposed on the storage remains constant, regardless of the number of peers/clients. These two choices allow us to keep the monetary cost of the cloud always under control, whether there is just one peer or a million of them. We show the feasibility of our approach by discussing two case studies for content distribution.

Transporting object partitioning algorithms into concept theory

15 February 2011, 15:00 - Location: C-29


Speakers:
Elvira Locuratolo
Referent:
Elvira Locuratolo

Concept theory emphasizes the distinction between the intensional level and the extensional level of a concept. The intensional level, also called the concept level, is the level of human thought, while the extensional level, also called the set-theoretical level, is the level of computer science. Concepts are modeled at the intensional level; classes/sets of objects at the extensional level. Since a set of objects falls under many different concepts, there is a directed link from the intensional aspects of concepts to the extensional aspects, and not vice versa. Algorithms for mapping graphs of classes supported by semantic models into graphs of classes supported by object systems, hereafter called partitioning, have been defined at the set-theoretical level for use in computer science and information engineering. These algorithms cannot be applied at the intensional level, since that level concerns concepts rather than classes; however, they can be transported to the concept level by means of an appropriate methodology. This methodology consists in identifying suitable restrictions of the concept level that establish a correct correspondence between the restricted concept level and the set-theoretical level of the partitioning. It provides the tools to:

  • define initial concept structures;
  • build generalization hierarchies of concepts corresponding to partitioning algorithms for graphs of semantic classes;
  • provide a network of concepts that comprises:
  • all and only the concepts related to a general concept and to basic concepts, and all and only the intensional inclusion relations between the constructed concepts;
  • establish the link between the network of concepts and the object classes;
  • define a graph representing the object classes;
  • transform the graph of object classes into a graph representing the semantic classes.

The idea of transporting partitioning into concept theory is important for the following reasons:

  • a consolidated formal background is available at the concept level, but algorithms for building concept structures related to object classes supported by computer systems are missing;
  • the proposed approach makes it possible to relate each class of objects with all and only the concepts that can be constructed starting from a universe of discourse and from basic concepts.

Possible future developments of the present research are outlined (seminar slides).

Identifying Task-based Sessions in Search Engine Query Logs

01 February 2011, 11:00 - Location: C-29

Speakers:
Gabriele Tolomei
Referent:
Fabrizio Silvestri

The research challenge addressed in this paper is to devise effective techniques for identifying task-based sessions, i.e. sets of possibly non contiguous queries issued by the user of a Web Search Engine for carrying out a given task. In order to evaluate and compare different approaches, we built, by means of a manual labeling process, a ground-truth where the queries of a given query log have been grouped in tasks. Our analysis of this ground-truth shows that users tend to perform more than one task at the same time, since about 75% of the submitted queries involve a multi-tasking activity. We formally define the Task-based Session Discovery Problem (TSDP) as the problem of best approximating the manually annotated tasks, and we propose several variants of well known clustering algorithms, as well as a novel efficient heuristic algorithm, specifically tuned for solving the TSDP. These algorithms also exploit the collaborative knowledge collected by Wiktionary and Wikipedia for detecting query pairs that are not similar from a lexical content point of view, but actually semantically related. The proposed algorithms have been evaluated on the above ground-truth, and are shown to perform better than state-of-the-art approaches, because they effectively take into account the multi-tasking behavior of users.
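The flavour of task-based session detection can be conveyed with a tiny greedy clustering sketch based on term overlap alone. It is an illustrative baseline, not the tuned TSDP heuristic of the paper, which also exploits Wiktionary and Wikipedia to catch query pairs that are semantically related despite sharing no terms; the queries and threshold below are hypothetical:

```python
def task_sessions(queries, threshold=0.3):
    # A query joins the first session holding a sufficiently similar
    # query (term-set Jaccard); otherwise it opens a new session.
    def jaccard(a, b):
        sa, sb = set(a.split()), set(b.split())
        return len(sa & sb) / len(sa | sb)
    sessions = []
    for q in queries:
        for s in sessions:
            if any(jaccard(q, other) >= threshold for other in s):
                s.append(q)
                break
        else:
            sessions.append([q])
    return sessions

sessions = task_sessions(["cheap flights rome",
                          "flights rome september",
                          "pasta carbonara recipe"])
```

Because sessions are built by similarity rather than by time, interleaved queries from different tasks end up in different sessions, matching the multi-tasking behavior the abstract describes.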

The Distributed Media Plays System

02 December 2010, 11:00 - Location: C-29

Speakers:
Ozgur Tamer (Norwegian University of Science and Technology )
Referent:
Roberto Scopigno

The Distributed Multimedia Plays Systems Architecture (DMP) provides 3D multiview video and 3D sound collaboration between performers over packet networks. To guarantee an end-to-end delay of less than 10-20 milliseconds and obtain high network resource utilization, the perceived quality of audio-visual content is allowed to vary with the traffic in the network. Several parameters are included in the quality concept and can be controlled adaptively. To approach the natural level of human perception, the quality has to be increased to levels that temporarily require data rates of gigabits per second between users. Actual DMP applications from the arts are jazz sessions, song lessons, and distributed opera. Other applications are in forthcoming generations of TV (MHP extended with DMP), games, education, and near-natural virtual meetings.

A Novel Acoustic Indoor Localization System Employing

02 December 2010, 10:30 - Location: C-40

Speakers:
Mustafa Althinkaya ( Izmir Institute of Technology )
Referent:
Ercan Kuruoglu

Outdoor location systems are nowadays used extensively in all fields of human life, from military applications to daily life. However, these systems cannot operate indoors. This presentation considers an indoor location system that aims to locate an object to an accuracy of about 2 cm using ordinary, inexpensive off-the-shelf devices, and that was designed and tested in an office room to evaluate its performance.

In order to compute the distance between the transducers (speakers) and object to be localized (microphone), Time-of-Arrival measurements of acoustic signals consisting of Binary Phase Shift Keying modulated Gold sequences are performed. This DS-CDMA scheme assures accurate distance measurements and provides immunity to noise and interference.
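The Time-of-Arrival step can be sketched as a peak search on the cross-correlation between the received signal and the known spreading code. A fixed pseudo-random +/-1 sequence stands in here for the BPSK-modulated Gold sequences of the actual system, and the delay, noise level and code length are illustrative:

```python
import numpy as np

def toa_samples(received, code):
    # The lag of the cross-correlation peak is the arrival time
    # in samples; dividing by the sampling rate gives seconds.
    corr = np.correlate(received, code, mode="valid")
    return int(np.argmax(corr))

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=127)          # stand-in spreading code
delay = 40                                        # true arrival lag (samples)
rx = np.concatenate([np.zeros(delay), code, np.zeros(20)])
rx = rx + 0.3 * rng.standard_normal(rx.size)      # additive channel noise
lag = toa_samples(rx, code)
# distance = lag / fs * c, for sampling rate fs and speed of sound c
```

The sharp autocorrelation of long pseudo-random codes is what makes the peak survive noise and interference, which is the point of the DS-CDMA scheme described above.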

Two methods have been proposed for location estimation. The first method takes the average of four location estimates obtained by trilateration technique. In the second method, only a single robust position estimate is obtained using three distances while the least reliable fourth distance measurement is not taken into account.
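The trilateration underlying both methods can be sketched as a linear least-squares solve: subtracting the first sphere equation from the others removes the quadratic terms. The anchor (speaker) positions and distances below are hypothetical, and the sketch uses a 2D setup for brevity:

```python
import numpy as np

def trilaterate(anchors, dists):
    # From |x - a_i|^2 = d_i^2, subtracting the i = 0 equation gives
    # the linear system 2 (a_i - a_0) . x = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2.
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Hypothetical setup: three speakers, microphone at (1, 2).
est = trilaterate([[0, 0], [4, 0], [0, 4]],
                  [5 ** 0.5, 13 ** 0.5, 5 ** 0.5])
```

With four distance measurements the same routine is simply called on subsets of three anchors, which is how the averaging and the "drop the least reliable distance" strategies described above can both be realized.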

The performance of the system is evaluated at positions at two height levels, using system parameters determined by preliminary experiments. The precision distributions over the work area and the precision versus accuracy plots are obtained. The established system provides location estimates with better than 2 cm accuracy and 99% precision.

The computerization process of the Istituto Ambiente Marino Costiero

09 November 2010, 11:00 - Location: C-29

Referent:
Guglielmo Cresci

Within a complex and articulated project of innovation and knowledge transfer for the productive sector of Western Sicily (ICT-E3), IAMC delegated to ISTI the design and implementation of a set of automation initiatives supporting its scientific and management activities. In this context ISTI has designed and implemented:

  • the public portal of the Institute;
  • the portal's Intranet;
  • the management information system integrated into the Intranet;
  • the decision support system for scientific activities.

The system, built with open source software, is modular and scalable, and offers solutions of potential interest to other CNR Institutes. The seminar summarizes the main features of the implementation, with possible demonstrations of some functionalities.

Feature curves in range images with application to archaeology

22 October 2010, 11:00 - Location: C-29

Speakers:
Ilan Shimshoni (Information Systems dept, University of Haifa)
Referent:
Matteo Dellepiane

Archaeological artifacts are of vital importance in archaeological research. At present there is a drive to scan these artifacts and store the scanned objects on the internet, making them accessible to the whole research community. We propose a new approach for automatic processing of these 3D models. Given such an artifact, our first goal is to find edges, termed relief edges, on the surface. The 3D curves that we define are the 3D equivalent of Canny edges in images. These edges can be used to illustrate the object, replacing the human illustrator or at least helping them produce accurate illustrations efficiently using an interactive computerized tool.

Based on these curves we have defined a new direction field on surfaces (a normalized vector field), termed the prominent field. We demonstrate the applicability of the prominent field in two applications. The first is surface enhancement of archaeological artifacts, which helps enhance eroded features and remove scanning noise. The second is artificial coloring that can replace manual artifact illustration in archaeological reports.

Comparing fluid and mean field approximation of Markov Chains

19 October 2010, 15:00 - Location: C-29

Speakers:
Luca Bortolussi (Università di Trieste)
Referent:
Mieke Massink

We will consider two apparently different techniques for deterministic approximation of Markov Chains: fluid-flow approximation and mean field analysis. Fluid limits, or fluid-flow approximation, has received a lot of attention in recent years as a technique to approximate the evolution of stochastic process algebras, when the numerosity of all components is large. Mean field techniques, on the other hand, are often used to provide a deterministic approximation of the collective behavior of systems with many identical copies of the same object, both in discrete and continuous time. We will comment on similarities and differences, discussing if the fluid limit and the mean field approximation for a PEPA model (without synchronization) are the same, and if and how the theory of mean field approximation can be useful for the analysis of Stochastic Process Algebra models.
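The idea of a fluid approximation can be illustrated on a model far simpler than a PEPA specification: a pure death process in which each of x individuals dies independently at rate mu. For this linear model the fluid ODE coincides with the exact expectation of the Markov chain, which makes it a convenient sanity check; the example and its parameters are illustrative, not taken from the talk:

```python
import math

def fluid_trajectory(x0, mu, t_max, dt=1e-3):
    # Euler integration of the fluid/mean-field ODE dx/dt = -mu * x,
    # the deterministic limit of the death-process CTMC.
    x, t = float(x0), 0.0
    while t < t_max:
        x += dt * (-mu * x)
        t += dt
    return x

approx = fluid_trajectory(x0=1000, mu=0.5, t_max=2.0)
exact = 1000 * math.exp(-0.5 * 2.0)   # exact mean E[N(t)] = x0 * exp(-mu t)
```

For models with synchronization or nonlinear rates the ODE only approximates the mean, and quantifying that gap is precisely where the comparison between fluid limits and mean field techniques becomes interesting.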

Construction of Concepts and Decomposition of Objects

19 October 2010, 11:00 - Location: C-29

Speakers:
Elvira Locuratolo, Jari Palomaki (Tampere University of Technology/Pori)
Referent:
Elvira Locuratolo

The seminar aims to present the research results achieved in transporting to concept theory the algorithms for mapping graphs of classes supported by semantic data models to graphs of classes supported by object systems. Concept theory comprises the distinction between the intensional and extensional aspects of concepts. The former refers to the information content of concepts, whereas the latter refers to the sets of objects which fall under the concepts. These two aspects of concepts define two different levels of representation, called the intensional/concept level and the extensional/set-theoretical level, respectively. It is correct to go from the concept level to the set-theoretical level, but not vice versa.

The partitioning algorithms employed in computer science and information engineering have been defined at the set-theoretical level.

These algorithms cannot be applied to the concept level; however, they can be transported to this level. Indeed, our approach, which is based on the construction of concepts and on the decomposition of objects, allows:

  • Defining the process of concept construction.
  • Providing a network of concepts enclosing:
    • All and only the concepts related with a Universe of Discourse and basic concepts
    • All and only the intensional inclusion relations.
  • Recognizing the concepts that can be mapped to the extensional level.
  • Organizing the set of classes at the extensional level into a graph of object classes.
  • Transforming it into a graph of semantic classes.

Differently from other approaches, which can be found in information modeling and knowledge bases, and in formal context analysis, our approach makes it possible to relate each extension with all and only the concepts which can be constructed starting from a Universe of Discourse and basic concepts. Possible application fields of our approach are outlined.

Discrete Physics and Computation

18 October 2010, 11:00 - Location: C-29

Speakers:
Alexander Lamb (Freelance)
Referent:
Tommaso Bolognesi

As more and more theories in particle physics explore the possibility of discrete space-time, the gap between physics and computer science narrows. Can computer science provide useful tools and algorithms to help facilitate the search for the ‘theory of everything’? This talk argues the case, and shows how simple algorithms can be used to reproduce many critically important symmetries observed in nature.

Starting with a simple demonstration of linear motion and rotational invariance, this talk will cover work that duplicates key physical phenomena, such as special relativity and quantum interference. The current project here at CNR on models for discrete space-time will also be outlined and preliminary results shared. This talk requires no formal training in physics, but is suitable for anyone interested in taking their knowledge of computation and using it to explore open topics such as quantum gravity.

Simulating Aging, Weathering and Surface Details

13 October 2010, 10:00 - Location: I-53

Speakers:
Carles Bosch (REVES/INRIA Sophia Antipolis, France)
Referent:
Roberto Scopigno

Accurately modeling material appearance and surface details is essential for realistic image synthesis. In this talk, I will describe different solutions we have been developing in this direction. The first part will be centered on simulating aging and weathering phenomena, with a special focus on scratches and related features, and on the use of images to guide weathering simulation in general. We will then overview some approaches for procedural modeling of surface details, where our recent work on Gabor Noise-based detail representation will be introduced. Future directions and other related projects will also be described.

Semantic and abstraction content of art images

29 September 2010, 11:00 - Location: C-29

Speakers:
Peter Stanchev (Kettering University, Flint, MI, USA)
Referent:
Fausto Rabitti

A key characteristic of visual arts’ objects is that they are created by a cognitive process. The artwork is not merely an objective presentation, but also communicates one or more subjective messages, "delivered" from the creator to the observer. Every touch to the artwork helps to build bridges between cultures and times.

Since its first publication in 1962, Janson's History of Art has been one of the most valuable sources of information spanning the spectrum of Western art history from the Stone Age to the 20th century. It became a prominent introduction to art for children and a reference tool for adults trying to recall the identity of some familiar image. The colorful design and numerous illustrations of exceptional quality are far from being a mere means of providing dry information; they also contribute to a deep emotional fulfillment.

Now that online search engines have whetted web surfers' appetites for context and information, a host of digital databases and repositories offer easy access to digital items, presenting the colorfulness of art history together with connected metadata. This additional information ranges from purely technical details about how the artifacts were created to personal details from the lives of their creators, helping observers understand the message embedded in the masterpieces.

The digital repositories of cultural heritage objects can employ techniques similar to those of generic repositories in order to provide standard functionality for searching objects. Cultural heritage objects are rich in content, describing events, monuments, places and people, and they are distributed across different locations. Users can formulate queries using different modalities such as free text, similarity matching, or metadata. The specifics of the objects raise some additional tasks that are interesting to examine. In the area of art painting retrieval, the sensory, semantic and aesthetic gaps are a very real problem. In recent years, the problems of semantic, syntactic and profile interoperability and of constructing reference layers have become pressing; an additional area to explore is linked data, which allows objects in the cultural heritage domain to be contextualized.

This seminar presents a succinct overview of the development of repositories of digital art images and then highlights the specialized search methods in this domain. Compared to other cultural heritage materials, improving the accessibility of digitized art images requires a transition from approaches involving only textual metadata towards "hybrid" approaches that use content-based image retrieval jointly with the metadata.

The seminar is organized in the following way. First, a brief overview of the main directions in the presentation and analysis of artworks is given. A taxonomy of art image content is proposed. Our approach for extracting attributes concerning different aspects of image content, in order to obtain discriminating profiles describing the abstraction specifics of artists, schools or movements, is presented. The implementation and the experiments conducted on a dataset containing artworks of Western and contemporary Israeli artists are described. Finally, some conclusions and future research directions are highlighted.

Learning from Pairwise Constraints by Similarity Neural Networks

21 July 2010, 11:00 - Location: C-29

Speakers:
Lorenzo Sarti (Dipartimento di Ingegneria dell'Informazione - Università di Siena)
Referent:
Fosca Giannotti

We present Similarity Neural Networks (SNNs), a neural network model able to learn a similarity measure for pairs of patterns, exploiting a binary supervision on their similarity/dissimilarity relationships. Pairwise relationships, also referred to as pairwise constraints, generally contain less information than class labels, but, in some contexts, are easier to obtain from human supervisors.

The SNN architecture guarantees the basic properties of a similarity measure (symmetry and non negativity) and, differently from the majority of the metric learning algorithms proposed so far, it can model non-linear relationships among data and can deal with non-transitivity of the similarity criterion. The theoretical properties of SNNs and their application to Semi-Supervised Clustering will be presented.

In particular, we introduce a novel technique that allows the clustering algorithm to compute the optimal representatives of a data partition by means of Backpropagation on the input layer, biased by an L2 norm regularizer. An extensive set of experimental results will be reported, in order to compare SNNs with state-of-the-art similarity learning algorithms. Both on benchmarks and on real world data, SNNs and SNN-based clustering show improved performance, assessing the advantage of the proposed neural network approach to similarity measure learning.
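How an architecture can guarantee symmetry and non-negativity by construction can be sketched as follows. Averaging the two input orders is one simple device for symmetry; the actual SNN enforces these properties through its own architectural scheme, and the random weights here are purely illustrative, standing in for weights learned from pairwise constraints:

```python
import numpy as np

def snn_similarity(x, y, W1, w2):
    # Averaging the two input orders makes s(x, y) = s(y, x) exactly;
    # the sigmoid output keeps the score in (0, 1), hence non-negative.
    def forward(a, b):
        h = np.tanh(W1 @ np.concatenate([a, b]))   # hidden layer
        return 1.0 / (1.0 + np.exp(-w2 @ h))       # sigmoid output
    return 0.5 * (forward(x, y) + forward(y, x))

rng = np.random.default_rng(0)
W1, w2 = rng.normal(size=(8, 6)), rng.normal(size=8)
x, y = rng.normal(size=3), rng.normal(size=3)
s = snn_similarity(x, y, W1, w2)
```

Because the constraints are baked into the architecture rather than the loss, any weight setting (and hence any point during training) yields a valid similarity measure.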

Anisotropic Quadrangulation

16 July 2010, 11:00 - Location: I-06

Speakers:
Denis Zorin (New York University)
Referent:
Nico Pietroni

Quadrangulation methods aim to approximate surfaces by semi-regular meshes with as few extraordinary vertices as possible. A number of techniques employ harmonic parameterization to control angle distortion, or Poisson-based techniques to align with prescribed features. However, both techniques create near-isotropic quads. To keep the approximation error low, various approaches have been suggested to align the isotropic quads with principal curvature directions.

A different promising way is to allow for anisotropic elements, which are well-known to have significantly better approximation quality. In this work we present a simple and efficient technique to add curvature-dependent anisotropy to harmonic and Poisson parameterization and improve the approximation error of the quadrangulations. We use a metric derived from the shape operator, which results in a more uniform error distribution, decreasing the error near features.
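One way to picture a curvature-derived metric is to weight each principal direction by the magnitude of its principal curvature: distances then grow across high-curvature directions, so isotropic elements measured in the metric become anisotropic elements on the surface, stretched along the flat direction. A toy sketch of such a metric (the scaling is hypothetical; the talk's construction from the shape operator differs in detail):

```python
import numpy as np

# Toy curvature-adapted metric at a surface point: weight each
# orthonormal principal direction d1, d2 by the magnitude of its
# principal curvature k1, k2, plus a small eps*I to keep the metric
# positive definite on flat regions.
# (Illustrative scaling, not the paper's exact derivation.)

def anisotropic_metric(k1, k2, d1, d2, eps=1e-3):
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    M = abs(k1) * np.outer(d1, d1) + abs(k2) * np.outer(d2, d2)
    return M + eps * np.eye(2)

# Cylinder-like point: curved across d1 (k1 = 1), flat along d2 (k2 = 0).
M = anisotropic_metric(1.0, 0.0, [1.0, 0.0], [0.0, 1.0])

step_across = np.array([0.1, 0.0])   # step across the curved direction
step_along  = np.array([0.0, 0.1])   # step along the flat direction
# The same Euclidean step is "longer" in the metric when taken across
# the curvature, so a mesher equalizing metric lengths stretches quads
# along the flat direction.
print(step_across @ M @ step_across > step_along @ M @ step_along)  # True
```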

The evolving landscape of the open-source GIS world

14 July 2010, 15:00 - Location: C-29

Speakers:
Claudio Schifani

In recent years there has been growing interest from the scientific community in the development of free and open-source GIS solutions. This process has produced a scenario in which a GIS user can choose among multiple GIS environments to perform analyses in both the vector and the raster world. In parallel, Web technologies are spreading that provide user interfaces for publishing geographic data according to the standards defined by the Open Geospatial Consortium, as well as for delivering spatial analysis services based on the Web Processing Service standard. The talk will present an overview of the evolution of the GIS solutions available today, in both desktop and web environments, in relation to the type of user and to the vector and raster analysis capabilities.

Efficient and accurate algorithms for deforming surfaces

12 July 2010, 11:00 - Location: C-29

Speakers:
Denis Zorin , Hijung Valentina Shin (Massachusetts Institute of Technology)
Referent:
Nico Pietroni

Many engineering and computer graphics applications require computing surface deformations minimizing an energy or solving an equation of motion. This type of deformation is used to model free-form surfaces in computer-aided design systems, to create animated characters, to simulate cloth, or to analyze stresses in a car body.

Complex surfaces are commonly represented by meshes, that is, piecewise-linear functions which cannot be differentiated directly. At the same time, the equations that we need to solve often involve derivatives of order four or higher. Approximating high-order derivatives on meshes with sufficient accuracy is difficult, and often requires costly computations. These computations may be prohibitively expensive in the context of interactive modeling and simulation. In many cases, cheap but inaccurate approximations are available, resulting in faster algorithms but less reliable results.

In this talk, I will discuss how mesh deformations can be computed efficiently while maintaining accuracy, and demonstrate several applications in geometric modeling and animation.

I will review several complementary approaches that we have explored, in particular: taking advantage of geometric relations to simplify the equations we need to solve, decomposing higher-order problems into several low-order problems, and representing the solution of a general problem as a combination of solutions of special-case simpler problems.
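A classical instance of "decomposing higher-order problems into several low-order problems" is the biharmonic equation: in 1-D, u'''' = f with simply-supported boundary conditions (u = u'' = 0 at both ends) splits exactly into two Poisson solves, v'' = f followed by u'' = v, each needing only a standard second-difference matrix. A minimal sketch of this idea, as a 1-D analogue of the surface setting (not the talk's actual formulation):

```python
import numpy as np

# Solve u'''' = f on (0, 1) with u = u'' = 0 at the ends by splitting
# it into two second-order (Poisson) solves: v'' = f, then u'' = v.
# This avoids ever discretizing a fourth derivative directly.

n = 99                      # interior grid points
h = 1.0 / (n + 1)
# tridiagonal second-difference operator (discrete d^2/dx^2)
L = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

x = np.linspace(h, 1 - h, n)
f = np.pi**4 * np.sin(np.pi * x)   # chosen so the exact solution is sin(pi x)

v = np.linalg.solve(L, f)          # first low-order solve:  v'' = f
u = np.linalg.solve(L, v)          # second low-order solve: u'' = v

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)                          # small second-order discretization error
```

Each solve involves only a sparse, well-conditioned second-order operator, which is exactly why such splittings are attractive for interactive deformation.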

An approach to Content-Based Image Retrieval based on the Lucene search engine library

06 July 2010, 11:00 - Location: C-29

Speakers:
Claudio Gennaro
Referent:
Claudio Gennaro

Content-based image retrieval is becoming a popular way for searching the web as the amount of available multimedia data increases. However, the cost of developing from scratch a robust and reliable system with content-based image retrieval facilities is quite prohibitive.

In the talk I will present an extremely simple algorithm that converts low-level image features (such as colors and textures) into a textual form, which allows us to use Lucene's off-the-shelf indexing and searching abilities with little implementation effort. In this way, we are able to set up a robust information retrieval system that combines full-text search with content-based image retrieval capabilities. This idea will be demonstrated by content-based retrieval over a dataset of 106 million images from CoPhIR, indexed by five MPEG-7 descriptors.
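The core trick can be pictured as quantizing each component of a feature vector into a discrete token, so that a text engine's inverted index can match images by shared tokens just as it matches documents by shared terms. A toy sketch, with a plain dictionary standing in for Lucene's index (the token scheme is hypothetical; the talk's actual encoding may differ):

```python
# Quantize each feature component in [0, 1) into a token "f<dim>_<bin>",
# index the tokens in a toy inverted index, and rank by the number of
# shared tokens -- the way a text engine ranks by overlapping terms.
# (Illustrative stand-in for Lucene, not the talk's exact algorithm.)

def to_tokens(features, levels=4):
    return {f"f{i}_{min(int(v * levels), levels - 1)}"
            for i, v in enumerate(features)}

def build_index(corpus):
    index = {}                          # token -> set of image ids
    for img_id, feats in corpus.items():
        for tok in to_tokens(feats):
            index.setdefault(tok, set()).add(img_id)
    return index

def search(index, query_feats):
    scores = {}                         # image id -> number of shared tokens
    for tok in to_tokens(query_feats):
        for img_id in index.get(tok, ()):
            scores[img_id] = scores.get(img_id, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

corpus = {"a": [0.10, 0.90, 0.50],
          "b": [0.12, 0.88, 0.52],      # near-duplicate of "a"
          "c": [0.90, 0.10, 0.00]}      # very different image
index = build_index(corpus)
# "a" and "b" share all tokens with the query; "c" shares none.
print(search(index, [0.11, 0.90, 0.50]))
```

Because matching reduces to ordinary term lookup, the full machinery of a text engine (posting lists, scoring, scaling to millions of documents) applies unchanged to the image features.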

Reconstruction of N-way Arrays by Partial Sampling

29 June 2010, 15:00 - Location: C-29

Speakers:
Cesar Federico Caiafa (LABSP-Brain Science Institute, RIKEN, Japan)
Referent:
Emanuele Salerno

In modern signal processing, data mining and other applications, we are frequently faced with the problem of dealing with huge datasets (large-scale problems). Moreover, datasets usually have a multi-way (N-way) array structure, where each dimension (mode) has a particular physical meaning such as time, frequency, space, etc. N-way arrays (tensors) generalize the mathematical concepts of vectors and matrices to higher dimensions (N>2) and have some attractive properties that make them useful for representing and processing N-dimensional data by decomposing them into factors. While many advances in algorithms for tensor decomposition were achieved during the last decade, one of the main challenges today is to keep the computational load as low as possible, achieving fast algorithms that use limited memory resources.

In this talk, an introduction to multi-way decomposition tools will be presented, with a focus on 'Tucker models', one of the possible generalizations to higher dimensions of the useful Singular Value Decomposition (SVD) of matrices. We will present some recent results and a new algorithm, the Fiber Sampling Tensor Decomposition (FSTD) algorithm [1], that allows one to reconstruct an N-way array (with arbitrary N>2) from a reduced subset of its entries. As a generalization of the classical column-row matrix decomposition (also known as the CUR or 'skeleton' decomposition), which approximates a matrix from a subset of its rows and columns, our result provides a new method for approximating a high-dimensional tensor using only the information contained in a subset of its 'n-mode fibers' (n=1,2,...,N). The properties of this method will be analyzed and discussed. We will also discuss its potential applications in signal processing, for cases where datasets are highly redundant, i.e. where they can be compressed by a low-rank tensor decomposition such as the 'Tucker model'. Simulation results will be shown to illustrate the properties and the potential of this method.
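The matrix case that FSTD generalizes is easy to demonstrate: a rank-r matrix is recovered exactly from r of its columns C, r of its rows R, and the pseudo-inverse of their intersection W, as A ≈ C · pinv(W) · R. A minimal sketch of this skeleton decomposition (matrix case only, with arbitrarily chosen sample indices; FSTD extends the idea to n-mode fibers of an N-way tensor):

```python
import numpy as np

# Classical column-row ("skeleton" / CUR) decomposition: reconstruct a
# rank-2 matrix from 2 sampled columns, 2 sampled rows, and their
# 2x2 intersection block. Exact whenever the intersection is nonsingular.

rng = np.random.default_rng(1)
r = 2
A = rng.normal(size=(6, r)) @ rng.normal(size=(r, 5))  # exactly rank 2

cols = [0, 3]                     # sampled column indices (arbitrary)
rows = [1, 4]                     # sampled row indices (arbitrary)
C = A[:, cols]                    # sampled columns
R = A[rows, :]                    # sampled rows
W = A[np.ix_(rows, cols)]         # their r x r intersection

A_hat = C @ np.linalg.pinv(W) @ R
print(np.max(np.abs(A - A_hat)))  # exact up to round-off
```

Only 2 columns and 2 rows of the 6x5 matrix are read, which is the "partial sampling" at the heart of the talk: redundant (low-rank) data can be reconstructed from a small, structured subset of its entries.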
