Grants for Young Mobility

The ISTI Grants for Young Mobility (GYM) program enables young researchers (below 34) to carry out research in cooperation with foreign Universities and Research Institutions of clear international standing. It complements similar CNR programs.



Carrara Fabio

Carrara Fabio home page

The proposed research activity will be carried out in the Data Intensive Systems and Applications (DISA) laboratory of Masaryk University, located in Brno, Czech Republic. The DISA laboratory is directed by Prof. Pavel Zezula, whose research activity has achieved significant results in the fields of Similarity Search, Image Retrieval and Database Systems.
This project aims to model the visual space of deep features using generative models, in order to generate new samples that can be used to query CBIR systems and explore their content. Novel neural network architectures, such as Variational Autoencoders and Generative Adversarial Networks, have been successfully adopted to generate high-quality samples of natural images, and we plan to investigate their application to the generation of deep features conditioned on the particular needs of the user.
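As a rough illustration of the generative idea, the sketch below samples latent vectors with the VAE reparameterization trick and maps them through a toy linear decoder to produce synthetic "feature" vectors. All numbers (latent statistics, decoder weights) are invented for the example and stand in for quantities a trained model would provide; this is not the project's actual architecture.

```python
import math
import random

random.seed(0)

def sample_latent(mu, log_var):
    """VAE-style reparameterization: z = mu + sigma * eps, with eps ~ N(0, 1)."""
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def decode(z, weights):
    """Toy linear 'decoder' mapping a latent vector to a deep-feature vector."""
    return [sum(w * zi for w, zi in zip(row, z)) for row in weights]

# Hypothetical latent statistics inferred from a user's example images.
mu, log_var = [0.5, -0.2], [-1.0, -1.0]
# Made-up decoder weights (in practice these are learned).
W = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]

# Generate a few synthetic feature vectors to use as CBIR queries.
queries = [decode(sample_latent(mu, log_var), W) for _ in range(3)]
for q in queries:
    print([round(v, 3) for v in q])
```

Each sampled latent stays close to `mu`, so the decoded vectors explore the neighborhood of the user's examples in feature space.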

Catena Matteo

Catena Matteo home page

In Web Search Engines (WSEs), a query can be rewritten in several ways to enhance effectiveness at the cost of longer processing times, for example by applying term proximity or expansions. Selective Query Rewriting (SQR) predicts the processing times of possible rewritings to decide – on a per-query basis – which is the most effective one that can be executed in the allowed amount of time. To process rewritten queries, search engines can resort to ephemeral posting lists generated on the fly from an inverted index. Hence, SQR cannot be used with state-of-the-art query processing techniques, such as BlockMaxWand (BMW). To boost efficiency, BMW leverages posting list properties that are precomputed (e.g., upper scores), but such information is unavailable for ephemeral posting lists.
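The role of precomputed upper scores can be illustrated with a toy WAND/BMW-style pruning step. The scores below are invented; real engines use BM25-like scoring and, in BMW, per-block rather than per-list maxima:

```python
# Each posting list carries a precomputed upper bound on its term scores,
# which WAND/BMW-style processing uses to skip documents that cannot beat
# a score threshold without fully scoring them.
postings = {
    "web":    {"docs": {1: 0.4, 2: 0.1, 5: 0.3}, "upper": 0.4},
    "search": {"docs": {2: 0.9, 3: 0.2, 5: 0.8}, "upper": 0.9},
}

def score_with_pruning(query_terms, threshold):
    """Fully score only docs whose summed upper bounds can reach the threshold."""
    lists = [postings[t] for t in query_terms if t in postings]
    results = {}
    all_docs = set().union(*(pl["docs"] for pl in lists))
    for doc in all_docs:
        # Optimistic bound: sum the upper bounds of the lists containing the doc.
        bound = sum(pl["upper"] for pl in lists if doc in pl["docs"])
        if bound < threshold:
            continue  # safe skip: this doc cannot reach the threshold
        results[doc] = sum(pl["docs"].get(doc, 0.0) for pl in lists)
    return results

print(score_with_pruning(["web", "search"], threshold=1.0))  # only docs 2 and 5 pass the bound
```

An ephemeral posting list has no stored `upper` value, which is exactly the gap the project addresses via approximated upper scores or cardinality estimates.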
The visit will take place at the School of Computing Science, University of Glasgow, under the responsibility of Dr. Craig Macdonald. The project aims to reconcile SQR with BMW by exploiting dedicated approximations of upper scores or analytical estimation of posting lists' cardinalities.

Miliou Ioanna

Miliou Ioanna home page

Big Data offers the capability of creating a digital nervous system of our society, enabling the monitoring and prediction of various phenomena in quasi real time. With that comes the need to nowcast changes and events in nearly real time as well. Choi and Varian introduced the term nowcasting to describe the tendency of web searches to correlate with various indicators, which may prove helpful for short-term prediction. In the field of epidemiology, it was shown that search data from Google Flu Trends could help predict the incidence of influenza-like illness (ILI). But, as Lazer et al. note, in February 2013 Google Flu Trends predicted more than double the proportion of doctor visits for ILI reported by the Centers for Disease Control and Prevention.
The visit will take place at the Laboratory for the Modeling of Biological and Socio-technical Systems (MOBS Lab), Northeastern University. The project aims to establish a new approach to nowcasting influenza by correlating sales time series with influenza data. During the collaboration with the MOBS Lab group, we aim to introduce an additional layer of information, derived from epidemiological models, that could help us better describe and nowcast the evolution of the influenza spread and its peak.
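The correlation step behind such nowcasting can be sketched in a few lines: compute the Pearson correlation between a proxy series (here, hypothetical weekly sales) and ILI incidence at several lags, and keep the lag at which the proxy best anticipates the official signal. All numbers are made up for the illustration.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_lag(proxy, ili, max_lag):
    """Find the lag (in weeks) at which the proxy best anticipates ILI."""
    scores = {}
    for lag in range(max_lag + 1):
        x = proxy[:len(proxy) - lag] if lag else proxy
        y = ili[lag:]
        scores[lag] = pearson(x, y)
    return max(scores, key=scores.get), scores

sales = [10, 12, 20, 35, 50, 40, 25, 15]   # hypothetical sales series
ili   = [ 1,  1,  2,  4,  8, 12, 10,  6]   # hypothetical ILI incidence
lag, scores = best_lag(sales, ili, max_lag=3)
print(lag, round(scores[lag], 3))
```

A proxy that leads the epidemic curve by one or two weeks is what makes nowcasting (and short-term forecasting) of the influenza peak possible.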

Sabbadin Manuele

Sabbadin Manuele home page

AR/VR applications have attracted much attention in recent years due to the development of new devices such as Microsoft HoloLens (AR) and HTC Vive (VR). Even the mobile market has moved its attention to AR applications (ARKit, ...). The main goal of any AR application is to visualize, or even interact with, virtual objects inserted in the real surrounding world. In order to guarantee a correct and realistic visualization of a virtual object, a fundamental step is the computation of the amount of light that reaches each object point from the environment, simulating effects like indirect lighting, shadows and color bleeding. Imagine having a white object inside your AR application and putting it near a red wall: even if the 3D model of the object is highly accurate, the entire experience gets worse if the object does not pick up any shade of red. In the classical computer graphics field, several algorithms to compute the global illumination of a scene have been proposed (ray tracing, radiosity, PBGI, ...), but most of them are not real-time.
The visit will take place at Telecom ParisTech under the responsibility of Prof. Tamy Boubekeur. The project aims to develop a real-time version of the Point Based Global Illumination (PBGI) algorithm and to use it in an AR/VR scenario, starting from a raw acquisition of the environment.


Ferrari Alessio

Ferrari Alessio home page

Customer-analyst interviews are considered among the most effective means to elicit requirements. Previous work on interviews studied the effect of specific variables on the success of interviews, such as domain knowledge, cognitive strategies, and the analyst's experience. Recent work of the applicant studied the phenomenon of ambiguity in interviews, and showed that, when detected, ambiguity can help the analyst disclose the tacit knowledge of the customer. The project aims to lay the groundwork for increasing the number of detected ambiguities. Based on the idea that ambiguities are perceived in different ways by different subjects, we investigate to what extent ambiguities identified by a listener (a subject who listens to the interview audio) differ from those detected by the analyst. Ambiguities of the listener can be used to identify new questions to scope the (possibly tacit) knowledge of the customer. The project includes an experiment in which a set of customer-analyst interviews is performed with students of UTS by means of role-playing. Students will also play the role of listeners, and ambiguities detected by analysts and listeners will be compared.
The project will be carried out at the University of Technology Sydney (UTS), Faculty of Engineering and Information Technology, in the Requirements Engineering Research Laboratory, which focuses on Requirements Engineering (RE) research and training. Prof. Didar Zowghi, head of the laboratory, carries out research in empirical RE and has extensive experience in performing role-playing experiments with students, such as the one proposed in this project.

Moreo Fernandez Alejandro

Moreo Fernandez Alejandro home page

This project focuses on the Random Indexing (RI) DSM method. RI has strong theoretical support and, despite being a context-counting method, it scales linearly to arbitrarily large datasets. More interestingly, recent work has highlighted relations between RI and (Convolutional) NN models. Since NNs are the current state of the art in large-scale semantic processing of text data, we believe that further investigating the relations between RI and NNs is key to better understanding NN-based DSMs.
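A minimal sketch of Random Indexing in its standard formulation: each context word gets a sparse ternary random index vector, and a word's representation is the running sum of the index vectors of its neighbours, built incrementally in a single pass. The tiny corpus and parameters are made up for the example.

```python
import random

random.seed(42)

DIM, NONZERO = 300, 6  # typical RI settings: high dimension, very sparse index vectors

def index_vector(dim=DIM, nonzero=NONZERO):
    """Sparse ternary random vector: a few +1/-1 entries, zeros elsewhere."""
    v = [0] * dim
    for pos in random.sample(range(dim), nonzero):
        v[pos] = random.choice((-1, 1))
    return v

def train(corpus, window=2):
    """Accumulate each word's context vector as the sum of the index vectors
    of its neighbours (incremental, one pass, linear in corpus size)."""
    index, context = {}, {}
    for sentence in corpus:
        for i, w in enumerate(sentence):
            context.setdefault(w, [0] * DIM)
            for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
                if j == i:
                    continue
                c = sentence[j]
                if c not in index:
                    index[c] = index_vector()
                context[w] = [a + b for a, b in zip(context[w], index[c])]
    return context

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
ctx = train(corpus)
# Words appearing in similar contexts ("cat"/"dog") get similar vectors.
print(round(cosine(ctx["cat"], ctx["dog"]), 3))
```

Because index vectors are near-orthogonal, shared contexts dominate the dot product, which is the property that makes RI's fixed-dimension, one-pass accumulation work.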
The project will be carried out at the Swedish Institute of Computer Science (SICS), an independent non-profit research organization with a research focus on applied computer science. The institute conducts research in several areas, including big data analytics, machine learning and optimization, large scale network based applications, and human-computer interaction.

Monteiro De Lira Vinicius

Monteiro De Lira Vinicius home page

In this project, we intend to develop efficient and scalable solutions for real-time smart mobility systems by using information retrieval (IR) techniques for the management and querying of spatio-temporal data. Typical examples of these kinds of queries include spatial predicates such as "covers" (e.g., "find metro stations in Paris") or "intersects" (e.g., "find possible carpool rides"). The research will be carried out in the Terrier group at the University of Glasgow, a leading group in the areas of Information Retrieval and Smart Cities. Professors Craig Macdonald and Iadh Ounis have an excellent record of high-level publications in top conferences in these fields.
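The two predicates can be illustrated on axis-aligned bounding boxes (real spatio-temporal systems evaluate them over arbitrary geometries, typically via spatial indexes such as R-trees; the coordinates below are rough, made-up values):

```python
def intersects(a, b):
    """True if boxes a and b share at least one point. Box = (xmin, ymin, xmax, ymax)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def covers(a, b):
    """True if box a entirely contains box b."""
    return a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2] and a[3] >= b[3]

paris   = (2.22, 48.81, 2.47, 48.90)   # rough bounding box of Paris (lon/lat)
station = (2.29, 48.85, 2.30, 48.86)   # a metro station's footprint
ride    = (2.25, 48.80, 2.40, 48.95)   # a carpool ride's spatial extent
print(covers(paris, station), intersects(paris, ride))
```

"Find metro stations in Paris" asks for geometries the city box covers; "find possible carpool rides" only requires the ride's extent to intersect the query region.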

Danielis Alessandro

Danielis Alessandro home page

The proposal is to improve an imaging (hardware/software) technique for the generation of images from data acquired by an HSI system, which is an integrated part of the Wize Mirror, the smart imaging prototype developed in the EU-FP7 SEMEOTICONS project. This prototype is able to acquire and process multimedia data about a person's face and translate these into measurements and descriptors automatically evaluated by computerized applications, providing an evaluation of an individual's wellbeing with respect to cardio-metabolic risk factors. The research will be carried out at the Department of Biomedical Engineering, Linköping University (LiU), Sweden, where research is carried out in the fields of biomedical optics, signal and image processing, ultrasound and bio-acoustics, modelling and simulation in physiology, neuro-engineering, and knowledge-based decision support systems.

Mannocci Andrea

Mannocci Andrea home page

Aggregative data infrastructures consist of large ecosystems of data collection and processing services, possibly availing themselves of advanced tools and workflows, used to build complex graphs of information in order to capture at best the scientific output of a given research community. The resulting data is an important asset for the community, which demands guarantees on its “correctness" and “quality" over time. However, the lack of assurances from aggregated data sources, the occurrence at any level of unexpected errors, and the ever-changing nature of such infrastructures (in terms of algorithms and workflows) make a monitoring strategy vital for delivering high-quality results to end users. For this purpose, the applicant's PhD focused on the design of DataQ, a system providing a flexible way to outline monitoring scenarios, gather metrics from data flows and detect anomalies. This collaboration is intended to validate the DataQ approach in the CORE real-world application. The research will be carried out at the Knowledge Media Institute (KMi) of the Open University, based in Milton Keynes, UK, which researches and develops solutions in the fields of Cognitive and Learning Sciences, Artificial Intelligence and Semantic Technologies, and Multimedia. The advisor for the proposed collaboration is Dr. Petr Knoth, founder and manager of CORE, one of the most prominent European data infrastructures, aggregating over 25 million records from about a thousand Open Access Repositories.
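As an illustration of the kind of check such a monitoring system can run over a data-flow metric (this is a generic sketch, not DataQ's actual interface), a sliding-window z-score detector flags a sudden drop in the number of records aggregated per day. The numbers are hypothetical.

```python
def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag points deviating more than `threshold` standard deviations
    from the mean of the preceding `window` observations."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mean = sum(hist) / window
        var = sum((x - mean) ** 2 for x in hist) / window
        std = var ** 0.5
        if std and abs(series[i] - mean) / std > threshold:
            flagged.append(i)
    return flagged

records_per_day = [1000, 1020, 990, 1010, 1005, 998, 1012, 250, 1003, 1001]
print(zscore_anomalies(records_per_day))  # the sudden drop at index 7 is flagged
```

Catching such drops early is what lets an aggregator distinguish a broken upstream source from a genuine change in the data.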


Coletto Mauro

Coletto Mauro home page

Ferreira de Miranda Breno
Vadicamo Lucia

Vadicamo Lucia home page

Trani Salvatore

Trani Salvatore home page

Vairo Claudio

Vairo Claudio home page


Bacco Felice

Bacco Felice home page

In the field of satellite access networks, the need for optimization studies of hybrid access schemes with interference cancellation is a trending topic. Studying and evaluating the suitability of either random access (RA) or dedicated access (DA) in a hybrid scheme, as in DVB-RCS2, is the core part of the planned activity. A hybrid scheme can optimize the allocation of resources, ensuring their efficient use. In the case of elastic traffic and/or variable-bit-rate (VBR) data flows, the main objective is to identify a class of metrics able to drive the switch from random to dedicated access (and vice versa).
The activity will focus on the study of the implementations of random access protocols available at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt, DLR - Institut für Kommunikation und Navigation, Satellitennetze - Weßling, Germany), under the supervision of Prof. Tomaso De Cola, with the objective of fully understanding them, possibly extending them, and using them in specific simulation test-beds. Classification algorithms may also be part of the activity, to classify input traffic and then decide whether to transmit data following an RA or a DA protocol. Higher-layer and lower-layer classification algorithms are crucial to ensure optimal utilization of the available bandwidth by avoiding the sharp drop in throughput that occurs when the wrong transmission scheme is selected.
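To make the switching metric concrete, a back-of-the-envelope sketch: random access is modelled with the classic slotted-ALOHA normalized throughput S = G·e^{-G}, and dedicated access with an invented capacity and signalling overhead. Both models and all numbers are illustrative assumptions, not DLR's implementation or DVB-RCS2 figures.

```python
import math

def ra_throughput(load):
    """Slotted-ALOHA-style normalized throughput: peaks at load G = 1."""
    return load * math.exp(-load)

def da_throughput(load, capacity=0.9, signalling_cost=0.1):
    """Dedicated access: delivers up to capacity, minus a fixed scheduling overhead."""
    return min(load, capacity) - signalling_cost

def pick_scheme(load):
    """A trivial switching metric: use whichever scheme yields more throughput."""
    return "RA" if ra_throughput(load) >= da_throughput(load) else "DA"

for load in (0.1, 0.5, 1.0, 2.0):
    print(load, pick_scheme(load))
```

Under these assumptions RA wins only at very low offered loads, where DA's signalling overhead dominates; as load grows, RA collisions collapse its throughput and the metric switches to DA.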

Ceccarelli Diego

Ceccarelli Diego home page

Most Web documents currently do not contain semantic annotations, and they are commonly modeled as simple bags of words. One of the main approaches for enhancing search effectiveness on these documents consists in automatically enriching them with the most relevant related entities. Given a plain text, the Entity Linking task consists in identifying small fragments of text (in the following interchangeably called spots or mentions) referring to any entity (represented by a URI) listed in a given knowledge base. Usually the task is performed in three steps: i) Spotting: a set of candidate mentions is detected in the input document, and for each mention a list of candidate entities is retrieved; ii) Disambiguation: for each spot associated with more than one candidate, a single entity is selected to be linked to the spot; iii) Ranking: the list of detected entities is ranked according to some policy, e.g., annotation confidence. Our proposal is to develop a method for detecting the important entities mentioned in a document. We refer to this as the Entity Ranking problem.
We plan to use an existing or a new text collection to create a benchmark for Entity Ranking by manually labeling the entities mentioned in a given document with a relevance score. The resulting datasets will be made available to the research community. Then, we will design an improved algorithm able to link documents to entities and to rank the entities according to their importance.
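The three steps above can be mimicked on a toy, dictionary-based example. The knowledge base, mention dictionary, commonness scores and ranking policy (mention frequency) are all made up for the illustration; real systems use far richer features.

```python
# Toy knowledge base: entity id -> surface mention and a prior ("commonness").
KB = {
    "jaguar_animal":  {"mention": "jaguar", "commonness": 0.3},
    "jaguar_cars":    {"mention": "jaguar", "commonness": 0.7},
    "amazon_river":   {"mention": "amazon", "commonness": 0.6},
    "amazon_company": {"mention": "amazon", "commonness": 0.4},
}

def spot(text):
    """Spotting: find mentions and their candidate entities."""
    spots = {}
    for t in text.lower().split():
        cands = [e for e, info in KB.items() if info["mention"] == t]
        if cands:
            spots[t] = cands
    return spots

def disambiguate(spots):
    """Disambiguation: pick one entity per spot (here, the highest commonness)."""
    return {m: max(c, key=lambda e: KB[e]["commonness"]) for m, c in spots.items()}

def rank(linked, text):
    """Ranking: order the linked entities, here simply by mention frequency."""
    tokens = text.lower().split()
    return sorted(linked.values(), key=lambda e: -tokens.count(KB[e]["mention"]))

doc = "The jaguar crossed the amazon while another jaguar slept"
entities = rank(disambiguate(spot(doc)), doc)
print(entities)  # "jaguar" occurs twice, so its entity ranks first
```

The Entity Ranking problem targeted by the project amounts to replacing the naive frequency policy in `rank` with a learned notion of importance.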

Marcheggiani Diego

Marcheggiani Diego home page

One of the main problems in machine learning is the shortage of annotated training data. Annotating data is an expensive process requiring a great deal of human effort. It is even more expensive when we need training data for structured learning problems, where an instance is composed of several annotations, as in sequence labeling problems. The planned activity is to study new techniques aimed at minimizing the human effort involved in the annotation of training data for sequence labeling problems. These new techniques will merge strategies from active learning, semi-supervised learning and transfer learning in the domain of learning algorithms for sequence labeling, such as Conditional Random Fields and Hidden Markov Models.
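One common active-learning ingredient, margin-based uncertainty sampling, can be sketched as follows. The per-token probabilities stand in for the marginals a trained CRF/HMM would produce and are invented for the illustration:

```python
def margin(posteriors):
    """Smallest gap between the top two label probabilities across a sequence:
    the smaller the margin, the more uncertain (and informative) the instance."""
    margins = []
    for token_probs in posteriors:
        top2 = sorted(token_probs, reverse=True)[:2]
        margins.append(top2[0] - top2[1])
    return min(margins)

def select_for_annotation(pool, budget):
    """Ask a human to label the `budget` most uncertain unlabeled sequences."""
    ranked = sorted(pool.items(), key=lambda kv: margin(kv[1]))
    return [seq_id for seq_id, _ in ranked[:budget]]

# Hypothetical per-token label distributions for three unlabeled sequences.
unlabeled = {
    "seq_a": [[0.9, 0.05, 0.05], [0.8, 0.1, 0.1]],   # model is confident
    "seq_b": [[0.4, 0.35, 0.25], [0.5, 0.3, 0.2]],   # model is unsure throughout
    "seq_c": [[0.7, 0.2, 0.1], [0.45, 0.44, 0.11]],  # one very ambiguous token
}
print(select_for_annotation(unlabeled, budget=2))
```

Spending the annotation budget on the sequences the model finds hardest is what reduces the total human effort per unit of accuracy gained.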
This research activity will be carried out at the Université Pierre et Marie Curie, Laboratoire d'Informatique Paris 6 (LIP6), in Paris, under the supervision of Professor Thierry Artières.

Reggiannini Marco

Reggiannini Marco home page

In the field of oceanographic engineering, the demand for machine learning integration as a support to the design and development of autonomous underwater vehicles (AUVs) is rapidly increasing. The main goal of this work is to provide methods to be exploited for optimal surveys of the marine environment. It will focus on multi-sensor (namely optical and acoustic) data capturing and processing methods specifically devised for underwater scene understanding purposes. The research activity will be carried out at the Ocean Systems Lab (OSL), a leading center of excellence in Underwater Robotics and Underwater Signal and Image Processing, at the Heriot-Watt University (Edinburgh, Scotland), under the supervision of Prof. David M. Lane. Results concerning the analysis of data provided by multi-sensor platforms in underwater environments are expected. Methods for processing acoustic and optical data aiming at large scale map creation, pattern recognition, 3D reconstruction and algorithm/data fusion will be explored and pursued.

Malomo Luigi

Malomo Luigi home page

Palma Gianpaolo

Palma Gianpaolo home page

The current acquisition pipeline for visual models of 3D worlds is based on a paradigm of planning a goal-oriented acquisition. The digital model of an artifact (an object, a building, up to an entire city) is produced by planning a specific scanning campaign, carefully selecting the acquisition devices, performing the on-site acquisition at the required resolution and then post-processing the acquired data to produce a beautified triangulated and textured model. However, in the future we will be faced with the ubiquitous availability of sensing devices; these devices (for example smartphones, commodity stereo cameras, cheap aerial data acquisition devices, etc.) deliver different types of data streams that need to be processed and displayed in a new way.
The availability of this huge amount of data requires a change in acquisition and processing technology: instead of a goal-driven acquisition that determines the devices and sensors, the acquisition process is determined by the sensors and the resulting available data. One of the main challenges is the development of ultra-fast geometry processing operators applied on the fly to continuous 3D data input streams. This requires two tasks. The first is the design of basic geometric operators that achieve low computational and memory complexity and fine-grained parallel execution, exploiting recently introduced adaptive GPU geometry-related stages (e.g., tessellator unit, compute shaders, free-format inter-shader communication) and hybrid CPU-GPU runtime kernels. The second is the use of these basic operators to develop new geometry processing algorithms, allowing data to be filtered, resampled to a chosen resolution, and equipped with a surface structure using as little time and memory as possible.
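The resampling operator mentioned above can be illustrated with a minimal, CPU-only sketch that snaps streamed points on the fly to a voxel grid of chosen resolution, keeping one representative per cell. This is only an illustration of the idea; the project targets GPU implementations of such operators.

```python
def voxel_key(point, cell):
    """Integer grid cell containing a 3D point, for a given cell size."""
    return tuple(int(c // cell) for c in point)

def resample_stream(points, cell=0.5):
    """One-pass resampling of an incoming point stream: one point per cell."""
    grid = {}
    for p in points:
        k = voxel_key(p, cell)
        if k not in grid:          # keep the first point that lands in a cell
            grid[k] = p
    return list(grid.values())

# Hypothetical points arriving from a sensor stream.
stream = [(0.1, 0.1, 0.0), (0.2, 0.15, 0.05), (1.4, 0.1, 0.0), (0.11, 0.12, 0.01)]
print(resample_stream(stream))  # three near-duplicates collapse into one cell
```

The memory footprint depends only on the number of occupied cells, not on the stream length, which is the property that makes such operators viable on continuous input.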
The research activity will be carried out at Telecom ParisTech in Paris under the supervision of Prof. Tamy Boubekeur.

Pappalardo Luca

Pappalardo Luca home page

Researchers recently discovered that traditional mobility models adapted from the observation of particles or animals (such as Brownian motion and Lévy flights), and more recently from the observation of dollar bills, are not suitable to describe people's movements. Indeed, at a global scale humans are characterized by a huge heterogeneity in the way they move: a Pareto-like curve is observed in the distribution of the characteristic distance traveled by users, the so-called radius of gyration.
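The radius of gyration itself is straightforward to compute: it is the root-mean-square distance of an individual's visited locations from the centre of mass of their trajectory, r_g = sqrt((1/n) Σ_i ||r_i - r_cm||²). The sketch below uses invented planar coordinates (in km) for simplicity; real studies work with lat/lon positions, with repeated visits weighting locations by frequency, as here.

```python
import math

def radius_of_gyration(locations):
    """RMS distance of visit records from the trajectory's centre of mass."""
    n = len(locations)
    cx = sum(x for x, _ in locations) / n
    cy = sum(y for _, y in locations) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in locations) / n)

commuter = [(0, 0), (0, 0), (1, 0), (1, 0)]        # home and work 1 km apart
traveler = [(0, 0), (120, 5), (300, 80), (10, 2)]  # occasional long-range trips
print(round(radius_of_gyration(commuter), 2))   # 0.5
print(round(radius_of_gyration(traveler), 2))
```

The heavy-tailed distribution of r_g across a population is precisely the heterogeneity the abstract refers to: most users behave like the commuter, a few like the traveler.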
Although these discoveries have doubtless shed light on interesting and fascinating aspects of human mobility, the origin of the observed patterns remains unclear: Why do we move so differently? What are the factors that shape our mobility? Which movements or locations mainly determine the mobility of an individual? Can we divide the population into categories of users with similar mobility characteristics? These issues need to be addressed if we want to understand key aspects of the complexity behind our society. In the current work, we exploit access to two big mobility datasets to present a data-driven study aimed at detecting and understanding the main features that shape the mobility of individuals.
By using a dualistic approach that combines the strengths and weaknesses of network science and data mining, we showed that the mobility of a significant portion of the population is characterized by patterns of commuting between different and distant mobility "hearts".
