Grants for Young Mobility

The ISTI Grants for Young Mobility (GYM) program enables young researchers (below 34) to carry out research in cooperation with foreign Universities and Research Institutions of clear international standing. It complements similar CNR programs.



Ferrari Alessio


Moreo Fernandez Alejandro
Monteiro De Lira Vinicius
Danielis Alessandro
Mannocci Andrea



Coletto Mauro


Ferreira de Miranda Breno
Vadicamo Lucia


Coletto Mauro


Ferrari Alessio


Trani Salvatore


Vairo Claudio



Bacco Felice


In the field of satellite access networks, optimization studies of hybrid access schemes with interference cancellation are a trending topic. The core of the planned activity is studying and evaluating the suitability of either random access (RA) or dedicated access (DA) in a hybrid scheme, such as DVB-RCS2. A hybrid scheme can optimize the allocation of resources, ensuring their efficient use. In the case of elastic traffic and/or variable-bit-rate (VBR) data flows, the main objective is to identify a class of metrics able to drive the switch from random to dedicated access (and vice versa).
The activity will focus on the implementation of random access protocols available at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt, DLR, Institut für Kommunikation und Navigation, Satellitennetze, Weßling, Germany), under the supervision of Prof. Tomaso De Cola, with the objective of fully understanding it, possibly extending it, and using it in specific simulation test-beds. Classification algorithms may also be part of the activity, used to classify input traffic and then decide whether to transmit data under an RA or a DA protocol. Higher-layer and lower-layer classification algorithms are crucial to ensure optimal utilization of the available bandwidth, avoiding the dramatic drop in throughput that occurs when the wrong transmission scheme is selected.
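The kind of switching rule such metrics would drive can be sketched as a simple decision function. This is a minimal illustration only: it assumes slotted-ALOHA-like behavior for the RA channel and an invented per-flow signaling cost for DA, neither of which is a DVB-RCS2 parameter.

```python
import math

def slotted_aloha_throughput(g):
    """Normalized slotted-ALOHA throughput S = G * e^-G for offered load G."""
    return g * math.exp(-g)

def choose_access(offered_load, signaling_overhead=0.1):
    """Pick random access (RA) or dedicated access (DA) for the next frame.

    offered_load: estimated packets per slot (G), the driving metric.
    signaling_overhead: hypothetical fraction of DA capacity lost to
        allocation signaling; at low loads this makes RA preferable,
        while at high loads RA throughput collapses and DA wins.
    """
    ra_throughput = slotted_aloha_throughput(offered_load)
    da_throughput = max(min(offered_load, 1.0) - signaling_overhead, 0.0)
    return "RA" if ra_throughput >= da_throughput else "DA"
```

With these toy figures, sparse VBR traffic stays on RA and sustained elastic flows are moved to DA, which is the qualitative behavior the metric class is meant to capture.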

Ceccarelli Diego


Most Web documents currently contain no semantic annotations and are commonly modeled as simple bags of words. One of the main approaches for enhancing search effectiveness on these documents consists in automatically enriching them with the most relevant related entities. Given a plain text, the Entity Linking task consists in identifying small fragments of text (in the following interchangeably called spots or mentions) referring to any entity (represented by a URI) listed in a given knowledge base. The task is usually performed in three steps: i) Spotting: a set of candidate mentions is detected in the input document, and for each mention a list of candidate entities is retrieved; ii) Disambiguation: for each spot associated with more than one candidate, a single entity is selected to be linked to the spot; iii) Ranking: the list of detected entities is ranked according to some policy, e.g., annotation confidence. Our proposal is to develop a method for detecting the important entities mentioned in a document. We refer to this as the Entity Ranking problem.
We plan to use an existing or a new text collection to create a benchmark for Entity Ranking by manually labeling the entities mentioned in a given document with a relevance score. The resulting datasets will be made available to the research community. We will then design an improved algorithm able to link documents to entities and to rank those entities according to their importance.
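The three steps can be sketched on a toy knowledge base. The surface forms, URIs, and prior probabilities below are invented for illustration; a real system would draw candidates from a knowledge base such as DBpedia and use statistical disambiguation rather than a simple prior.

```python
from collections import Counter

# Toy knowledge base: surface form -> candidate entity URIs with a prior.
KB = {
    "paris": [("dbpedia:Paris", 0.9), ("dbpedia:Paris,_Texas", 0.1)],
    "texas": [("dbpedia:Texas", 1.0)],
}

def spot(text):
    """Spotting: detect mentions that match a KB surface form."""
    tokens = text.lower().replace(",", " ").split()
    return [tok for tok in tokens if tok in KB]

def disambiguate(mentions):
    """Disambiguation: pick the highest-prior candidate for each mention."""
    return [max(KB[m], key=lambda cand: cand[1])[0] for m in mentions]

def rank(entities):
    """Ranking: order entities by mention frequency, a crude confidence proxy."""
    return [e for e, _ in Counter(entities).most_common()]

def entity_rank(text):
    return rank(disambiguate(spot(text)))
```

The Entity Ranking problem described above replaces the frequency-based `rank` step with a learned notion of an entity's importance to the document.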

Marcheggiani Diego


One of the main problems in the world of machine learning is the shortage of annotated training data. Annotating data is an expensive process that requires substantial human effort. It is even more expensive when we need training data for structured learning problems, where a single instance comprises several annotations, as in sequence labeling. This activity will study new techniques aimed at minimizing the human effort involved in annotating training data for sequence labeling problems. These techniques will combine strategies from active learning, semi-supervised learning, and transfer learning in the domain of learning algorithms for sequence labeling, such as Conditional Random Fields and Hidden Markov Models.
This research activity will be carried out at the Université Pierre et Marie Curie, Laboratoire d'Informatique Paris 6 (LIP6), in Paris, under the supervision of Professor Thierry Artières.
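The active-learning ingredient can be sketched with uncertainty sampling. The `model_confidence` callable is a hypothetical stand-in for the probability a trained model assigns to its best tag sequence (e.g. a CRF's normalized Viterbi score); the least-confident sequences are the ones worth sending to a human annotator.

```python
def least_confidence(prob_best_sequence):
    """Uncertainty of an unlabeled sequence: 1 - P(best tag sequence | x)."""
    return 1.0 - prob_best_sequence

def select_for_annotation(pool, model_confidence, batch_size=2):
    """One active-learning step: pick the sequences the model is least sure of.

    pool: list of unlabeled sequences.
    model_confidence: callable mapping a sequence to P(best labeling) under
        the current model (hypothetical; any probabilistic tagger works).
    """
    scored = sorted(pool,
                    key=lambda x: least_confidence(model_confidence(x)),
                    reverse=True)
    return scored[:batch_size]
```

In the planned work this selection strategy would be combined with semi-supervised and transfer-learning signals rather than used alone.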

Reggiannini Marco


In the field of oceanographic engineering, the demand for integrating machine learning into the design and development of autonomous underwater vehicles (AUVs) is rapidly increasing. The main goal of this work is to provide methods for optimal surveys of the marine environment. It will focus on multi-sensor (namely optical and acoustic) data capture and processing methods specifically devised for underwater scene understanding. The research activity will be carried out at the Ocean Systems Lab (OSL), a leading center of excellence in underwater robotics and underwater signal and image processing, at Heriot-Watt University (Edinburgh, Scotland), under the supervision of Prof. David M. Lane. Results concerning the analysis of data provided by multi-sensor platforms in underwater environments are expected. Methods for processing acoustic and optical data aimed at large-scale map creation, pattern recognition, 3D reconstruction and algorithm/data fusion will be explored and pursued.
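As an illustration of the optical/acoustic fusion theme, here is a minimal late-fusion sketch. The turbidity-based weighting is an invented heuristic, not the OSL's method: it simply encodes that optical confidence degrades in turbid water while sonar does not.

```python
def fuse_detections(optical_conf, acoustic_conf, turbidity):
    """Late fusion of per-cell detection confidences from two sensors.

    optical_conf, acoustic_conf: lists of confidences in [0, 1], one per
        map cell, from the camera and the sonar respectively.
    turbidity: hypothetical water-turbidity estimate in [0, 1]; the optical
        weight shrinks as turbidity grows.
    """
    w_opt = 1.0 - turbidity
    w_aco = 1.0
    total = w_opt + w_aco
    return [(w_opt * o + w_aco * a) / total
            for o, a in zip(optical_conf, acoustic_conf)]
```

In fully turbid water the fused map reduces to the acoustic one; in clear water both sensors contribute equally.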

Malomo Luigi


Palma Gianpaolo


The current acquisition pipeline for visual models of 3D worlds is based on a paradigm of planning a goal-oriented acquisition. The digital model of an artifact (an object, a building, up to an entire city) is produced by planning a specific scanning campaign, carefully selecting the acquisition devices, performing the on-site acquisition at the required resolution and then post-processing the acquired data to produce a beautified triangulated and textured model. However, in the future we will be faced with the ubiquitous availability of sensing devices; these devices deliver different types of data streams (for example smartphones, commodity stereo cameras, cheap aerial data acquisition devices, etc.) that need to be processed and displayed in a new way.
The availability of this huge amount of data requires a change in acquisition and processing technology: instead of a goal-driven acquisition that determines the devices and sensors, the acquisition process is determined by the sensors and the data they make available. One of the main challenges is developing ultra-fast geometry processing operators applied on the fly to continuous 3D input data streams. This requires two tasks. The first is the design of basic geometric operators with low computational and memory complexity and fine-grained parallel execution, exploiting recently introduced adaptive GPU geometry-related stages (e.g., the tessellator unit, compute shaders, free-format inter-shader communication) and hybrid CPU-GPU runtime kernels. The second is the use of these basic operators to develop new geometry processing algorithms that filter the data, resample it to a chosen resolution, and equip it with a surface structure using as little time and memory as possible.
The research activity will be carried out at the Telecom ParisTech institute in Paris under the supervision of Prof. Tamy Boubekeur.
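One such streaming resampling operator can be sketched as a voxel-grid downsampler that keeps only a running centroid per cell, so memory grows with the number of occupied voxels rather than with stream length. This is a CPU toy for illustration; the actual work targets GPU pipeline stages.

```python
from collections import defaultdict

def voxel_downsample(points, cell=0.1):
    """Resample a 3D point stream to one representative point per voxel.

    points: iterable of (x, y, z) tuples, consumed as a stream.
    cell: voxel edge length; larger values give a coarser resolution.
    Each incoming point only updates a per-voxel running sum, so the
    full stream never has to be stored.
    """
    sums = defaultdict(lambda: [0.0, 0.0, 0.0, 0])  # x, y, z, count
    for x, y, z in points:
        key = (int(x // cell), int(y // cell), int(z // cell))
        s = sums[key]
        s[0] += x; s[1] += y; s[2] += z; s[3] += 1
    return [(sx / n, sy / n, sz / n) for sx, sy, sz, n in sums.values()]
```

The per-voxel accumulation is embarrassingly parallel, which is what makes operators of this shape good candidates for compute shaders.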

Pappalardo Luca


Researchers recently discovered that traditional mobility models adapted from the observation of particles or animals (such as Brownian motion and Lévy flights), and more recently from the tracking of dollar bills, are not suitable for describing people's movements. Indeed, at a global scale humans are characterized by a huge heterogeneity in the way they move: a Pareto-like curve was observed in the distribution of the characteristic distance traveled by users, the so-called radius of gyration.
Although these discoveries have doubtless shed light on interesting and fascinating aspects of human mobility, the origin of the observed patterns remains unclear: Why do we move so differently? What factors shape our mobility? Which movements or locations mainly determine an individual's mobility? Can we divide the population into categories of users with similar mobility characteristics? These issues need to be addressed if we want to understand key aspects of the complexity behind our society. In the current work, we exploit access to two big mobility datasets to present a data-driven study aimed at detecting and understanding the main features that shape the mobility of individuals.
Using a dualistic approach that combines the strengths and weaknesses of network science and data mining, we showed that the mobility of a significant portion of the population is characterized by patterns of commuting between different and distant mobility "hearts".
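The radius of gyration mentioned above has a simple closed form, r_g = sqrt((1/n) Σ_i ||p_i − p_cm||²), where p_cm is the center of mass of the visited positions. A minimal sketch on planar coordinates (real GPS data would call for great-circle distances instead):

```python
import math

def radius_of_gyration(visits):
    """Radius of gyration of a trajectory: root-mean-square distance of the
    visited positions from their center of mass.

    visits: non-empty list of (x, y) positions in projected coordinates.
    """
    n = len(visits)
    cx = sum(x for x, _ in visits) / n
    cy = sum(y for _, y in visits) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                         for x, y in visits) / n)
```

Computing this value per user and inspecting its distribution across the population is what reveals the Pareto-like heterogeneity described above.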
