Grants for Young Mobility

The ISTI Grants for Young Mobility (GYM) program enables young researchers (under 34) to carry out research in cooperation with foreign Universities and Research Institutions of clear international standing. It complements similar CNR programs.

Call 2024

Recipients 2024

First Call - April 2024

Recipients 2023

First Call

Second Call

Recipients 2022

First Call - May 2022

Second Call - October 2022

 

Recipients 2019

Recipients 2018

  • In recent years, various location-related signals and the corresponding techniques have been studied to satisfy the increasing demand for indoor location-based services. Such signals include ultra-wideband, radio frequency identification, echo, Wi-Fi, and magnetic fields. State-of-the-art solutions design separate algorithms to process the different indoor signals, and the outputs of these algorithms are generally used as inputs to data fusion strategies. Combining the experience of the INIT, especially in Neural Networks, Pattern Recognition, and Machine Learning, with my expertise in real-time Indoor Positioning Systems (IPS), the project will investigate the impact of Deep Learning (DL) strategies to overcome the drawbacks of the state of the art in indoor positioning applications. The lack of a standard technology for IPSs calls for further investigation. The fast-growing dissemination of deep learning algorithms, especially in computer vision scenarios, enables several enhancements for IPSs as well.
  • Persuasive technology is "any interactive computing system designed to change people's attitudes and behaviours" [Fogg, 2002]. These technologies aim to modify user attitudes, intentions, or behaviour through computer-human dialogue and social influence. A Behavioural Model captures the factors that can be used to explain behaviour changes in an individual. In our work, the persuasion process comprises understanding current behaviours, detecting when the older adult is deviating from the intended target and, if so, deciding the best course of action to nudge the older adult towards the target behaviour. For this to be possible, the Persuasion Module must model the older adult's behaviour using an existing Behavioural Model and must apply one or more Behaviour Change Techniques to promote changes.
  • Video sampling techniques are a valuable resource for rapid biological underwater surveys; however, the visual analysis of a large number of images requires a great amount of time from experienced personnel. Automating the procedures for organism identification is therefore a fundamental step in a long-term monitoring program. Machine learning techniques can automatically annotate images in a short time, but training a neural network to recognize a specific marine organism requires the preparation of a large training dataset, and the recognition of a single individual can be hard to generalize. Furthermore, the labelling methodology commonly used for the classification of underwater images involves manual annotation performed directly on the photos, point by point, in a very time-consuming process. We are working on supervised learning methods that take high-resolution ortho-mosaics of the seabed as input and return the semantic segmentation map of the investigated specimens (a data-preparation sketch follows this list). The use of seabed maps leads to faster labelling, fewer generalization problems, and easier handling of class imbalance, as it allows choosing the data that best represent the classes in a more effective and simpler way.
  • Nonlinear eigenvalue problems are becoming increasingly important as a modeling tool in applied mathematics and engineering. However, in contrast with the linear case, the algorithms and the theory to attack them are still relatively young and under active development. Nowadays, the most widely used technique to solve these problems is to recast them as a linear eigenvalue problem of (much) larger size, and to retrieve the solution by an appropriate projection (a linearization sketch follows this list). In this task, it is essential to develop an accurate perturbation theory, in order to understand the effects of roundoff errors and noise in the data on the final output. Recently, it has been noticed that the errors in the solution of a linearized polynomial eigenvalue problem are structured, and this allows a refined backward error analysis. This has already led to important contributions, such as the first "optimal complexity" matrix polynomial eigensolver with proven normwise backward stability. Our task will be to extend this theory further and, at the same time, to develop reliable algorithms for the solution of these problems.
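
For the seabed ortho-mosaic project above, the following is a minimal sketch (not the project's actual pipeline) of how aligned training patches could be cut from an ortho-mosaic and its per-pixel label map before feeding a semantic segmentation network; the array shapes, patch size, and toy labels are illustrative assumptions.

    import numpy as np

    def extract_patches(ortho, label_map, patch=64, stride=64):
        """Cut an RGB ortho-mosaic (H x W x 3) and its per-pixel label map
        (H x W) into aligned training patches for semantic segmentation."""
        xs, ys = [], []
        h, w = label_map.shape
        for r in range(0, h - patch + 1, stride):
            for c in range(0, w - patch + 1, stride):
                xs.append(ortho[r:r + patch, c:c + patch])
                ys.append(label_map[r:r + patch, c:c + patch])
        return np.stack(xs), np.stack(ys)

    # toy example: a 256 x 256 mosaic with two classes (0 = background, 1 = specimen)
    mosaic = np.random.rand(256, 256, 3)
    labels = np.zeros((256, 256), dtype=np.int64)
    labels[100:180, 40:120] = 1
    X, Y = extract_patches(mosaic, labels)
    print(X.shape, Y.shape)   # (16, 64, 64, 3) (16, 64, 64)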
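For the nonlinear eigenvalue project above, the sketch below illustrates the linearization idea in the quadratic case: the eigenvalues of P(lam) = A0 + lam*A1 + lam^2*A2 are recovered from a generalized (linear) eigenvalue problem of twice the size. This is a textbook companion form used for illustration, not the specific structured eigensolver mentioned in the abstract.

    import numpy as np
    from scipy.linalg import eigvals

    def quadratic_eigenvalues(A0, A1, A2):
        """Eigenvalues of P(lam) = A0 + lam*A1 + lam^2*A2 via a companion
        linearization: solve the generalized problem X v = lam * Y v."""
        n = A0.shape[0]
        I, Z = np.eye(n), np.zeros((n, n))
        X = np.block([[Z, I], [-A0, -A1]])
        Y = np.block([[I, Z], [Z, A2]])
        return eigvals(X, Y)

    # sanity check on the scalar polynomial 2*lam^2 - 3*lam + 1 = 0 -> roots 0.5 and 1
    print(np.sort(quadratic_eigenvalues(np.array([[1.0]]),
                                        np.array([[-3.0]]),
                                        np.array([[2.0]]))))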

Recipients 2017

  • The proposed research activity will be carried out in the Data Intensive Systems and Applications (DISA) laboratory of Masaryk University, located in Brno, Czech Republic. The DISA laboratory is directed by Prof. Pavel Zezula, whose research activity has achieved significant results in the fields of Similarity Search, Image Retrieval, and Database Systems. This project aims to model the visual space of deep features using generative models, in order to generate new samples that can be used to query CBIR systems and explore their content. Novel neural network architectures, such as Variational Autoencoders and Generative Adversarial Networks, have been successfully adopted to generate high-quality samples of natural images, and we plan to investigate their application to the generation of deep features conditioned on the particular needs of the user.
  • Catena Matteo
    In Web Search Engines (WSEs), a query can be rewritten in several ways to enhance effectiveness at the cost of a longer processing time, for example by applying term proximity or expansions. Selective Query Rewriting (SQR) predicts the processing times of possible rewritings to decide, on a per-query basis, which is the most effective one that can be executed in the allowed amount of time. To process rewritten queries, search engines can resort to ephemeral posting lists generated on the fly from an inverted index. Hence, SQR cannot be used with state-of-the-art query processing techniques, such as BlockMaxWand (BMW). To boost efficiency, BMW leverages precomputed posting-list properties (e.g., upper scores), but such information is unavailable for ephemeral posting lists. The visit will take place at the School of Computing Science, University of Glasgow, under the responsibility of Dr. Craig Macdonald. The project aims to reconcile SQR with BMW by exploiting dedicated approximations of upper scores or analytical estimation of posting-list cardinalities (a toy approximation sketch follows this list).
  • Big Data offer the capability of creating a digital nervous system of our society, enabling the monitoring and prediction of various phenomena in quasi real time. With that, however, comes the need to nowcast changes and events in nearly real time as well. Choi and Varian introduced the term nowcasting to describe the tendency of web searches to correlate with various indicators, which may prove helpful for short-term prediction. In the field of epidemiology, it was shown that search data from Google Flu Trends could help predict the incidence of influenza-like illnesses (ILI). But as Lazer et al. noticed, in February 2013 Google Flu Trends predicted more than double the proportion of doctor visits for ILI reported by the Centers for Disease Control. The visit will take place at the Laboratory for the Modeling of Biological + Socio-technical Systems (Mobs Lab), Northeastern University. The project aims to establish a new approach to nowcast influenza by correlating sales time series with influenza data (a minimal correlation sketch follows this list). During the collaboration with the Mobs Lab group, we aim to introduce an additional layer of information derived from epidemiological models that could help us better describe and nowcast the evolution of the influenza spread and its peak.
  • Sabbadin Manuele
    AR/VR applications have attracted much attention in recent years thanks to the development of new devices such as the Microsoft Hololens (AR) and the HTC Vive (VR). Even the mobile market has turned its attention to AR applications (ARKit, ...). The main goal of any AR application is to visualize, or even interact with, virtual objects inserted in the real surrounding world. In order to guarantee a correct and realistic visualization of the virtual object, a fundamental step is the computation of the amount of light that reaches each object point from the environment, simulating effects like indirect lighting, shadows and color bleeding. Imagine having a white object inside your AR application and putting it near a red wall: even if the 3D model of the object is highly accurate, the whole experience gets worse if the object does not pick up any shade of red. In the classical computer graphics field, several algorithms to compute the global illumination of a scene have been proposed (ray tracing, radiosity, PBGI, ...), but most of them are not real-time. The visit will take place at Telecom ParisTech under the responsibility of Prof. Tamy Boubekeur. The project aims to develop a real-time version of the Point Based Global Illumination (PBGI) algorithm and to use it in an AR/VR scenario, starting from a raw acquisition of the environment (a naive gathering sketch follows this list).
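
For the selective query rewriting project above, here is a speculative toy of the kind of upper-score approximation that could let BMW-style dynamic pruning work with ephemeral posting lists: the unknown maximum score of a list built on the fly for a term pair is bounded using the precomputed upper scores of its component terms. The choice of bound (a sum) and the pruning check are illustrative assumptions, not the project's actual method.

    def approx_upper_score(component_terms, upper_scores):
        """Optimistic bound on the max score of an ephemeral posting list
        (e.g., a proximity pair) built from known single-term upper scores."""
        return sum(upper_scores[t] for t in component_terms)

    def can_skip(candidate_upper_bounds, threshold):
        """BMW-style check: if the sum of the upper bounds of the lists still to
        be scored cannot beat the current top-k threshold, skip the document."""
        return sum(candidate_upper_bounds) <= threshold

    upper_scores = {"harry": 3.1, "potter": 4.7}
    ub_pair = approx_upper_score(["harry", "potter"], upper_scores)
    print(ub_pair, can_skip([ub_pair], threshold=9.0))   # 7.8 True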
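For the influenza nowcasting project above, a minimal sketch of the underlying signal check: pick the lag at which a product-sales time series correlates best with the ILI incidence series. The synthetic data and the Pearson measure are assumptions for illustration only.

    import numpy as np

    def best_lag_correlation(sales, ili, max_lag=4):
        """Return (lag, r): the lag (in weeks) at which the sales series best
        correlates with the ILI series, and the Pearson coefficient there."""
        best_lag, best_r = 0, -1.0
        for lag in range(max_lag + 1):
            a = sales[:len(sales) - lag] if lag else sales
            b = ili[lag:]
            r = np.corrcoef(a, b)[0, 1]
            if r > best_r:
                best_lag, best_r = lag, r
        return best_lag, best_r

    weeks = np.arange(52)
    ili = np.exp(-((weeks - 30) ** 2) / 50.0)               # synthetic seasonal peak
    sales = np.roll(ili, -2) + 0.05 * np.random.rand(52)    # sales lead ILI by ~2 weeks
    print(best_lag_correlation(sales, ili))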
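For the real-time PBGI project above, the snippet sketches the brute-force gathering step that a point-based global illumination method must accelerate: at each shading point, every surfel (an oriented disk with an area and an outgoing radiance) contributes according to the two cosine terms and the squared distance. The real algorithm replaces this O(N) loop with a hierarchical (tree-cut) traversal; this version only shows what is being approximated, with made-up surfel data.

    import numpy as np

    def gather_irradiance(p, n, surfels):
        """Brute-force PBGI-style gather: accumulate the contribution of every
        surfel (position, normal, area, rgb radiance) at shading point p with
        surface normal n. Normals are assumed to be unit length."""
        e = np.zeros(3)
        for sp, sn, area, radiance in surfels:
            d = sp - p
            dist2 = float(d @ d) + 1e-8           # avoid division by zero
            w = d / np.sqrt(dist2)                # direction towards the surfel
            cos_recv = max(float(n @ w), 0.0)     # receiver cosine
            cos_emit = max(float(sn @ -w), 0.0)   # emitter cosine
            e += radiance * (area * cos_recv * cos_emit / (np.pi * dist2))
        return e

    # one red surfel one unit above the shading point, facing down
    surfels = [(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0]),
                0.1, np.array([1.0, 0.2, 0.2]))]
    print(gather_irradiance(np.zeros(3), np.array([0.0, 0.0, 1.0]), surfels))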

Recipients 2016

  • Customer-analyst interviews are considered among the most effective means to elicit requirements. Previous work on interviews studied the effect of specific variables on the success of interviews, such as domain knowledge, cognitive strategies, and the analyst's experience. Recent work of the applicant studied the phenomenon of ambiguity in interviews, and showed that, when detected, ambiguity can help the analyst disclose the tacit knowledge of the customer. The project aims at laying the basis for increasing the number of detected ambiguities. Based on the idea that ambiguities are perceived in different ways by different subjects, we investigate to which extent ambiguities identified by a listener (a subject who listens to the interview audio) differ from those detected by the analyst. Ambiguities of the listener can be used to identify new questions to scope the (possibly tacit) knowledge of the customer. The project includes an experiment in which a set of customer-analyst interviews is performed with students of UTS by means of role-playing. Students will also play the role of listeners, and the ambiguities detected by analysts and listeners will be compared. The project will be carried out at the University of Technology Sydney (UTS), Faculty of Engineering and Information Technology, in the Requirements Engineering Research Laboratory, which focuses on Requirements Engineering (RE) research and training. Prof. Didar Zowghi, head of the laboratory, carries out research in empirical RE and has extensive experience in performing role-playing experiments with students, such as the one proposed in this project.
  • This project focuses on the Random Indexing (RI) DSM method. RI has strong theoretical support and, despite being a context-counting method, it scales linearly to arbitrarily large datasets. More interestingly, the relations between RI and (Convolutional) NN models have recently been pointed out. Since NNs are the current state of the art in large-scale semantic processing of text data, we believe that further investigating the relations between RI and NNs is a key step to better understand NN-based DSMs (a minimal RI sketch follows this list). The project will be carried out at the Swedish Institute of Computer Science (SICS), an independent non-profit research organization with a research focus on applied computer science. The institute conducts research in several areas, including big data analytics, machine learning and optimization, large-scale network-based applications, and human-computer interaction.
  • In this project, we intend to develop efficient and scalable solutions for real-time smart mobility systems by using information retrieval (IR) techniques for the management and querying of spatio-temporal data. Typical examples of these kinds of queries include spatial predicates such as "covers" (e.g., "find metro stations in Paris") or "intersects" (e.g., "find possible carpool rides"); a minimal predicate sketch follows this list. The research will be carried out in the Terrier group at the University of Glasgow, a leading group in the areas of Information Retrieval and Smart Cities. Professors Craig Macdonald and Iadh Ounis have an excellent record of high-level publications in top conferences in these fields.
  • Danielis Alessandro
    The proposal is to improve an imaging (hardware/software) technique for the generation of images from data acquired by an HSI system, which is an integrated part of the Wize Mirror, the smart imaging prototype developed in the EU-FP7 SEMEOTICONS project. This prototype is able to acquire and process multimedia data about a person's face and translate these into measurements and descriptors automatically evaluated by computerized applications, providing an evaluation of an individual's wellbeing conditions with respect to cardio-metabolic risk factors. The research will be carried out at the Department of Biomedical Engineering, Linköping University (LiU), Sweden, where research is carried out within the fields of biomedical optics, signal and image processing, ultrasound and bio-acoustics, modelling and simulation in physiology, neuro-engineering, and knowledge-based decision support systems.
  • Aggregative data infrastructures consist of large ecosystems of data collection and processing services, possibly relying on advanced tools and workflows, used to build complex graphs of information in order to capture at best the scientific output of a given research community. The resulting data is an important asset for the community, which demands guarantees on its "correctness" and "quality" over time. However, the lack of assurances from aggregated data sources, the occurrence of unexpected errors at any level, and the ever-changing nature of such infrastructures (in terms of algorithms and workflows) make a monitoring strategy vital for delivering high-quality results to end users. For this purpose, the applicant's PhD focused on the design of DataQ, a system providing a flexible way to outline monitoring scenarios, gather metrics from data flows, and detect anomalies (a generic anomaly-detection sketch follows this list). This collaboration is intended to validate the DataQ approach in the CORE real-world application. The research will be carried out at the Knowledge Media Institute (KMi) of the Open University, based in Milton Keynes, UK, which researches and develops solutions in the fields of Cognitive and Learning Sciences, Artificial Intelligence and Semantic Technologies, and Multimedia. The advisor for the proposed collaboration is Dr. Petr Knoth, founder and manager of CORE, one of the most prominent European data infrastructures, aggregating over 25 million records from about a thousand Open Access repositories.
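
For the Random Indexing project above, a minimal sketch of the core of RI: sparse ternary index vectors accumulated over a sliding context window. The dimensionality, sparsity, and toy corpus are illustrative assumptions.

    import numpy as np

    def build_index_vectors(vocab, dim=512, nnz=8, seed=0):
        """Assign each word a sparse random index vector with nnz entries in {-1, +1}."""
        rng = np.random.default_rng(seed)
        index = {}
        for w in vocab:
            v = np.zeros(dim)
            pos = rng.choice(dim, size=nnz, replace=False)
            v[pos] = rng.choice([-1.0, 1.0], size=nnz)
            index[w] = v
        return index

    def train_context_vectors(sentences, index, window=2):
        """Accumulate, for each word, the index vectors of its neighbours."""
        ctx = {w: np.zeros_like(v) for w, v in index.items()}
        for sent in sentences:
            for i, w in enumerate(sent):
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        ctx[w] += index[sent[j]]
        return ctx

    corpus = [["the", "cat", "sat", "on", "the", "mat"],
              ["the", "dog", "sat", "on", "the", "rug"]]
    vocab = {w for s in corpus for w in s}
    ctx = train_context_vectors(corpus, build_index_vectors(vocab))
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    print(round(cos(ctx["cat"], ctx["dog"]), 2))   # similar contexts -> high similarity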
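For the smart mobility project above, a minimal sketch of the two spatial predicates mentioned in the abstract, evaluated on axis-aligned bounding boxes; real spatio-temporal indexes would of course use finer geometries and time intervals, and the coordinates below are rough illustrative values.

    from dataclasses import dataclass

    @dataclass
    class BBox:
        min_x: float
        min_y: float
        max_x: float
        max_y: float

        def covers(self, other: "BBox") -> bool:
            """True if this box fully contains the other (e.g., Paris covers a metro station)."""
            return (self.min_x <= other.min_x and self.max_x >= other.max_x and
                    self.min_y <= other.min_y and self.max_y >= other.max_y)

        def intersects(self, other: "BBox") -> bool:
            """True if the two boxes overlap (e.g., two candidate carpool rides)."""
            return not (other.min_x > self.max_x or other.max_x < self.min_x or
                        other.min_y > self.max_y or other.max_y < self.min_y)

    paris = BBox(2.22, 48.81, 2.47, 48.90)       # rough lon/lat extent of Paris
    station = BBox(2.34, 48.85, 2.35, 48.86)
    ride = BBox(2.40, 48.88, 2.60, 48.95)
    print(paris.covers(station), paris.intersects(ride))   # True True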
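For the DataQ project above, a generic sketch of the kind of check a monitoring scenario could run on a gathered metric (here, daily record counts from an aggregated source); the z-score detector and the numbers are stand-in assumptions, not DataQ's actual strategy.

    import numpy as np

    def zscore_anomalies(series, threshold=3.0):
        """Indices where the metric deviates from its mean by more than
        `threshold` standard deviations."""
        series = np.asarray(series, dtype=float)
        mu, sigma = series.mean(), series.std()
        if sigma == 0:
            return np.array([], dtype=int)
        return np.where(np.abs(series - mu) / sigma > threshold)[0]

    daily_records = [1010, 990, 1005, 998, 1002, 120, 1001, 995]   # one suspicious drop
    print(zscore_anomalies(daily_records, threshold=2.0))          # [5]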

Recipients 2015

  • Ferreira de Miranda Breno
    Test coverage information can help testers decide when to stop testing and augment their test suites when the measured coverage is not deemed sufficient. Since the 1970s, research on coverage testing has been very active, with much effort dedicated to the definition of new, more cost-effective coverage criteria or to the adaptation of existing ones to different domains. All these studies share the premise that, after defining the entity to be covered (e.g., branches), one cannot consider a program adequately tested if some of its entities have never been exercised by any input data. However, it is not the case that all entities are of interest in every context. This presentation will provide some intuition on how we can redefine coverage criteria so as to focus on the program parts that are relevant to the testing scope (a minimal sketch follows this list). In particular, I will discuss how our notion of contextualized coverage could be applied to the context of small Unmanned Aerial Vehicles (UAVs).
  • Research on effectively representing the visual features of images has received much attention over the last decade. Although hand-crafted local features (e.g., SIFT and SURF) and their encodings (e.g., Bag of Words and Fisher Vectors) have shown high effectiveness in image classification and retrieval, the emerging deep Convolutional Neural Networks (CNNs) have brought about breakthroughs in processing multimedia content. In particular, the activations produced by an image within the intermediate layers of a CNN have been found to be notably effective as high-level image descriptors (a minimal extraction sketch follows this list). This presentation will provide a comparison of several image representations for visual instance retrieval, including CNN features, Fisher Vectors, and their combinations. Two applications will be presented: the visual recognition of ancient inscriptions (such as Greco-Latin epigraphs), and the retrieval of human motions using RGB representations of spatio-temporal motion capture data.
  • Coletto Mauro
    On-line social networks are complex ensembles of interlinked communities that interact on different topics. Some communities are characterized by what are usually referred to as deviant behaviors, conducts that are commonly considered inappropriate with respect to society's norms or moral standards. Eating disorders, drug use, and adult content consumption are just a few examples. We refer to such communities as deviant networks. It is commonly believed that such deviant networks are niche, isolated social groups, whose activity is well separated from mainstream social-media life. According to this assumption, research studies have mostly considered them in isolation. In this work we focused on adult content consumption networks, which are present in many on-line social media and in the Web in general. We found that a few small and densely connected communities are responsible for most of the content production. Differently from previous work, we studied how such communities interact with the whole social network. We aim at setting the basis to study deviant communities in context.
  • In forensic investigations, several digital sources exist that can be used to provide evidence for a crime case, from CCTVs to NFC readers, and from network routers to PC hard disks. Inspecting all the potential sources that might be relevant to the crime can be time-consuming. Moreover, different sources might provide similar case-relevant information while requiring different costs of inspection, both in terms of time and in terms of the resources to be involved in the inspection. Means are therefore required to prioritise the evidence sources based on their relevance and cost (a toy prioritisation sketch follows this list). The seminar will present the research currently performed towards the solution of this problem. In particular, we will outline the different concepts of relevance available in the literature, and we will describe the notion of evidential relevance applied to digital forensic investigations. In addition, we will discuss the ongoing work concerning the usage of Bayesian Networks to estimate the relevance of evidence sources during forensic investigations, and potential directions to combine relevance and cost estimates.
  • A lot of research effort has been spent in recent years to devise effective solutions that allow machines to understand the main topics covered in a document. Several approaches have been proposed in the literature to address this problem; the most popular is Entity Linking (EL), a task consisting in automatically identifying and linking the entities mentioned in a text to their URIs in a given knowledge base, e.g., Wikipedia. Despite its apparent simplicity, the EL task is very challenging due to the ambiguity of natural languages. However, not all the entities mentioned in a document have the same relevance and utility for understanding the topics being discussed. Thus, the related problem of identifying the most relevant entities present in a document, also known as Salient Entities, is attracting increasing interest and has a large impact on several text analysis and information retrieval tasks. This seminar will focus on a novel supervised technique that comprehensively addresses both entity linking and saliency detection. We found that blending the two tasks together makes it possible both to improve the accuracy of disambiguation and to exploit complex and computationally expensive features in order to detect salient entities with high accuracy. In addition, we will outline the strategy adopted to build a novel dataset of news manually annotated with entities and their saliency.
  • Face recognition is an appealing and challenging problem in many research fields, and several approaches have been proposed in the literature to address it. Recently, the diffusion of computationally powerful GPUs and the availability of large quantities of data have enabled the exploitation of machine learning techniques for the face recognition problem. In particular, Deep Convolutional Neural Networks have been successfully used in several application domains, such as image understanding, classification tasks, and speech recognition. This presentation will show a Deep Learning approach to the face recognition problem.
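
For the contextualized-coverage talk above, a minimal sketch of the idea of restricting a coverage criterion to the entities that matter for the testing scope; the entity identifiers and the relevance set are invented for illustration.

    def coverage(covered, entities):
        """Classic criterion: fraction of all entities (e.g., branches) exercised."""
        return len(covered & entities) / len(entities) if entities else 1.0

    def contextualized_coverage(covered, entities, relevant):
        """Same ratio, but computed only over the entities relevant to the scope."""
        scope = entities & relevant
        return len(covered & scope) / len(scope) if scope else 1.0

    branches = {"b1", "b2", "b3", "b4", "b5", "b6"}
    exercised = {"b1", "b2", "b3"}
    relevant_to_uav_mission = {"b1", "b2", "b6"}     # e.g., navigation-related branches
    print(coverage(exercised, branches))                                          # 0.5
    print(contextualized_coverage(exercised, branches, relevant_to_uav_mission))  # ~0.67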
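For the visual instance retrieval talk above, a minimal PyTorch sketch of using intermediate CNN activations as image descriptors: the pooled output of a ResNet-18 backbone is captured with a forward hook and L2-normalised for cosine comparison. The untrained weights and random inputs are placeholders for a pretrained model and real images.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    model = models.resnet18()   # in practice, pretrained weights would be loaded
    model.eval()

    activations = {}

    def save_activation(module, inputs, output):
        activations["feat"] = output.flatten(1)   # (batch, 512)

    model.avgpool.register_forward_hook(save_activation)

    def describe(images):
        """L2-normalised descriptors taken from the penultimate (pooled) layer."""
        with torch.no_grad():
            model(images)
        return F.normalize(activations["feat"], dim=1)

    a = describe(torch.rand(1, 3, 224, 224))
    b = describe(torch.rand(1, 3, 224, 224))
    print(float(a @ b.T))   # cosine similarity between the two descriptors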
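For the digital forensics talk above, a toy sketch of ranking evidence sources by an expected-relevance-to-cost ratio; the probabilities and costs are invented, and the actual work relies on Bayesian Networks rather than this flat scoring.

    def prioritise(sources):
        """Rank evidence sources by expected relevance per unit inspection cost.
        Each source is (name, p_relevant, inspection_cost_hours)."""
        return sorted(sources, key=lambda s: s[1] / s[2], reverse=True)

    sources = [
        ("CCTV footage",    0.6, 40.0),
        ("NFC reader logs", 0.3,  2.0),
        ("network router",  0.4,  8.0),
        ("PC hard disk",    0.9, 60.0),
    ]
    for name, p, cost in prioritise(sources):
        print(f"{name:15s} score={p / cost:.3f}")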

Recipients 2014

Recipients 2013

  • In the field of satellite access networks, the need for optimization studies in hybrid access schemes with interference cancellation is a trending topic. Studying and evaluating the suitability of either a random access (RA) or a dedicated access (DA) scheme in a hybrid setting, like DVB-RCS2, is the core part of the planned activity. A hybrid scheme can optimize the allocation of resources, ensuring their efficient use. In the case of elastic traffic and/or variable-bit-rate (VBR) data flows, the main objective is identifying a class of metrics able to drive the switch from a random to a dedicated access (and vice versa); a toy switching rule is sketched after this list.
    The activity will focus on the study of the implementation of random access protocols available at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt DLR - Institut für Kommunikation und Navigation, Satellitennetze - Weßling, Germany), under the supervision of Prof. Tomaso De Cola, with the objective of fully understanding it, possibly extending it, and using it for specific simulation test-beds. Classification algorithms may also be part of the activity, to classify the input traffic and then decide whether to transmit data following an RA or a DA protocol. Higher-layer and lower-layer classification algorithms are crucial to ensure the optimal utilization of the available bandwidth by reducing the dramatic fall of the throughput that occurs when the wrong transmission scheme is selected.
  • Ceccarelli Diego
    The largest part of Web documents currently does not contain semantic annotations, and they are commonly modeled as simple bags of words. One of the main approaches for enhancing search effectiveness on these documents consists in automatically enriching them with the most relevant related entities. Given a plain text, the Entity Linking task consists in identifying small fragments of text (in the following interchangeably called spots or mentions) referring to any entity (represented by a URI) listed in a given knowledge base. Usually the task is performed in three steps: i) Spotting: a set of candidate mentions is detected in the input document, and for each mention a list of candidate entities is retrieved; ii) Disambiguation: for each spot associated with more than one candidate, a single entity is selected to be linked to the spot; iii) Ranking: the list of detected entities is ranked according to some policy, e.g., annotation confidence (a toy walk-through of these steps follows this list). Our proposal is to develop a method for detecting the important entities mentioned in a document; we refer to this as the Entity Ranking problem. We plan to use an existing or a new text collection to create a benchmark for Entity Ranking by manually labeling the entities mentioned in a given document with a relevance score. The resulting datasets will be made available to the research community. Then, we will design an improved algorithm able to link documents to entities and to rank the entities according to their importance.
  • Marcheggiani Diego
    One of the main problems in the world of machine learning is the shortage of annotated training data. Annotating data is an expensive process that requires a lot of human effort. This activity is even more expensive when we need training data for structured learning problems, where an instance is composed of several annotations, as in sequence labeling problems. The planned activity is to study new techniques aimed at minimizing the human effort involved in the annotation of training data for sequence labeling problems (a toy selection step is sketched after this list). These new techniques will merge strategies of active learning, semi-supervised learning and transfer learning in the domain of learning algorithms for sequence labeling, such as Conditional Random Fields and Hidden Markov Models. This research activity will be carried out at the Université Pierre et Marie Curie, Laboratoire d'Informatique de Paris 6 (LIP6), in Paris, under the supervision of Professor Thierry Artières.
  • In the field of oceanographic engineering, the demand for integrating machine learning into the design and development of autonomous underwater vehicles (AUVs) is rapidly increasing. The main goal of this work is to provide methods to be exploited for optimal surveys of the marine environment. It will focus on multi-sensor (namely optical and acoustic) data capturing and processing methods specifically devised for underwater scene understanding. The research activity will be carried out at the Ocean Systems Lab (OSL), a leading centre of excellence in Underwater Robotics and Underwater Signal and Image Processing, at Heriot-Watt University (Edinburgh, Scotland), under the supervision of Prof. David M. Lane. Results concerning the analysis of data provided by multi-sensor platforms in underwater environments are expected. Methods for processing acoustic and optical data aimed at large-scale map creation, pattern recognition, 3D reconstruction, and algorithm/data fusion will be explored and pursued.
  • 3D printing is becoming a pervasive technology in a variety of applications. The main advantage of these solutions is that the structural complexity of the printed objects does not affect the time and cost of the printing process. 3D printing has democratized the concept of industrial production: today it is possible to use 3D printing to manufacture arbitrary shapes. One limitation of the existing solutions is that they allow manufacturing with a single material only (e.g., ABS, PLA, resin, etc.). This considerably reduces the application domains, because the creation of 3D-printed pieces with continuously varying physical properties is also desirable. Although multi-material 3D printers are increasingly available, they are extremely expensive and rely on a limited set of materials. After an overview of 3D printing technologies, we will propose a novel method that, using geometric patterns, aims to approximate the physical behavior of a given material by means of single-material 3D printing.
  • The current acquisition pipeline for visual models of 3D worlds is based on a paradigm of planning a goal-oriented acquisition. The digital model of an artifact (an object, a building, up to an entire city) is produced by planning a specific scanning campaign, carefully selecting the acquisition devices, performing the on-site acquisition at the required resolution and then post-processing the acquired data to produce a beautified triangulated and textured model. However, in the future we will be faced with the ubiquitous availability of sensing devices; these devices deliver different types of data streams (for example smartphones, commodity stereo cameras, cheap aerial data acquisition devices, etc.) that need to be processed and displayed in a new way.
    The availability of this huge amount of data requires a change in the acquisition and processing technology: instead of a goal-driven acquisition that determines the devices and sensors, the acquisition process is determined by the sensors and the resulting available data. One of the main challenges is the development of ultra-fast geometry processing operators applied on the fly to continuous 3D data input streams. This requires two tasks. The first is the design of basic geometric operators with low computational and memory complexity and fine-grained parallel execution, exploiting recently introduced adaptive GPU geometry-related stages (e.g., tessellator unit, compute shaders, free-format inter-shader communication) and hybrid CPU-GPU runtime kernels. The second is the use of these basic operators to develop new algorithms for geometry processing, allowing data to be filtered, resampled to a chosen resolution, and equipped with a surface structure using as little time and memory as possible. The research activity will be carried out at the Telecom ParisTech institute in Paris under the supervision of Prof. Tamy Boubekeur.
  • Researchers have recently discovered that traditional mobility models adapted from the observation of particles or animals (such as Brownian motion and Lévy flights), and more recently from the observation of dollar bills, are not suitable to describe people's movements. Indeed, at a global scale humans are characterized by a huge heterogeneity in the way they move, since a Pareto-like curve is observed in the distribution of the characteristic distance traveled by users, the so-called radius of gyration. Although these discoveries have doubtlessly shed light on interesting and fascinating aspects of human mobility, the origin of the observed patterns still remains unclear: Why do we move so differently? What are the factors that shape our mobility? What are the movements or the locations that mainly determine the mobility of an individual? Can we divide the population into categories of users with similar mobility characteristics? The above issues need to be addressed if we want to understand key aspects of the complexity behind our society. In the current work, we exploit access to two big mobility datasets to present a data-driven study aimed at detecting and understanding the main features that shape the mobility of individuals (a small worked example of the radius of gyration follows this list).
    By using a dualistic approach that combines the strengths and weaknesses of network science and data mining, we showed that the mobility of a significant portion of the population is characterized by patterns of commuting between different and distant mobility "hearts".
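
For the satellite access project above, a toy example of the kind of load metric that could drive the RA/DA switch: under slotted-ALOHA-like random access the expected throughput is roughly G*exp(-G) (with G the normalised offered load), which collapses once the load grows, while dedicated access keeps a stable, if overhead-laden, share. The efficiency value and the crossover rule below are purely illustrative assumptions, not the metrics the project will actually derive.

    import math

    def ra_throughput(load):
        """Normalised slotted-ALOHA throughput for offered load `load` (packets/slot)."""
        return load * math.exp(-load)

    def da_throughput(load, efficiency=0.8):
        """Dedicated access: capped by the allocated capacity, minus signalling overhead."""
        return min(load, 1.0) * efficiency

    def choose_scheme(load):
        return "RA" if ra_throughput(load) >= da_throughput(load) else "DA"

    for g in (0.05, 0.2, 0.6, 1.5):
        print(f"load={g:4.2f}  RA={ra_throughput(g):.3f}  DA={da_throughput(g):.3f}  -> {choose_scheme(g)}")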
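For the entity linking project above, a toy walk-through of the three steps described in the abstract (spotting, disambiguation, ranking) on a tiny hand-made knowledge base; the dictionary lookup, the prior-based disambiguation, and the confidence values are illustrative stand-ins for real components.

    # tiny knowledge base: surface form -> list of (entity URI, prior probability)
    KB = {
        "paris":  [("wiki/Paris", 0.85), ("wiki/Paris,_Texas", 0.10)],
        "texas":  [("wiki/Texas", 0.95)],
        "france": [("wiki/France", 0.97)],
    }

    def spot(text):
        """Spotting: find candidate mentions with their candidate entities."""
        return [(tok, KB[tok]) for tok in text.lower().replace(",", "").split() if tok in KB]

    def disambiguate(mentions):
        """Disambiguation: pick the highest-prior candidate for each mention."""
        return [(m, max(cands, key=lambda c: c[1])) for m, cands in mentions]

    def rank(linked):
        """Ranking: order linked entities by annotation confidence (here, the prior)."""
        return sorted(linked, key=lambda x: x[1][1], reverse=True)

    text = "Paris is the capital of France"
    for mention, (uri, conf) in rank(disambiguate(spot(text))):
        print(f"{mention:8s} -> {uri:12s} ({conf:.2f})")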
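For the sequence labeling project above, a minimal sketch of one active learning ingredient mentioned in the abstract: selecting, from an unlabeled pool, the sequences the current model is least confident about, using per-token marginal probabilities. The random softmax outputs below are placeholders for real CRF/HMM marginals.

    import numpy as np

    def least_confidence(marginals):
        """Uncertainty of one sequence: 1 - confidence of its least certain token.
        `marginals` has shape (sequence_length, n_labels), rows summing to 1."""
        return 1.0 - marginals.max(axis=1).min()

    def select_for_annotation(pool, k=2):
        """Pick the k most uncertain sequences from the unlabeled pool."""
        scored = sorted(enumerate(pool), key=lambda p: least_confidence(p[1]), reverse=True)
        return [idx for idx, _ in scored[:k]]

    rng = np.random.default_rng(0)
    pool = []
    for _ in range(5):
        logits = rng.normal(size=(8, 4))                                  # 8 tokens, 4 labels
        probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        pool.append(probs)                                                # stand-in for CRF marginals
    print(select_for_annotation(pool))   # indices of the two sequences to annotate next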
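For the human mobility project above, a worked sketch of the radius of gyration mentioned in the abstract, r_g = sqrt((1/N) * sum_i ||r_i - r_cm||^2), computed on planar coordinates; real GPS traces would first be projected or handled with the haversine distance, and the two toy trajectories are invented.

    import numpy as np

    def radius_of_gyration(points):
        """r_g = sqrt(mean squared distance of the visited points from their
        centre of mass). `points` is an (N, 2) array of planar coordinates (km)."""
        points = np.asarray(points, dtype=float)
        centre = points.mean(axis=0)
        return float(np.sqrt(((points - centre) ** 2).sum(axis=1).mean()))

    commuter = np.array([[0.0, 0.0]] * 5 + [[30.0, 0.0]] * 5)   # home and a distant workplace
    homebody = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0], [1.5, 0.2]])
    print(round(radius_of_gyration(commuter), 1), round(radius_of_gyration(homebody), 1))  # 15.0 0.7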