Grants for ProgettISTI

The ISTI Grants for ProgettISTI enable staff members under 34 years of age to carry out research in cooperation with colleagues from different laboratories. They complement similar CNR programs.

Recipients 2017

  • Authors: Masetti Giulio, Robol Leonardo

    The increasing complexity of cyber-physical systems providing vital services to our everyday life exacerbates the challenge of devising effective methods to analyze their behaviour. Model-based approaches are well suited to analyzing complex systems from a dependability and performability perspective: they provide useful insights into how component failures might propagate along interconnected infrastructures, possibly leading to cascading or escalating failures, and they allow a quantitative assessment of the impact of these failures on the service delivered to users. Generalized Stochastic Petri Nets (GSPNs) are commonly adopted, both in academia and in industry, as a modeling formalism for cyber-physical systems. A GSPN can be represented as a Markov chain, and the performability (performance + dependability) measures can be expressed in terms of its probability vector p(t). The main bottleneck in these computations is the number of states of the Markov chain, which coincides with the length of this vector and grows exponentially with the number of system components. Recently, some efforts have been devoted to exploiting a tensor structure in p(t). Most of these results target the computation of the steady-state distribution and ignore the transient phase. Our goal is to extend tensor approximation techniques to store the vector p(t) efficiently. This will enable the analysis of much larger Petri Nets, previously impractical because of storage requirements, at a reduced computational cost. We plan to develop an easily accessible software package (TAPAS) that implements the computation of the most commonly used measures, hiding the complexity of the mathematical tools "behind the scenes".
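    To make the bottleneck concrete, here is a minimal sketch (assuming SciPy, with an invented 3-state generator matrix, and unrelated to the future TAPAS code) of brute-force transient analysis of the Markov chain underlying a GSPN: the transient vector is obtained as p(t) = p(0)·exp(Qt). For realistic models the state space, and hence p(t) itself, grows far beyond what this dense representation can handle, which is exactly what the proposed tensor approximation of p(t) is meant to address.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Minimal sketch (not the planned TAPAS package): brute-force transient
    # analysis of a tiny continuous-time Markov chain of the kind a GSPN
    # generates. Rates and state labels are invented for illustration.
    Q = np.array([[-0.3,  0.2,  0.1],   # state 0: "healthy"
                  [ 0.5, -0.6,  0.1],   # state 1: "degraded"
                  [ 0.0,  0.4, -0.4]])  # state 2: "failed"
    p0 = np.array([1.0, 0.0, 0.0])      # start in the healthy state

    for t in (1.0, 10.0, 100.0):
        pt = p0 @ expm(Q * t)           # p(t) = p(0) exp(Q t)
        print(f"t={t:6.1f}  p(t)={pt}  unavailability={pt[2]:.3f}")
    ```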

Recipients 2016

  • Authors: Cintia Paolo, Girolami Michele

    The powerful tools of Data Science are disrupting the sports world. The availability of cheaper and ever-smaller monitoring sensors opens up new scenarios for improving sports performance. In this evolving environment, we propose Machine-Training, an application for cyclists based on data-driven models that provides a software personal trainer. By analyzing a cyclist's efforts, and thanks to precise measurements from heart-rate, power and other biometric sensors, we aim to develop a system tailored to the needs of each individual rider. Cycling is an evolving sport with many practitioners, from very young to older riders. The model we study and develop applies to both professional and amateur riders: the data revolution has already entered the professional cycling world, but Data Science has yet to contribute to the analysis of cycling performance.
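    As a purely illustrative example of the kind of data-driven quantity a software personal trainer could derive from a power sensor, the sketch below computes Normalized Power and a training-load score over a synthetic ride; the metric definitions and the 250 W threshold are common cycling-training conventions assumed here, not a description of the Machine-Training models.

    ```python
    import numpy as np

    # Illustrative only: Normalized Power (30 s rolling average, 4th-power
    # mean) and a training-load score from a synthetic 1 Hz power stream.
    def normalized_power(power_w, hz=1):
        window = 30 * hz
        rolling = np.convolve(power_w, np.ones(window) / window, mode="valid")
        return np.mean(rolling ** 4) ** 0.25

    rng = np.random.default_rng(0)
    seconds = np.arange(3600)                                 # a one-hour ride
    power = 200 + 60 * np.sin(seconds / 300) + rng.normal(0, 20, seconds.size)

    np_w = normalized_power(power)
    intensity = np_w / 250                                    # assumed threshold power
    tss = len(power) * np_w * intensity / (250 * 3600) * 100  # training stress score
    print(f"NP={np_w:.0f} W  IF={intensity:.2f}  TSS={tss:.0f}")
    ```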

  • Authors: Germanese Danila, Palumbo Filippo

    With the aging of society, the number of elderly people with dementia is increasing. Dementia is a chronic neurodegenerative condition whose symptoms include memory decline and the impairment of other cognitive abilities, eventually leaving patients unable to carry out everyday tasks. Moreover, some patients become worried, angry, distressed, and violent. Dementia is among the most burdensome conditions, not only for the patients who live with it but also for their caregivers. Novel approaches to at-home care that lower the caregivers' burden are therefore urgently needed. Doll therapy is a non-pharmacological dementia care practice that can help improve the mental status of elderly patients. Adopting the so-called “Empathy Doll” may allow these patients to focus their attention on a very simple task, caring for a doll, thus avoiding the confused thoughts that would otherwise crowd their mind and that are at the root of their behavioral disorders. Several studies have aimed at assessing the impact of doll therapy on patients with severe dementia. Although there is a plethora of anecdotal and experimental evidence of its benefits, this evidence is usually based on occupational therapists' observational, subjective measures and on non-rigorous procedures. In the EMPATHY project, we propose a sensorized and networked doll to improve the assessment and validation of this non-pharmacological dementia care. Sensor data are processed by means of machine learning techniques that provide information on how the doll is used during therapy and on its effects on the patient, in terms of stress levels. The doll will also provide information about the patient's general psychophysical state, and its networking capabilities will allow primary (specialists) and secondary (relatives) caregivers to monitor the effectiveness of the intervention remotely, thus improving the quality of care and easing the caregivers' burden.
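    Purely as an illustration of the kind of processing pipeline described above, and not the actual EMPATHY design, the following sketch classifies calm versus agitated handling of the doll from a synthetic 3-axis accelerometer stream, using windowed statistics and a random forest; the sensor choice, the features and the labels are all assumptions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(acc, window=50):
        """Split a 3-axis accelerometer stream into fixed windows and compute
        simple statistics (per-axis mean/std, magnitude mean/std) per window."""
        n = len(acc) // window
        acc = acc[: n * window].reshape(n, window, 3)
        mag = np.linalg.norm(acc, axis=2)
        return np.column_stack([acc.mean(1), acc.std(1), mag.mean(1), mag.std(1)])

    # Synthetic stand-in for the doll's motion sensor: calm handling has low
    # variance, agitated handling has high variance (labels 0 and 1).
    rng = np.random.default_rng(1)
    calm = rng.normal(0.0, 0.2, (5000, 3))
    agitated = rng.normal(0.0, 1.5, (5000, 3))
    X = np.vstack([window_features(calm), window_features(agitated)])
    y = np.array([0] * 100 + [1] * 100)

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    new_stream = rng.normal(0.0, 1.4, (250, 3))      # unseen, rather agitated
    print(clf.predict(window_features(new_stream)))  # expected: mostly 1s
    ```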

  • Authors: Banterle Francesco, Moreo Fernandez Alejandro David

    The task of Automatic Exposure Bracketing (AEB) consists of merging Low Dynamic Range (LDR) images taken at different exposures into a High Dynamic Range (HDR) image that captures all scene details. Since each shot is captured at a slightly different moment, AEB has to face two challenging problems: image alignment, i.e., correcting the (typically many) pixel misalignments; and deghosting, i.e., correcting the partially transparent or missing features that result from moving objects. Our proposal stems from the observation that the LDR shots can be thought of as a well-defined sequence, with both exposure and time depending on the position in the sequence. We therefore plan to investigate the potential benefits for AEB of combining Deep Convolutional Neural Networks (ConvNets) with Long Short-Term Memory (LSTM) networks. ConvNets and LSTMs are two recently emerged Deep Learning (DL) architectures that are particularly well suited to learning from images and from sequential data, respectively. Since deep neural networks often require large datasets to deliver competitive results, we propose to synthesize a large dataset of realistic rendered images for the task. With an adequate network architecture and enough training data, we expect to improve AEB results in both quality and running time: once the model parameters have been optimized, generating the image reduces to a single forward pass through the network. We believe our architecture could be an interesting contribution not only to AEB but also to the broader field of per-pixel prediction methods such as panorama stitching, medical image registration, etc.
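    To make the proposed combination concrete, the sketch below (in PyTorch) feeds each LDR exposure through a small convolutional encoder and runs an LSTM along the exposure sequence for every pixel before decoding an HDR estimate; the layer sizes, the per-pixel recurrence and the direct radiance output are illustrative assumptions rather than the architecture the project will ultimately adopt.

    ```python
    import torch
    import torch.nn as nn

    class BracketMergeNet(nn.Module):
        """Hypothetical sketch: a per-exposure ConvNet encoder, an LSTM over
        the exposure sequence of each pixel, and a per-pixel HDR decoder."""
        def __init__(self, feat=32):
            super().__init__()
            # ConvNet encoder applied to each LDR exposure independently
            self.encoder = nn.Sequential(
                nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
                nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
            )
            # LSTM consumes the per-pixel feature vectors along the exposure axis
            self.lstm = nn.LSTM(input_size=feat, hidden_size=feat, batch_first=True)
            # Decoder maps the final hidden state to an HDR radiance estimate
            self.decoder = nn.Conv2d(feat, 3, 3, padding=1)

        def forward(self, bracket):                  # bracket: (B, T, 3, H, W)
            B, T, C, H, W = bracket.shape
            f = self.encoder(bracket.reshape(B * T, C, H, W))      # (B*T, F, H, W)
            f = f.reshape(B, T, -1, H, W).permute(0, 3, 4, 1, 2)   # (B, H, W, T, F)
            f = f.reshape(B * H * W, T, -1)                        # per-pixel sequences
            out, _ = self.lstm(f)                                  # (B*H*W, T, F)
            h = out[:, -1].reshape(B, H, W, -1).permute(0, 3, 1, 2)
            return self.decoder(h)                                 # (B, 3, H, W)

    hdr = BracketMergeNet()(torch.rand(1, 3, 3, 64, 64))  # 3 exposures, 64x64
    print(hdr.shape)                                      # torch.Size([1, 3, 64, 64])
    ```

    A real system would also need to address the alignment and deghosting issues discussed above, for instance by letting the recurrent part see spatial neighbourhoods rather than isolated pixels.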