HAMLET

Hardware Acceleration of Machine LEarning Tasks

Contacts
Abstract

Recent advances in Machine Learning (ML) have enabled new solutions for modeling phenomena that were previously too complex for computers to handle. This radical change has opened new horizons for business and science, improving the quality, speed and accuracy of models in virtually every domain of human activity. The complexity of machine-learnt models and their widespread use, however, call for novel algorithmic solutions that make both the learning phase and the use of the learnt models fast and scalable in large-scale applications.

This proposal aims at investigating novel approaches to make ML models efficient and scalable. The proposing groups joined forces on the basis of the shared interests, needs and complementary skills of their institutions. The Argentinean team (UNSL) has experience in parallel models and platforms, including SoC-based FPGAs. The Italian team (ISTI-CNR) has experience in algorithmic solutions for large-scale machine learning in web-scale applications. We propose to exploit this complementary expertise to investigate how hardware acceleration platforms such as System on Chip (SoC) based Field-Programmable Gate Arrays (FPGAs), whose characteristics differ markedly from those of traditional CPUs, can make ML models fast and scalable. The proposed work plan is pragmatically designed to maximize the chances of success: the first year's goal is to combine the proposers' skills in the development of an innovative SoC solution for additive ensembles of regression trees. The collaboration on this specific case study will allow the project to fully evaluate the impact of hardware acceleration on the efficiency of ML models for classification, regression and ranking tasks. During the second year, the project will generalize the lessons learnt from the first case study to investigate how FPGAs can be successfully exploited to accelerate other computationally intensive ML tasks.
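As a concrete illustration of the computational kernel targeted by the first-year case study, the sketch below shows how an additive ensemble of regression trees scores a feature vector: each tree is traversed root-to-leaf and the leaf outputs are summed. This is the kind of loop an SoC/FPGA implementation would accelerate. The array-based node layout, field names and toy ensemble are illustrative assumptions for exposition, not the project's actual design.

#include <stdio.h>

/* Illustrative, array-based layout for one node of a binary regression tree.
 * Leaves are marked by feature_id < 0 and carry the leaf output in 'value'. */
typedef struct {
    int   feature_id;   /* feature tested at this node, or -1 for a leaf */
    float threshold;    /* go left if x[feature_id] <= threshold */
    int   left, right;  /* indices of the children within the same tree array */
    float value;        /* leaf output (used only when feature_id < 0) */
} Node;

/* Score one feature vector with an additive ensemble:
 * traverse every tree root-to-leaf and sum the leaf outputs. */
float score(const Node *const *trees, int num_trees, const float *x) {
    float sum = 0.0f;
    for (int t = 0; t < num_trees; t++) {
        const Node *tree = trees[t];
        int n = 0;                          /* start at the root */
        while (tree[n].feature_id >= 0)     /* descend until a leaf is reached */
            n = (x[tree[n].feature_id] <= tree[n].threshold) ? tree[n].left
                                                             : tree[n].right;
        sum += tree[n].value;               /* accumulate the leaf prediction */
    }
    return sum;
}

int main(void) {
    /* Toy ensemble of two decision stumps over a 2-feature input. */
    Node t0[] = { {0, 0.5f, 1, 2, 0.0f}, {-1, 0, 0, 0, 1.0f}, {-1, 0, 0, 0, -1.0f} };
    Node t1[] = { {1, 0.3f, 1, 2, 0.0f}, {-1, 0, 0, 0, 0.5f}, {-1, 0, 0, 0, -0.5f} };
    const Node *ensemble[] = { t0, t1 };
    float x[] = { 0.2f, 0.7f };
    printf("score = %f\n", score(ensemble, 2, x));  /* 1.0 + (-0.5) = 0.5 */
    return 0;
}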


Duration

26 Months

Financial Institution

Institutional