Reconstruction of N-way Arrays by Partial Sampling

Place: Area della Ricerca CNR di Pisa - Room: C-29
Speakers
  • Cesar Federico Caiafa (LABSP-Brain Science Institute, RIKEN, Japan)
Referent

Emanuele Salerno

Abstract

In modern signal processing, data mining, and other applications, we are frequently faced with the problem of dealing with huge datasets (large-scale problems). Moreover, datasets usually have a multi-way (N-way) array structure in which each dimension (mode) has a particular physical meaning, such as time, frequency, or space. N-way arrays (tensors) generalize the mathematical concepts of vectors and matrices to higher dimensions (N>2) and have some attractive properties that make them useful for representing and processing N-dimensional data by decomposing them into factors. While many advances in algorithms for tensor decomposition have been achieved during the last decade, one of the main challenges today is to keep the computational load as low as possible, i.e., to obtain fast algorithms that use limited memory resources.
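As a concrete illustration of the N-way array structure described above, the following sketch (using NumPy; the axis meanings are hypothetical, not from the talk) shows a 3-way array, its "n-mode fibers" (vectors obtained by fixing all indices but one), and a mode-1 unfolding that rearranges those fibers into a matrix:

```python
import numpy as np

# A small 3-way array (tensor); the modes could stand, e.g.,
# for time x frequency x space (illustrative labels only).
X = np.arange(24).reshape(2, 3, 4)

# An n-mode fiber is obtained by fixing every index except the n-th.
fiber_mode1 = X[:, 0, 0]   # mode-1 fiber, shape (2,)
fiber_mode2 = X[0, :, 0]   # mode-2 fiber, shape (3,)
fiber_mode3 = X[0, 0, :]   # mode-3 fiber, shape (4,)

# A mode-1 unfolding arranges all mode-1 fibers as columns of a matrix
# (here using NumPy's row-major ordering convention).
X1 = X.reshape(2, -1).copy()   # shape (2, 12)
```

Decomposition methods such as the Tucker model operate on exactly these unfoldings and fibers.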

In this talk, an introduction to multi-way decomposition tools will be presented, with a focus on 'Tucker models', one of the possible generalizations to higher dimensions of the useful Singular Value Decomposition (SVD) of matrices. We will present some recent results and a new algorithm, namely the Fiber Sampling Tensor Decomposition (FSTD) algorithm [1], which allows one to reconstruct an N-way array (with arbitrary N>2) from a reduced subset of its entries. As a generalization of the classical column-row matrix decomposition (also known as the CUR or 'skeleton' decomposition), which approximates a matrix from a subset of its rows and columns, our result provides a new method for approximating a high-dimensional tensor using only the information contained in a subset of its 'n-mode fibers' (n = 1, 2, ..., N). The properties of this method will be analyzed and discussed. We will also discuss its potential applications in signal processing for cases where datasets are highly redundant, i.e., where they can be compressed by a low-rank tensor decomposition such as the Tucker model. Simulation results will be shown to illustrate the properties and the potential of this method.
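The classical column-row (CUR/skeleton) decomposition that FSTD generalizes can be sketched in a few lines of NumPy. This is a minimal illustration of the matrix case only, with randomly chosen row/column indices (practical methods select them more carefully); it is not the FSTD algorithm itself. For an exactly rank-r matrix, sampling r columns C, r rows R, and their intersection W with full rank gives the exact reconstruction A = C W^+ R:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an exactly rank-3 matrix: a "highly redundant" dataset.
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))

# Sample 3 column indices and 3 row indices (random here;
# real schemes use smarter selection for well-conditioned subsets).
J = rng.choice(40, size=3, replace=False)
I = rng.choice(50, size=3, replace=False)

C = A[:, J]              # sampled columns
R = A[I, :]              # sampled rows
W = A[np.ix_(I, J)]      # 3x3 intersection matrix

# Skeleton reconstruction: exact when rank(W) = rank(A) = 3.
A_hat = C @ np.linalg.pinv(W) @ R
rel_err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
```

The point of FSTD is that the same idea extends to N-way arrays: the tensor is recovered from a subset of its n-mode fibers rather than from matrix rows and columns.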