Ahmad Zaid Abassi


Viterbi Faculty of Electrical Engineering, Technion

Lower-Dimensional Representation: Compression, Faithfulness and Relevance

In this day and age, when massive amounts of data are available to computers and humans alike, it is of prime importance to enable tractable manipulation of, and inference from, acquired data. The most fundamental concept in data science is dimensionality reduction: a controlled transformation of data that reduces the number of random variables to consider, making the data's representation more efficient and avoiding the undesirable phenomena that appear as dimensionality grows without bound. While Principal Component Analysis (PCA), with its many variants, is the most basic dimensionality reduction technique, many inherently different techniques exist for distinct purposes across fields and disciplines, from rate-distortion (RD) theory and the Information Bottleneck (IB) in information theory to the Generalized Karhunen-Loève transform (GKL) in signal processing. Each technique requires its own theory and study, and new techniques are constantly being introduced for emerging purposes. In this work, we introduce Encumbered Principal Component Analysis (EPCA), in which we seek a low-rank linear approximation of given data where the consideration is not only "faithfulness" to the original data but also "relevance" toward predicting other data correlated with the original. We study this new dimensionality reduction technique analytically, comparing the representation it affords qualitatively with standard PCA to glean insights into regularized lower-dimensional representations that afford generalization (or learning, in a sense). Our main tools for this purpose are matrix analysis, perturbation theory, and linear algebra. Following the study of our new concept, we examine its possible benefits in the fields of Machine Learning and Artificial Intelligence.
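The abstract does not spell out the EPCA objective, but the PCA baseline it builds on is standard: the best rank-k approximation of (centered) data in Frobenius norm is given by the truncated SVD (the Eckart–Young theorem). A minimal sketch, with randomly generated data standing in for a real dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))  # toy data: 100 samples, 10 features

def pca_low_rank(X, k):
    """Best rank-k approximation of the centered data in Frobenius norm,
    computed via truncated SVD (Eckart-Young)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

X2 = pca_low_rank(X, 2)  # rank-2 reconstruction of the centered data
```

EPCA, as described in the abstract, would augment this purely "faithful" criterion with a "relevance" term tied to predicting correlated side data; its precise formulation is left to the talk.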
Finally, we introduce ReLU Low-Rank Approximation (RELULRA), where we seek an optimal low-rank approximation of data in which the lower-dimensional representation is the argument of a rectified linear unit. We study this problem and its solution qualitatively, as deeply as its computational hardness allows. This part is of prime importance to the theory of deep learning, given its intuitive and clear relation to optimal representations in deep neural networks, an important and unresolved problem.
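One plausible reading of this problem (an assumption on our part; the talk's exact formulation is not given in the abstract) is to fit factors U, V minimizing ||X − ReLU(UV)||², so the low-rank product is passed through the rectifier. The problem is non-convex, so a sketch would use local search, e.g. plain gradient descent:

```python
import numpy as np

rng = np.random.default_rng(1)
# Nonnegative toy data, consistent with the range of a ReLU output.
X = np.maximum(rng.standard_normal((50, 20)), 0)
k, lr = 3, 0.01
U = rng.standard_normal((50, k)) * 0.1
V = rng.standard_normal((k, 20)) * 0.1

def loss(U, V):
    return np.sum((X - np.maximum(U @ V, 0)) ** 2)

l0 = loss(U, V)
for _ in range(500):
    M = U @ V
    # Gradient of 0.5*||X - ReLU(M)||^2 w.r.t. M; ReLU zeroes it where M <= 0.
    R = (np.maximum(M, 0) - X) * (M > 0)
    gU, gV = R @ V.T, U.T @ R
    U -= lr * gU
    V -= lr * gV
```

This only finds a local optimum, which is consistent with the abstract's remark that the problem's computational hardness limits how far it can be studied beyond qualitative analysis.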
Biography: * M.Sc. student under the supervision of Professor Ron Meir.

Date: Wed 23 May 2018

Start Time: 11:00

430 | Electrical Eng. Building