Constrained Low-Rank Approximation
Koby Hayashi, Georgia Institute of Technology
Low-Rank Approximation (LRA) is one of the oldest approaches for extracting latent structure from data sets. Nonnegative Matrix Factorization (NMF) is a popular LRA method, typically applied to nonnegative matrices. The distinguishing feature of NMF is that it constrains the computed low-rank factors to be nonnegative. Intuitively, this keeps the low-rank factors in the same application space as the original data, which aids interpretability. For example, a matrix might contain count data, probabilities, nonnegative spectra, or pixel values. For large data sets, fast and scalable algorithms are needed. We develop such algorithms for NMF using two paradigms: scaling out and scaling down. When scaling out, one uses more compute nodes for a problem.
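Concretely, for a nonnegative data matrix $A \in \mathbb{R}^{m \times n}_{+}$ and a target rank $k \ll \min(m, n)$, NMF is commonly posed as the optimization problem (the standard formulation, stated here for context):

$$\min_{W \geq 0,\; H \geq 0} \; \lVert A - WH \rVert_F^2, \qquad W \in \mathbb{R}^{m \times k},\; H \in \mathbb{R}^{k \times n},$$

where the inequalities are entrywise.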
In this vein we develop a framework, Parallel Low-rank Approximation with Nonnegativity Constraints (PLANC), that implements several parallel algorithms for computing NMF and its variants, including sparse NMF, Nonnegative Tensor Factorization (NTF, also known as NNCP), Symmetric NMF, and Joint NMF. When scaling down, we reduce the problem size. Our approach here uses recent methods from Randomized Numerical Linear Algebra. We develop two robust randomized methods for NMF: one uses random sketches to compress the input data, and the other uses sketches to approximately solve the least squares subproblems. We benchmark our methods on synthetic and real-world data sets, including neuroimaging data and sparse graphs, achieving at least a 2x improvement in speed while maintaining, and sometimes improving, solution quality.
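As a minimal sketch of the scaling-down idea (an illustration under generic assumptions, not the PLANC implementation; the function name and the projection step here are hypothetical), the alternating least squares update for H can be compressed with a Gaussian sketch before solving:

    import numpy as np

    def sketched_nnls_update(W, A, s, rng):
        """One sketched update of H in min ||A - W H||_F^2, H >= 0.

        Illustrative only: a Gaussian sketch S compresses the m rows
        of W and A down to s rows, so the least squares solve is
        cheaper. Nonnegativity is enforced by simple clipping here, a
        heuristic; practical NMF solvers use methods such as block
        principal pivoting or HALS instead.
        """
        m = A.shape[0]
        S = rng.standard_normal((s, m)) / np.sqrt(s)  # Gaussian sketch
        SW, SA = S @ W, S @ A                         # compressed problem
        H, *_ = np.linalg.lstsq(SW, SA, rcond=None)   # unconstrained solve
        return np.maximum(H, 0.0)                     # project onto H >= 0

    rng = np.random.default_rng(0)
    m, n, k = 2000, 300, 10
    A = np.abs(rng.standard_normal((m, n)))  # nonnegative data
    W = np.abs(rng.standard_normal((m, k)))
    H = sketched_nnls_update(W, A, s=8 * k, rng=rng)
    print(H.shape, H.min())

The sketch makes the dominant cost of each solve scale with the sketch size s rather than with m, which is where the speedup on large data sets comes from.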