Ph.D. Theses

Topics in Matrix Sampling Algorithms

By Christos Boutsidis
Advisor: Petros Drineas
April 22, 2011

We study three fundamental problems in Linear Algebra and Machine Learning, namely:

  1. Low-rank Column-based Matrix Approximation,
  2. Coreset Construction in Least-Squares Regression, and
  3. Feature Selection in k-means Clustering.

A high-level description of these problems is as follows: given a matrix A and an integer r, what are the r most "important" columns (or rows) in A? A more detailed description of each problem follows.

  1. Low-rank Column-based Matrix Approximation. We are given a matrix A and a target rank k. The goal is to select a subset of columns of A and, by using only these columns, compute a rank-k approximation to A that is as good as the rank-k approximation that would have been obtained by using all the columns (see the first sketch after this list).
  2. Coreset Construction in Least-Squares Regression. We are given a matrix A and a vector b. Consider the (over-constrained) least-squares problem of minimizing ||Ax - b|| over all vectors x in a domain D; the domain D represents the constraints on the solution and can be arbitrary. The goal is to select a subset of the rows of A and b and, by using only these rows, find a solution vector that is as good as the solution vector that would have been obtained by using all the rows (see the second sketch after this list).
  3. Feature Selection in k-means Clustering. We are given a set of points described with respect to a large number of features. The goal is to select a subset of the features and, by using only this subset, obtain a k-partition of the points that is as good as the partition that would have been obtained by using all the features (see the third sketch after this list).
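
As an illustration of problem 1 (not an algorithm from the thesis), the following minimal numpy sketch compares the best rank-k approximation of A, computed from all columns via the SVD, with a rank-k approximation built only from a small column subset. The uniformly random column choice is merely a placeholder for the sampling strategies studied here, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, c = 60, 40, 5, 10      # matrix size, target rank, number of columns kept

# A nearly rank-k matrix plus a little noise.
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n)) \
    + 0.1 * rng.standard_normal((m, n))

# Best rank-k approximation from the SVD (uses all columns).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

# Keep c columns (uniform random choice here, a placeholder strategy),
# project A onto their span, and truncate that projection to rank k.
cols = rng.choice(n, size=c, replace=False)
C = A[:, cols]
P = C @ np.linalg.pinv(C) @ A                     # projection of A onto span(C)
Up, sp, Vpt = np.linalg.svd(P, full_matrices=False)
P_k = (Up[:, :k] * sp[:k]) @ Vpt[:k, :]           # rank-k approximation within span(C)

print("best rank-k error (all columns):", np.linalg.norm(A - A_k, "fro"))
print("column-based rank-k error      :", np.linalg.norm(A - P_k, "fro"))
```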
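
A similar sketch for problem 2, restricted to the unconstrained case D = R^d: solve the least-squares problem using only a sampled subset of the rows of A and b, and compare its residual with that of the full solution. Uniform row sampling again stands in for the coreset constructions developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, r = 2000, 10, 80           # rows, columns, rows kept in the "coreset"

A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)

# Full least-squares solution, using all n rows.
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)

# Solve using only a subset of the rows (uniform sampling is a placeholder;
# the thesis studies how to choose and weight such rows).
rows = rng.choice(n, size=r, replace=False)
x_sub, *_ = np.linalg.lstsq(A[rows], b[rows], rcond=None)

print("residual with all rows:", np.linalg.norm(A @ x_full - b))
print("residual with r rows  :", np.linalg.norm(A @ x_sub - b))
```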
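
Finally, a sketch for problem 3: cluster the points using only a subset of the features, then measure the cost of the resulting k-partition on all features and compare it with the partition obtained from all features. The plain Lloyd's iteration and the random feature subset are placeholders, not the methods proposed in the thesis.

```python
import numpy as np

def kmeans_labels(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns the cluster label of each point."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

def partition_cost(X, labels, k):
    """k-means objective of a given partition, measured on all features."""
    return sum(((X[labels == j] - X[labels == j].mean(0)) ** 2).sum()
               for j in range(k) if np.any(labels == j))

rng = np.random.default_rng(2)
n, d, k, f = 300, 50, 3, 10      # points, features, clusters, features kept

# Points with cluster structure spread across the d features.
X = np.repeat(rng.standard_normal((k, d)) * 3.0, n // k, axis=0) \
    + rng.standard_normal((n, d))

feats = rng.choice(d, size=f, replace=False)   # placeholder feature choice
labels_all = kmeans_labels(X, k)
labels_sub = kmeans_labels(X[:, feats], k)

print("cost of partition from all features   :", partition_cost(X, labels_all, k))
print("cost of partition from feature subset :", partition_cost(X, labels_sub, k))
```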

We present novel algorithms for all three problems mentioned above. Our results can be viewed as follow-up research to a line of work known as "Matrix Sampling Algorithms." [Frieze, Kannan, Vempala, 1998] presented the first such algorithm, for the Low-rank Matrix Approximation problem. Since then, such algorithms have been developed for several other problems, e.g., Regression, Graph Sparsification, and Linear Equation Solving. Our contributions to this line of research are:

  1. improved algorithms for Low-rank Matrix Approximation and Regression, and
  2. algorithms for a new problem domain (k-means Clustering).
