# An iterative linear algebra perspective on compressed sensing and matrix completion

The event takes place on

**Friday, Apr 20th 2018** at

**11.00**

Theme/s:

**Statistics**

Location of Event:

**Alan Turing Building, Room 306**

This event is a:

**Public Seminar**

Abstract: Compressed sensing and matrix completion are techniques by which simplicity in data can be exploited to allow for sensing at the information rate. Compressed sensing is typically cast as seeking to measure an unknown vector that has few non-zeros, but in unknown locations, and to make the measurements through the action of a matrix.
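The measurement model described above can be sketched concretely as follows (a minimal illustration; the dimensions and the choice of a Gaussian measurement matrix are illustrative assumptions, not taken from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 1000, 10, 120        # ambient dimension, sparsity, number of measurements

# An unknown vector with only k non-zeros, in unknown (random) locations.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# Measurements made through the action of a matrix: y = A x,
# with far fewer rows (m) than the ambient dimension (n).
A = rng.standard_normal((m, n))
y = A @ x
```

Here `y` holds m = 120 numbers, yet under suitable conditions it determines the 1000-dimensional sparse vector `x`.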

Compressed sensing establishes that when the vector to be measured has length n but only k << n nonzeros, the measurement matrix need only have order k rows, which is the minimum order possible. Matrix completion poses a similar question, but where the matrix being measured is assumed to be of low rank, and one would like to recover it from only a few of its entries. Both compressed sensing and matrix completion have natural non-convex optimization formulations for the recovery of the vector and matrix respectively; typically the non-convex objectives in these formulations are replaced with their convex envelopes, and well-studied convex optimization algorithms are applied. The convex formulations are also amenable to detailed analysis, yielding quantitatively accurate estimates of the number of measurements needed for recovery by convex optimization. In this talk we consider an alternative perspective on compressed sensing and matrix completion, inspired by iterative linear algebra algorithms. We show that the average-case behaviour of surprisingly simple algorithms that directly attack the non-convex problem can be as good as, or better than, that of the convex approximations.
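One example of such a simple iterative algorithm is iterative hard thresholding, which alternates a gradient step on the least-squares residual with a projection onto k-sparse vectors (a hedged sketch of the general idea, not the specific algorithms of the talk; the problem sizes and step size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 200, 5, 100                          # illustrative sizes, m of order k log(n/k)

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # columns of roughly unit norm
y = A @ x_true

def iht(A, y, k, iters=1000, step=0.15):
    """Iterative hard thresholding: gradient step on ||y - Ax||^2 / 2,
    then keep only the k largest-magnitude entries (the non-convex projection)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))     # gradient step
        small = np.argsort(np.abs(x))[:-k]     # indices of all but the k largest
        x[small] = 0.0                         # hard threshold back to k-sparse
    return x

x_hat = iht(A, y, k)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The iteration never convexifies the problem: each step projects directly onto the non-convex set of k-sparse vectors, yet for random Gaussian measurements with these dimensions it typically recovers `x_true` to high accuracy.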