

# ELEN E6886 Course Notes

## Motivation

There is a huge amount of data being generated on the internet, in scientific experiments, and elsewhere. We may need to sample and compress this data to represent it, and we may want to predict, label, or cluster/classify new data. Most of this data is very high dimensional - images contain over a million pixels, videos over a billion voxels. We may also be dealing with very large datasets. Classically, we have $n \rightarrow \infty$ samples in $p$ (fixed) dimensions, and there are a great number of tools to help decide whether the predictions we make are reliable; experiments could also be designed to produce reliable and complete data. Today, we often have only about as many samples as dimensions (and sometimes fewer samples than dimensions), and in many cases the data is incomplete or erroneous. When data is so high dimensional, we require many more observations to learn functions.

### Sparse approximations

We can often transform the data to a different basis in which its representation is sparse - most of the coefficients are zero or close to zero. A sparse approximation of a signal $y$ expresses it as a sum of a few elements taken from some dictionary (matrix) $\psi$: $y \approx \psi x$, where most of the entries of $x$ are zero. How well this works depends on the choice of dictionary - if it's a good one, most coefficients will be close to zero.
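A minimal numpy sketch of this idea (the dictionary, signal length, and sparsity level here are illustrative assumptions, not anything prescribed by the notes): a signal built from a few atoms of an orthonormal dictionary has a coefficient vector that is mostly zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# An orthonormal dictionary: the columns of psi form a basis of R^64.
psi, _ = np.linalg.qr(rng.standard_normal((64, 64)))

# A sparse coefficient vector x: only 5 of its 64 entries are nonzero.
x = np.zeros(64)
x[rng.choice(64, size=5, replace=False)] = rng.standard_normal(5)

y = psi @ x          # the observed signal y = psi x (dense in general)
x_rec = psi.T @ y    # psi is orthonormal, so psi^T recovers the coefficients

print(np.count_nonzero(np.abs(x_rec) > 1e-10))  # 5 significant coefficients
```

Here recovery is trivial because the dictionary is square and orthonormal; the interesting case below is when we only see a few linear measurements of $y$.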

## Linear combination representation

We will often represent a vector as a linear combination of the columns of a matrix, $y = Ax = \sum_i x_i A_i$, where $y \in \Re^m$, $x \in \Re^n$, $n > m$, and $A \in \Re^{m \times n}$, so that $\operatorname{dim}(\operatorname{Null}(A)) \ge n - m$. We have fewer observations than unknowns in the system of equations - there are many signals compatible with the observations. We obtain the matrix $A$ from some known transformation or by learning it from a large amount of data. We can also add a noise term $e$ via $y = Ax + e = [A \mid I]\begin{bmatrix} x \\ e \end{bmatrix}$.
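A small numpy illustration of why such a system is underdetermined (the dimensions are arbitrary): adding any null-space vector to a solution produces a different solution with exactly the same observations.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 10                        # fewer observations than unknowns
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true

# A particular solution: the minimum-l2-norm one, via the pseudoinverse.
x_p = np.linalg.pinv(A) @ y

# The last rows of V^T from the SVD span Null(A) when n > m.
_, _, Vt = np.linalg.svd(A)
v_null = Vt[-1]

x_alt = x_p + 3.0 * v_null          # a different vector ...
assert np.allclose(A @ x_p, y)
assert np.allclose(A @ x_alt, y)    # ... producing the same observations
```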

### Recovering vectors

If there's a particular signal $x_0$ you're trying to recover, recovery could be difficult because there are many solutions to the equation. To overcome this, we need to make a structural assumption about $x_0$. We may assume that $\|x_0\|_0 = |\{i \mid x_0(i) \ne 0\}|$ is small - the vector is sparse. We can also require “structured sparsity”, where certain parts of the vector must be sparser than others. Often we will not assume that $x$ itself is sparse, but that $x = \psi z$ with $z$ sparse, so that $y = A\psi z$ where we now have a new unknown $z$. If $x_0$ is sparse, a natural way to solve the system is to choose the maximally sparse solution - the one minimizing $\|x\|_0$ (the $l^0$ “norm”) subject to $y = Ax$ - the problem (P0).

#### Norms

A function is a norm if it is nonnegative homogeneous ($\forall \alpha, \|\alpha x\| = |\alpha|\|x\|$), positive definite ($\forall x, \|x\| \ge 0$, with $\|x\| = 0$ only when $x = 0$), and satisfies the triangle inequality ($\|x+y\| \le \|x\| + \|y\|$). Some common norms are the Euclidean norm $\|x\|_2 = \sqrt{\sum_i x_i^2}$, the Manhattan distance $\|x\|_1 = \sum_i |x_i|$, and $\|x\|_{\infty} = \max_i |x_i|$. The unit balls (the set of points of distance at most 1 from the origin) of these norms are, respectively, the circle, the diamond, and the square; the balls of the other $\|x\|_p$ interpolate between these depending on $p$. The $l^0$ as defined above is not nonnegative homogeneous, because multiplying by a scalar does not change the number of nonzero entries, but it is still (abusively) called a norm. It can be obtained as a limit: $\lim_{p\rightarrow 0} \|x\|_p^p = \lim_{p\rightarrow 0} \sum_i{|x_i|^p} = \|x\|_0$.
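A quick numpy check of these norms on a concrete vector, including why the $l^0$ count fails homogeneity (the example vector is arbitrary):

```python
import numpy as np

x = np.array([3.0, 0.0, -4.0, 0.0])

l0 = np.count_nonzero(x)              # l0 "norm": number of nonzero entries
l1 = np.linalg.norm(x, 1)             # Manhattan: 3 + 4 = 7
l2 = np.linalg.norm(x, 2)             # Euclidean: sqrt(9 + 16) = 5
linf = np.linalg.norm(x, np.inf)      # largest absolute entry: 4

# l1, l2, linf are homogeneous; the l0 count is not (so it's not a norm).
assert np.isclose(np.linalg.norm(10 * x, 1), 10 * l1)
assert np.count_nonzero(10 * x) == l0   # scaling doesn't change the count
```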

### Feasibility of finding solutions (spark)

If $y = Ax_0$ and $y = Ax^\prime$ with $\|x^\prime\|_0 \le \|x_0\|_0 = k$, then $A(x_0 - x^\prime) = 0$, so $v = x_0 - x^\prime$ lies in the null space of $A$ and is itself sparse: $\|v\|_0 \le 2k$. To show that such a second solution $x^\prime$ cannot exist, it is sufficient that every nonzero member of $\operatorname{Null}(A)$ has more than $2k$ nonzero entries. We define $\operatorname{spark}(A) = \min\{\|v\|_0 \mid v \ne 0, v \in \operatorname{Null}(A)\}$, the number of nonzero entries in the sparsest null vector of $A$. So, whenever $y = Ax_0$ and $\|x_0\|_0 < \frac{\operatorname{spark}(A)}{2}$, $x_0$ is the unique sparsest solution of $y = Ax$. If the spark is large, every null vector is dense and we can recover any sufficiently sparse solution of $y = Ax$.

#### Examples

1. Say $A \in \Re^{m \times n}$ is chosen at random with $A_{i, j} \sim_{iid} \mathcal{N}(0,1)$; then $\operatorname{spark}(A) = m + 1$ with probability one (the largest possible value). So for generic matrices the spark is pretty big, which makes recovery easier.
2. The matrix formed by appending the identity matrix to the discrete Fourier basis has $\operatorname{spark}$ equal to $2\sqrt{m}$ when $m$ is a perfect square.
3. An identity matrix with an appended column of zeros has a $\operatorname{spark}$ of $1$, since the zero column by itself is linearly dependent.
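The spark can be computed by brute force for tiny matrices; this sketch is exponential in $n$ and purely illustrative (the `spark` helper and test matrices are my own, not from the notes):

```python
import numpy as np
from itertools import combinations

def spark(A):
    """Size of the smallest linearly dependent set of columns of A
    (brute-force search over supports; only feasible for tiny matrices)."""
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            # k columns are linearly dependent iff their rank is below k.
            if np.linalg.matrix_rank(A[:, list(cols)]) < k:
                return k
    return n + 1   # all n columns independent: no nonzero null vector

# Identity plus a zero column: the zero column alone is dependent.
I_bad = np.hstack([np.eye(3), np.zeros((3, 1))])
print(spark(I_bad))                          # 1

# A generic Gaussian 3 x 5 matrix: spark is m + 1 = 4.
rng = np.random.default_rng(2)
print(spark(rng.standard_normal((3, 5))))    # 4
```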

### P0 is NP-hard

A naive solution to P0 could be found with an exhaustive search over supports, which takes exponential time. In fact, P0 is an NP-hard computational problem - it can't be solved efficiently in the worst case. To prove this, it can be reduced from the exact 3-cover problem: given a set $S = \{1, \ldots, m\}$ and subsets $C_1, \ldots, C_n$ with $|C_j| = 3$, decide whether some subcollection of the $C_j$ contains each element of $S$ exactly once. If we construct $A_{i,j} = 1$ when $i \in C_j$ and $0$ elsewhere, and set $y = 1_m$ (the $m$-dimensional vector of ones), then $y = Ax$ has a solution $x$ with $\|x\|_0 \le \frac{m}{3}$ if and only if there exists an exact 3-cover. In one direction, if there exists an exact 3-cover $C^\prime$, then $|C^\prime| = m/3$ because there is one chosen subset for every three of the $m$ elements of $S$. Creating $x$ with $x_j = 1$ when $C_j \in C^\prime$ and $0$ otherwise, we have $Ax = y$ because $x$ “selects” exactly the columns whose ones tile $S$, summing to the vector of ones. In the other direction, assume there is a solution $x_0$ with $\|x_0\|_0 \le m/3$ and set $C^\prime = \{C_j \mid x_0(j) \ne 0\}$ (again “selecting” subsets according to the nonzero entries of $x_0$). Let $I$ be the support of $x_0$. Each column of $A$ has three nonzero entries (corresponding to the three elements of $C_j$), and $A_I$ has at most $m/3$ columns (corresponding to the nonzero entries of $x_0$), so $A_I$ has at most $m$ nonzero entries. Since $A_Ix_0(I) = y$ (omitting the zero entries of $x_0$ does not change the product), each of the $m$ rows of $A_I$ must have at least one nonzero entry; with at most $m$ nonzero entries in total, each row has exactly one. This indicates that $C^\prime$ covers $S$ exactly: each element of $S$ appears in exactly one chosen subset.
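A toy instance of this reduction (the sets are chosen for illustration): $S = \{1,\ldots,6\}$ with three subsets, of which the first two form an exact 3-cover.

```python
import numpy as np

S = [1, 2, 3, 4, 5, 6]
subsets = [{1, 2, 3}, {4, 5, 6}, {2, 3, 4}]   # C1, C2 form an exact cover

m, n = len(S), len(subsets)
A = np.zeros((m, n))
for j, C in enumerate(subsets):
    for i in C:
        A[i - 1, j] = 1.0      # A[i,j] = 1 exactly when element i is in C_j

y = np.ones(m)

# Indicator of the exact cover {C1, C2}: a solution with m/3 = 2 nonzeros.
x = np.array([1.0, 1.0, 0.0])
assert np.allclose(A @ x, y)
assert np.count_nonzero(x) == m // 3
```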

### Convex functions

A set $C$ is convex if $\forall x, y \in C, \alpha \in [0, 1], \alpha x + (1- \alpha)y \in C$. For a function $f: D \rightarrow \Re$ defined on a convex set $D \subseteq \Re^n$, $f$ is convex if it's “bowl-shaped”: $\forall x, y \in D, \alpha \in [0, 1], f(\alpha x + (1-\alpha)y) \le \alpha f(x) + (1 - \alpha)f(y)$. Note that we are essentially plugging in the linear interpolation of $x$ and $y$ and checking that the result lies below the linear interpolation of $f(x)$ and $f(y)$. We can alternatively say that the “epigraph” (the set of all points above the function, $\operatorname{epi}(f) = \{(x, t) \mid x \in D, f(x) \le t\}$) is a convex set. Graphically, if the line segment between any two points on the graph of a function lies above the function, the function is convex; if the segment between any two points in a set lies within the set, the set is convex. When optimizing a function that is not convex, there's no way of knowing whether a local minimum you have found is global; for a convex function, every local minimum is a global minimum. We say an optimization problem $\min f(x) \mid x \in C$ is convex if $f$ is a convex function and $C$ is a convex set, and such a problem can be solved globally.

### Relaxing P0

We want to find a function which has many of the same properties as the $l^0$ norm except that it's convex, so that it can be minimized globally - a “convex surrogate”. Given some index set $B$ and a collection of convex functions $\{f_\beta(x) \mid \beta \in B\}$, the pointwise supremum $g(x) = \sup_\beta f_\beta (x)$ is convex. Define the convex envelope of a function $f(x)$ over a set $D$ as $g(x) = \sup\{h(x) \mid h \text{ convex}, h(y) \le f(y)\ \forall y \in D\}$: you look at all the convex functions which lie below $f$ and take their supremum - the “best convex underestimator” of $f$, the largest convex function which is never larger than $f$. In one dimension, $|x|$ is the convex envelope of $\|x\|_0$ over $[-1, 1]$. More generally, the convex envelope of $f(x) = \|x\|_0$ over $B_\infty = \{x \mid \|x\|_\infty \le 1\}$ is $g(x) = \|x\|_1$, which we can prove by induction. This motivates using $\|x\|_1$ as a surrogate for the $l^0$ norm: given the affine solution space, we look for the member of the space whose $l^1$ norm is smallest. Minimizing the $l^1$ norm can be pictured as expanding the $l^1$ unit ball until it first touches the affine solution space. The $l^1$ norm is the convex norm which is “closest” to $l^0$; if we used $l^2$ instead, it's possible that a non-sparse solution would be chosen. In higher dimensions, it's possible that the $l^1$ ball first touches a point which is not sparse - we always want the touching point to lie on a coordinate axis. As the dimension grows, the $l^1$ unit ball becomes relatively “thinner”, so this becomes less of a problem.
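A two-dimensional sketch of this geometry (the single-equation system and the search grid are illustrative assumptions): along the solution line of one equation in two unknowns, the minimum-$l^2$ point is dense while the minimum-$l^1$ point lands on a coordinate axis.

```python
import numpy as np

# One equation, two unknowns: x1 + 2*x2 = 2.  Solutions form a line.
A = np.array([[1.0, 2.0]])
y = np.array([2.0])

# Minimum-l2 solution (pseudoinverse): dense, both entries nonzero.
x_l2 = np.linalg.pinv(A) @ y                # = [0.4, 0.8]

# Minimum-l1 solution, found by scanning the solution line x = (2 - 2t, t).
ts = np.linspace(-2, 3, 5001)
sols = np.stack([2 - 2 * ts, ts], axis=1)
x_l1 = sols[np.argmin(np.abs(sols).sum(axis=1))]   # = [0, 1]: sparse!

assert np.allclose(A @ x_l1, y)
```

The $l^1$ ball expanding from the origin first touches this line at the vertex $(0, 1)$, whereas the $l^2$ ball touches it at the dense point $(0.4, 0.8)$.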

### Recovering the sparsest solution

Under certain circumstances, we can recover the sparsest solution $x_0$ to $y = Ax$ by minimizing $\|x\|_1$. Intuitively, it may be easier to recover the sparsest solution if the columns of $A \in \Re^{m \times n}$ are “far apart” in the high dimensional space $\Re^m$. We can quantify the “closeness” of the columns as the coherence, $\mu(A) = \max_{i \ne j} \frac{|\langle A_i, A_j\rangle|}{\|A_i\|_2\|A_j\|_2}$.
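Coherence is straightforward to compute directly from the definition (the Gaussian test matrix below is a hypothetical example):

```python
import numpy as np

def coherence(A):
    """Largest absolute inner product between distinct normalized columns."""
    An = A / np.linalg.norm(A, axis=0)      # normalize each column
    G = np.abs(An.T @ An)                   # |inner products| (Gram matrix)
    np.fill_diagonal(G, 0.0)                # ignore the i = j terms
    return G.max()

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 100))
mu = coherence(A)
assert 0.0 < mu < 1.0    # distinct random columns are nearly orthogonal
```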

#### Linear Algebra definitions

A symmetric matrix $M = M^\top \in \Re^{k \times k}$ is positive semidefinite ($M \succeq 0$) if $\forall x \in \Re^k$, $x^*Mx \ge 0$. When a matrix is positive semidefinite, we can find $k$ eigenvalues $\lambda_1(M) \ge \cdots \ge \lambda_k(M) \ge 0$ and a corresponding orthonormal basis of eigenvectors $U = [u_1 \mid \cdots \mid u_k]$ such that $U^*U = I$, $Mu_j = \lambda_ju_j$, and $M = U\Lambda U^*$, where $\Lambda$ is the $k \times k$ matrix with the eigenvalues down its diagonal and zeros elsewhere. The $l^2$ operator norm is $\|M\| = \sup_{\|x\|_2 = 1}\|Mx\|_2$. When $M \succeq 0$, $\|M\| = \lambda_1(M)$, the largest eigenvalue. If $M \succeq 0$ and $\lambda_k(M) > 0$, then $M$ is invertible, the eigenvalues of $M^{-1}$ are the reciprocals of the eigenvalues of $M$, and $\|M^{-1}\| = 1/\lambda_k(M)$. When $M$ is diagonal, the eigenvalues are just the diagonal elements (in sorted order), since multiplication by a diagonal matrix scales each coordinate independently.

##### Gershgorin Disc Theorem

When $M$ has mostly small terms off the diagonal, the eigenvalues are close to the terms on the diagonal: for each eigenvalue $\lambda$ there is some index $i$ with $|M_{ii} - \lambda| \le \sum_{j \ne i} |M_{ij}|$. To see this, let $x$ be an eigenvector for $\lambda$, so $Mx = \lambda x$. If $i$ is the index of the entry of $x$ with largest magnitude, we have $\sum_j M_{ij}x_j = \lambda x_i$. Moving the $M_{ii}x_i$ term to the other side gives $(\lambda - M_{ii})x_i = \sum_{j \ne i} M_{ij}x_j$. Taking absolute values, $|M_{ii} - \lambda| = |\sum_{j \ne i}M_{ij}x_j|/|x_i| \le \sum_{j \ne i}|M_{ij}||x_j|/|x_i| \le \sum_{j \ne i} |M_{ij}|$, because $|x_j| \le |x_i|$ so $|x_j|/|x_i| \le 1$.
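A numerical check of the disc theorem on a small symmetric matrix with weak off-diagonal terms (the entries are chosen arbitrarily; the theorem holds for any square matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
# A symmetric matrix with a dominant diagonal and small off-diagonal terms.
M = np.diag([5.0, 3.0, 1.0]) + 0.05 * rng.standard_normal((3, 3))
M = (M + M.T) / 2

eigvals = np.linalg.eigvalsh(M)
radii = np.abs(M).sum(axis=1) - np.abs(np.diag(M))   # Gershgorin radii

# Every eigenvalue lies in at least one disc |M_ii - lam| <= radius_i.
for lam in eigvals:
    assert any(abs(M[i, i] - lam) <= radii[i] + 1e-12 for i in range(3))
```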

### Constraints on the Spark

This theorem can help provide a bound on the $\operatorname{spark}$ of a matrix (which can help decide if a given solution to $y = Ax$ is the unique sparsest solution). Specifically, we have $\operatorname{spark}(A) \ge 1 + 1/\mu(A)$. Assume the columns of $A$ are normalized such that $\|A_i\|_2 = 1$; then $\mu(A) = \max_{i \ne j}|\langle A_i, A_j \rangle|$ because the denominator in the definition of $\mu(A)$ is 1. Let $k$ be an integer with $k < 1 + 1/\mu(A)$ and let $I$ be any subset of column indices with $|I| = k$. The columns of $A_I$ are linearly independent if and only if $M = A_I^*A_I$ is invertible, i.e. $\lambda_k(M) > 0$. $M$ is positive semidefinite (it is a Gram matrix: $x^*Mx = \|A_Ix\|_2^2 \ge 0$) and its entries are inner products of columns of $A$, so the diagonal entries are $1$ and the off-diagonal entries are bounded in magnitude by $\mu(A)$, since $\mu(A)$ is the largest inner product between distinct columns of $A$. By the Gershgorin Disc Theorem we have $|1 - \lambda_k(M)| \le \sum_{j\ne i}|M_{ij}|$ for some $i$ (replacing $M_{ii}$ with 1). The summation on the right has $k-1$ terms, each at most $\mu(A)$, so $|1-\lambda_k(M)| \le (k-1)\mu(A)$, i.e. $-(k-1)\mu(A) \le 1 - \lambda_k(M) \le (k-1)\mu(A)$; rearranging the right inequality gives $\lambda_k(M) \ge 1 - (k-1)\mu(A)$, which is positive exactly when $k < 1 + 1/\mu(A)$. Hence the columns of $A_I$ are linearly independent for any $I$ of size $k < 1 + 1/\mu(A)$, which sets the claimed bound on the spark. This gives us the guarantee that whenever $x_0$ has fewer than $\frac{1}{2}(1 + 1/\mu(A))$ nonzero entries, it's the unique sparsest solution (because $\operatorname{spark}(A)/2 \ge \frac{1}{2}(1 + 1/\mu(A))$, so if $\|x_0\|_0 < \frac{1}{2}(1 + 1/\mu(A))$ it's also smaller than $\operatorname{spark}(A)/2$).

### Finding the optimal solution to P1

Given a convex function $f : \Re^n \rightarrow \Re$, $x_0$ is an optimal solution to $\min f(x) \mid y = Ax$ if and only if there exists a $\nu$ such that $\nabla L(x_0, \nu) = 0$, where $L(x, \nu) = f(x) + \langle \nu, y - Ax \rangle$ is the Lagrangian. Requiring $\nabla L(x_0, \nu) = 0$ is equivalent to requiring that $A^*\nu = \nabla f(x_0)$ - we need to find a $\nu$ which satisfies this equation to guarantee that $x_0$ is optimal. But $\|x\|_1$ is not differentiable, so we need to generalize the notion of the gradient $\nabla$. Any differentiable function can be linearly approximated near $x_0$ as $f(x) \approx f(x_0) + \langle \nabla f(x_0), x - x_0\rangle$; when $f$ is convex, this approximation serves as a global lower bound: $f(x) \ge f(x_0) + \langle \nabla f(x_0), x-x_0 \rangle\ \forall x, x_0$. We generalize via the “subgradient”: $\gamma \in \Re^n$ is a subgradient of $f$ at $x_0$ if $f(x) \ge f(x_0) + \langle \gamma, x - x_0 \rangle\ \forall x$, and the set of subgradients forms the subdifferential $\delta f(x_0) = \{\gamma \mid f(x) \ge f(x_0) + \langle \gamma, x - x_0 \rangle\ \forall x\}$. When a function is differentiable at $x_0$, $\delta f(x_0) = \{\nabla f(x_0)\}$; otherwise, the subdifferential may contain many points. For $f(x) = \|x\|_1$ on $\Re^n$, $\delta f(x)$ consists of the $\gamma$ with $\gamma_i = 1$ when $x_i > 0$, $\gamma_i = -1$ when $x_i < 0$, and $\gamma_i \in [-1, 1]$ when $x_i = 0$ (in general, if a function $f : \Re^n \rightarrow \Re$ is a sum of coordinate-wise functions $f_i : \Re \rightarrow \Re$, then $\delta f(x) = \{\gamma \mid \gamma_i \in \delta f_i(x_i) \}$). In P1, $x_0$ will be optimal if $\exists \nu \mid A^*\nu \in \delta f(x_0)$: given another solution $x^\prime$, $Ax^\prime = Ax_0 = y \rightarrow A(x^\prime - x_0) = 0$, and plugging $x^\prime$ into the definition of the subgradient $\gamma = A^*\nu$ gives $\|x^\prime\|_1 \ge \|x_0\|_1 + \langle A^*\nu, x^\prime - x_0 \rangle = \|x_0\|_1 + \langle \nu, A(x^\prime - x_0)\rangle = \|x_0\|_1$, so $x^\prime$ is not “more optimal” than $x_0$. In addition, it can be shown that if $x$ is a solution to P1 with support (set of nonzero entries) $I$, $A_I$ has full column rank, and $\exists \nu$ with $A_i^*\nu = \operatorname{sign}(x_i)$ for $i \in I$ and $|A_i^*\nu| < 1$ for $i \in I^C$, then $x$ is the unique optimal solution. This implies that if $y = Ax_0$ with $\|x_0\|_0 < \frac{1}{2}(1 + 1/\mu(A))$, then $x_0$ is the unique optimal solution to P1.

### Better bounds

In general, we can do better than the result above in guaranteeing that a sufficiently sparse $x_0$ is the unique solution to P1, based on a further requirement on $A$: the Restricted Isometry Property (RIP), which is satisfied with order $r$ and constant $\delta > 0$ if $\forall z$ with $\|z\|_0 \le r$, $(1 - \delta)\|z\|_2^2 \le \|Az\|_2^2 \le (1 + \delta)\|z\|_2^2$. In other words, for sparse ($\|z\|_0 \le r$) vectors, multiplying by $A$ does not change the $l^2$ norm a great deal (within a factor of $1 \pm \delta$). A linear map $L : V \rightarrow W$ is an isometry if $\forall v \in V, \|Lv\|_W = \|v\|_V$ ($\|\cdot\|_X$ is the norm for vector space $X$); the RIP “restricts” isometry to sparse vectors and is only an approximation (the $\delta$ factor) of true isometry. It must be approximate because $A$ has vectors in its null space, for which $\|Az\|_2^2 = 0$ while $\|z\|_2^2 \ne 0$. For a given $r$, we define $\delta_r(A)$ to be the smallest $\delta$ that satisfies the RIP for $A$. Coherence implies that a matrix has RIP of order $2$ with constant $\delta = \mu(A)$ - every column submatrix of size 2 is well-conditioned, since coherence compares two columns at a time. The RIP of order $k$ ensures that every submatrix of $k$ columns is well-conditioned.

If $\delta_r(A) < 1$, every $r$-column submatrix of $A$ has full rank, so $\operatorname{spark}(A) > r$ because no $r$ columns of $A$ are linearly dependent. The RIP demands that the submatrices be full rank AND well-conditioned (a small change in $y$ is reflected as a small change in $x$). If $\delta_{2k}(A) < 1$ and $y = Ax_0$ with $\|x_0\|_0 \le k$, then $x_0$ is the unique solution to P0 - an alternate solution would violate the RIP. For P1, we have that if $\delta_{2k}(A) < \delta_*$, then whenever $y = Ax_0$ with $\|x_0\|_0 \le k$, $x_0$ is the unique optimal solution to P1. $\delta_*$ is a constant whose exact value is unknown, estimated to be at least 0.46. If $\delta_{2k}$ for some $k$ is less than this quantity (implying a certain niceness of $A$), we can guarantee a unique optimal solution. Unfortunately, the RIP is not easy to check for a given matrix $A$; it is really only useful when we can construct a random $A$ ourselves with properties that guarantee the RIP with very high probability.

### Non-sparse solutions and noisy observations

$x_0$ may not be perfectly sparse; instead, some terms may just be close to zero. In addition, we will always have some noise, so that $y = Ax_0 + z$ where $z$ is the noise term and $\|z\|_2 \le \epsilon$. This modifies P1 to minimizing $\|x\|_1$ subject to $\|Ax - y\|_2^2 \le \epsilon^2$, which relaxes the equality constraint according to the amount of noise assumed to be present. This is equivalent to minimizing $\lambda\|x\|_1 + \frac{1}{2}\|Ax - y\|_2^2$ for some (unknown) calibrated correspondence $\epsilon \leftrightarrow \lambda$. In the presence of noise and inexact sparsity, the guarantees get modified a little bit. If $y = Ax_0 + z$ with $\|z\|_2 \le \epsilon$ and $\delta_{2k}(A) \le \delta_*$, denote by $x_k$ the best $k$-sparse approximation of $x$ (keeping only the $k$ largest-magnitude entries); if $\hat{x}$ is the solution to the modified P1 described above, then $\|\hat{x} - x_0\|_2 \le \frac{C}{\sqrt{k}}\|x_0 - x_{0,k}\|_1 + C^\prime \epsilon$, where $C$ and $C^\prime$ are constants. In other words, the error in estimating $x_0$ is quantified by how nearly sparse $x_0$ is and how big the noise level $\epsilon$ is. When the noise term $z$ is random, we can achieve better bounds.

### Satisfying the RIP

Verifying that a given matrix satisfies the RIP is very difficult, but if the entries of the matrix are drawn iid from certain distributions, it can be argued that the matrix satisfies the RIP with high probability.

#### Example: Normal distribution

Say $A$ is a matrix whose entries are iid draws from $\mathcal{N}(0,1/m)$; then each column has an $l^2$ norm close to 1. $A$ has RIP of order $k$ with constant $\delta$ if for every subset $I$ of size $k$, $\sqrt{1 - \delta} \le \sigma_{\min}(A_I) \le \sigma_{\max}(A_I) \le \sqrt{1 + \delta}$, so we need to study the singular values of Gaussian random matrices. If $Q \in \Re^{p \times q}$ has iid $\mathcal{N}(0, 1)$ entries with $p > q$, then $\sqrt{p} - \sqrt{q} \le \mathbb{E}[\sigma_{\min}(Q)] \le \mathbb{E}[\sigma_{\max}(Q)] \le \sqrt{p} + \sqrt{q}$. So if $p$ is a lot larger than $q$, the singular values are tightly constrained and $Q$ is very well conditioned (its columns are nearly orthogonal). The $m \times k$ matrix $A_I$ has this property after the $1/\sqrt{m}$ rescaling implied by the $\mathcal{N}(0, 1/m)$ entries: $1 - \sqrt{k/m} \le \mathbb{E}[\sigma_{\min}(A_I)] \le \mathbb{E}[\sigma_{\max}(A_I)] \le 1 + \sqrt{k/m}$. If $k$ is much smaller than $m$ this is exactly what the RIP needs, but we must consider all column submatrices, and we must see how the actual singular values relate to their expectations. To determine how close the singular values are to their expectations, we need their concentration of measure. We will see that for any $\delta > 0$ there exist $C(\delta), c(\delta) > 0$ such that if $m > C(\delta)k\log(ne/k)$, then $A$ has RIP of order $k$ with constant $\delta$ with probability at least $1 - 2e^{-c(\delta)m}$ - a very high probability!
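These expectation bounds are easy to check by simulation (the dimensions are arbitrary, and the asserted ranges are deliberately loose to allow for random fluctuation):

```python
import numpy as np

rng = np.random.default_rng(5)
p, q = 2000, 50                    # p much larger than q
Q = rng.standard_normal((p, q))
s = np.linalg.svd(Q, compute_uv=False)

# Expected range: sqrt(p) -/+ sqrt(q), with slack for random fluctuation.
lo, hi = np.sqrt(p) - np.sqrt(q), np.sqrt(p) + np.sqrt(q)
assert lo - 5 < s[-1] <= s[0] < hi + 5
assert s[0] / s[-1] < 2            # well conditioned: small condition number

# Entries ~ N(0, 1/p) is the same as dividing by sqrt(p): values near 1.
s_scaled = s / np.sqrt(p)
assert 0.7 < s_scaled[-1] <= s_scaled[0] < 1.3
```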

##### Lipschitz

A function is 1-Lipschitz if $\forall x, y \in \Re^D, |f(x) - f(y)| \le \|x-y\|_2$. If $z$ is a $D$-dimensional vector of iid $\mathcal{N}(0, 1)$ random variables and $f:\Re^D \rightarrow \Re$ is 1-Lipschitz, then $\mathbb{P}[f(z) > \mathbb{E}[f(z)] + t] \le e^{-t^2/2}$ and $\mathbb{P}[f(z) < \mathbb{E}[f(z)] - t] \le e^{-t^2/2}$. In other words, a 1-Lipschitz function of a Gaussian vector does not deviate much from its expectation. We can apply exactly such bounds to the singular values of $A_I$.

##### Expectation of singular values of a Gaussian matrix

Consider the $j$-th singular value of a matrix $Q \in \Re^{m \times k}$ as a function $\Re^{m \times k} \rightarrow \Re$. For any $j$, $|\sigma_j(P) - \sigma_j(Q)| \le \|P - Q\|_F$ - the singular values are 1-Lipschitz functions of the matrix. So if the entries of $Q$ are iid $\mathcal{N}(0, 1)$, then $\mathbb{P}[\sigma_{\min}(Q) < \sqrt{m} - \sqrt{k} - s \text{ or } \sigma_{\max}(Q) > \sqrt{m} + \sqrt{k} + s] \le 2e^{-s^2/2}$. The matrix $A_I$ with iid $\mathcal{N}(0, 1/m)$ entries can be generated as $A_I = (1/\sqrt{m})Q$; setting $s = t\sqrt{m}$ and dividing through by $\sqrt{m}$ gives $\mathbb{P}[\sigma_{\min}(A_I) < 1 - \sqrt{k/m} - t \text{ or } \sigma_{\max}(A_I) > 1 + \sqrt{k/m} + t] \le 2e^{-mt^2/2}$. As long as $m$ is large, the failure probability is small.

##### Dealing with all submatrices

If we sum the failure probabilities over all column submatrices $A_I$ the bound grows to $\binom{n}{k}2e^{\frac{-mt^2}{2}} \le 2e^{\frac{-mt^2}{2} + k\log(\frac{ne}{k}) }$.

##### Submatrix dimensions

The conditions described above are satisfied provided $k < cm/\log(n)$, which guarantees that P1 recovers solutions with about $m/\log(n)$ nonzero entries - better than the $O(\sqrt{m})$ scaling obtained from coherence.

### Johnson-Lindenstrauss

Consider a set of $p$ points $y_1, \ldots, y_p \in \Re^D$. We'd like to construct a mapping $f$ that projects these points to a lower-dimensional space $\Re^d$, with $d \ll D$, such that $\forall i, j, (1-\epsilon)\|y_i - y_j\|_2 \le \|f(y_i)-f(y_j)\|_2 \le (1 + \epsilon)\|y_i-y_j\|_2$ - in other words, the distances between points are roughly preserved. The Johnson-Lindenstrauss lemma shows that such a mapping exists for $d = \Omega(\log(p)/\epsilon^2)$ (here $\Omega(\cdot)$ denotes an asymptotic lower bound: $d$ need only grow proportionally to $\log(p)/\epsilon^2$), that a random linear map achieves it with high probability, and that this dimension is nearly optimal.
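A quick simulation of this (the point count, dimensions, and the 20% tolerance are all illustrative choices): a Gaussian random map scaled by $1/\sqrt{d}$ approximately preserves all pairwise distances.

```python
import numpy as np

rng = np.random.default_rng(6)
p, D, d = 50, 5000, 2000           # p points in R^D projected down to R^d
Y = rng.standard_normal((D, p))

# Random linear map; 1/sqrt(d) scaling preserves squared norms in expectation.
f = rng.standard_normal((d, D)) / np.sqrt(d)
Z = f @ Y

# Largest relative distance distortion over all pairs of points.
worst = 0.0
for i in range(p):
    for j in range(i + 1, p):
        orig = np.linalg.norm(Y[:, i] - Y[:, j])
        proj = np.linalg.norm(Z[:, i] - Z[:, j])
        worst = max(worst, abs(proj - orig) / orig)

assert worst < 0.2   # all pairwise distances preserved to within 20%
```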

## Matrix Rank Minimization

The rank of a matrix is the dimension of its column space or (equivalently) its row space. The columns of a matrix $Y$ lie in a $\operatorname{rank}(Y)$-dimensional linear subspace of the data space, which can be characterized by the matrix's singular value decomposition $Y = U\Sigma V^* = \sum_{i = 1}^r\sigma_i u_i v_i^*$, where $U^*U = I$, $V^*V = I$, and $\Sigma$ is the diagonal matrix of singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r \ge 0$.

### Low-rank approximation

The optimization problem $\min_X \|X - Y\|_F$ subject to $\operatorname{rank}(X) \le r$ is solved by $X_* = \sum_{i = 1}^r \sigma_i u_i v_i^*$ (truncating the singular value decomposition sum of $Y$ to $r$ terms). The same solution applies when the error is measured in the operator norm. The first $r$ singular values and vectors can be found in time $O(mnr)$ - polynomial time. This optimization underlies Principal Component Analysis, which finds a best-fitting low-dimensional subspace. It can also be reformulated as the rank minimization problem $\min \operatorname{rank}(X)$ subject to $\|X-Y\|_F \le \epsilon$ - finding the matrix of minimum rank that is consistent with the observed matrix.
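This Eckart-Young fact is easy to verify numerically (matrix sizes and noise level are arbitrary): truncating the SVD gives a rank-$r$ approximation whose Frobenius error equals the norm of the discarded singular values.

```python
import numpy as np

rng = np.random.default_rng(7)
# A rank-3 matrix plus small noise.
Y = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 30))
Y += 0.01 * rng.standard_normal((40, 30))

U, s, Vt = np.linalg.svd(Y, full_matrices=False)

r = 3
X = (U[:, :r] * s[:r]) @ Vt[:r]    # truncated SVD: best rank-r approximation

assert np.linalg.matrix_rank(X) == r
# The Frobenius error is exactly the norm of the tail singular values.
tail = np.sqrt(np.sum(s[r:] ** 2))
assert np.isclose(np.linalg.norm(Y - X, 'fro'), tail)
```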

### Low-rank modeling

We seek to minimize the rank of $X$: $\min \operatorname{rank}(X)$ subject to $\mathcal{A}[X] = y$, where $y \in \Re^p$ is an observation and $\mathcal{A} : \Re^{m \times n} \rightarrow \Re^p$ is a linear map. When $p < mn$, $\mathcal{A}[X] = y$ is underdetermined. A general linear map $\mathcal{A}$ can be written as $\mathcal{A}[X] = (\langle A_1, X\rangle, \ldots, \langle A_p, X \rangle)$.

#### Connection to P0

Note that $\operatorname{rank}(X) = \|\sigma(X)\|_0$, the spectral norm is $\|X\|_{l^2 \rightarrow l^2} = \sup_{\|w\|_2 \le 1} \|Xw\|_2 = \sigma_1(X) = \|\sigma(X)\|_\infty$, and the Frobenius norm is $\|X\|_F = \|\sigma(X)\|_2$. Just like P0, rank minimization is intractable in the worst case, so we'd like a convex surrogate for $\operatorname{rank}(X)$. $\|\sigma(X)\|_1$ depends on the entries of $X$ in a very complicated way, but as we just saw, applying norms to the singular values gives valid norms on the matrix itself.

#### Unitary invariant norms

A norm $\|\cdot\|_\diamond$ is unitarily invariant if for all matrices $R_1, R_2$ with $R_i^*R_i = R_iR_i^* = I$ and all $X$ we have $\|R_1XR_2\|_\diamond = \|X\|_\diamond$. Any function of $\sigma(X)$ is unitarily invariant because $\sigma(X) = \sigma(R_1 X R_2)$, and any unitarily invariant matrix norm can be written as a function of the singular values:

$\|X\|_\diamond$ is a unitary invariant matrix norm iff $\|X\|_\diamond = f(\sigma(X))$ for some “symmetric gauge function” $f : \Re^n \rightarrow \Re$ where $f$ is a norm on vectors, $f$ is permutation invariant, and $f$ is invariant under sign change.

#### The Nuclear norm

Define the nuclear norm as $\|X\|_* = \|\sigma(X)\|_1 = \sum_i\sigma_i(X)$. The nuclear norm is a convex function and may serve as a convex surrogate for $\operatorname{rank}(X)$: it is the convex envelope of the rank over the spectral-norm ball $B = \{X \mid \|X\|\le 1\}$ of matrices with operator norm at most one.

#### Dual Norms

Given a vector space $V$ with norm $\|\cdot\|_\diamond : V \rightarrow \Re$ and inner product $\langle \cdot, \cdot \rangle : V \times V \rightarrow \Re$, define the dual norm $\|\cdot\|_\diamond^*$ by $\|x\|_\diamond^* = \sup_{\|y\|_\diamond \le 1} \langle x, y \rangle$. For example, $\|x\|_1$ and $\|x\|_\infty$ are dual to each other. In the space $\Re^{m \times n}$ of matrices with inner product $\langle X, Y\rangle = \operatorname{tr}(X^*Y)$, the nuclear norm $\|X\|_*$ is the dual norm of the spectral norm $\|X\|$.

#### Nuclear norm minimization

The nuclear norm is convex and a good convex surrogate for the rank, and nuclear norm minimization can be cast as a semidefinite program and solved in polynomial time. To guarantee an optimal solution, we say that $\mathcal{A}$ has the rank-restricted isometry property of order $r$ with constant $\delta$ if $\forall X$ with $\operatorname{rank}(X) \le r$, $(1 - \delta)\|X\|_F^2 \le \|\mathcal{A}[X]\|_2^2 \le (1 + \delta)\|X\|_F^2$, and denote by $\delta_r(\mathcal{A})$ the smallest $\delta$ such that the rank-RIP holds. When $\delta_{2r} < 1$ and $y = \mathcal{A}[X_0]$ with $\operatorname{rank}(X_0) \le r$, $X_0$ is the only solution to $y = \mathcal{A}[X]$ with rank $\le r$. If $y = \mathcal{A}[X_0]$ with $\operatorname{rank}(X_0) \le r$ and $\delta_{5r} \le 1/10$, then $X_0$ is the unique optimal solution to the nuclear norm minimization problem. Unfortunately, some common problems do not satisfy the rank-RIP.

#### Applications

##### Latent Semantic Analysis

View each document in a collection of $n$ documents as a bag of words from a dictionary of size $m$, and compute a histogram of word occurrences as an $m$-dimensional vector $y_j$, where $y_{j, i}$ is the fraction of occurrences of word $i$ in document $j$, giving $Y = [y_1 \mid \cdots \mid y_n]$. Assume there's a set of topics $t_1, \ldots, t_r$, where each topic is a probability distribution on $\{1, \ldots, m\}$. Each article may involve multiple topics, modeled as $p_j = \sum_{l = 1}^r t_l\alpha_{l, j}$ where $\alpha_{1, j} + \alpha_{2, j} + \cdots + \alpha_{r, j} = 1$. Assume $y_j$ is generated by sampling words independently at random from $p_j$ and computing a histogram. If the number of words is large, then $y_j \approx p_j$, so $Y \approx TA$ where $T = [t_1 \mid \cdots \mid t_r]$ and $A = [\alpha_1 \mid \cdots \mid \alpha_n]$. LSA computes the best low-rank approximation to $Y$ to use for search and indexing.

##### Recommendation

$m$ users consume some of $n$ products and rate them; we want to use the ratings to predict which products a user would rate well, producing a matrix $X \in \Re^{m \times n}$ which represents how well each user would rate each product. Setting $\Omega = \{(i, j) \mid$ user $i$ has rated product $j\}$, we observe $Y = P_\Omega[X]$, where $P_\Omega[X](i, j) = X_{ij}$ when $(i, j) \in \Omega$ and $0$ otherwise - we want to fill in the missing entries of $Y$ to recover $X$. Assuming $X$ is low-rank gives the problem $\min \operatorname{rank}(X)$ subject to $P_\Omega[X] = Y$.

### Convex Optimization Algorithms

Given the problem of recovering low-dimensional structure in high-dimensional space, certain formulations can provide a solution given some constraints - most frequently that the problem can be solved with some convex optimization. A great deal of work has been expended on developing efficient algorithms for convex optimization. The relevant algorithms should be able to scale to high dimensions, with cheap iterations, many of which will be required for convergence.

#### Relaxed L1 Minimization

We intend to solve $\operatorname{minimize} \lambda \|x\|_1 + \frac{1}{2}\|Ax-y\|_2^2$ instead of $\operatorname{minimize} \|x\|_1$ subject to $Ax=y$. This belongs to a general class of problems which minimize a sum $F(x) = f(x) + g(x)$ of a nonsmooth function $f(x)$ and a smooth function $g(x)$: here $\lambda\|\cdot\|_1$ is the nonsmooth regularizer and $\frac{1}{2}\|Ax-y\|_2^2$ is the smooth fidelity term.

##### Gradient algorithm for smooth functions

The simplest possible case, $f(x) = 0$, makes $F$ differentiable and gives rise to the gradient descent method: start with an initial estimate $x_0$ and repeatedly update it along the negative gradient, $x_{k+1} = x_k - t_k\nabla F(x_k)$, until the estimate has converged; the $t_k$ are step sizes. If $\nabla F$ is Lipschitz, $\|\nabla F(x) - \nabla F(z)\|_2 \le L\|x - z\|_2$ for a constant $L$, we can set $t_k = 1/L$. (Note that for $F(x) = \frac{1}{2}\|Ax - y\|_2^2$, $\nabla F(x) = A^T(Ax - y)$ and $L = \|A^TA\|$.) When $t_k = 1/L$, $F(x_k) - F(x_\star) \le \frac{L\|x_0 - x_\star\|_2^2}{2k}$, so the number of iterations required to ensure $F(x_k) - F(x_\star) \le \epsilon$ is $k \ge \frac{L\|x_0 - x_\star\|_2^2}{2\epsilon}$ for an optimizer $x_\star$ of $F$.
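A sketch of this iteration on the least-squares fidelity term (the problem sizes are arbitrary): with step size $1/L$, plain gradient descent converges to the least-squares solution.

```python
import numpy as np

rng = np.random.default_rng(8)
m, n = 30, 10
A = rng.standard_normal((m, n))
y = rng.standard_normal(m)

# F(x) = 1/2 ||Ax - y||^2; grad F(x) = A^T(Ax - y); Lipschitz L = ||A^T A||.
L = np.linalg.norm(A.T @ A, 2)
x = np.zeros(n)
for _ in range(2000):
    x = x - (1.0 / L) * (A.T @ (A @ x - y))

# The iterates converge to the least-squares solution.
x_star, *_ = np.linalg.lstsq(A, y, rcond=None)
assert np.allclose(x, x_star, atol=1e-6)
```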

##### Subgradient algorithm for nonsmooth functions

When $F$ is nonsmooth, we can use the subgradient method: initialize $x_0$ and repeatedly choose some $g_k \in \delta F(x_k)$, setting $x_{k + 1} = x_k - t_kg_k$. In the case of L1 minimization, we have $\delta F(x) = \lambda\delta \|\cdot\|_1(x) + A^T(Ax-y)$, so $\lambda \operatorname{sign}(x) + A^T(Ax-y) \in \delta F(x)$, giving the iteration $x_{k+1} = x_k - t_k(\lambda \operatorname{sign}(x_k) + A^T(Ax_k - y))$. However, the convergence rate is not good - if $\forall k, \|g_k\|_2 \le G$ and we choose $t_k = \frac{\|x_0 - x_\star\|_2}{\sqrt{k}\|g_k\|_2}$, then $\min_{i=0, \ldots, k} F(x_i) - F(x_\star) \le \frac{G\|x_0 - x_\star\|_2}{\sqrt{k}}$, so the algorithm converges at a $1/\sqrt{k}$ rate - which is essentially the best possible in general for non-differentiable functions.
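A sketch of the subgradient iteration for the relaxed L1 objective, using a diminishing step size in place of the optimal $t_k$ (which requires knowing $x_\star$); all sizes and the step constant are made-up illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 30, 60
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:3] = [2.0, -1.5, 1.0]    # a sparse signal to recover
y = A @ x_true
lam = 0.5

def F(x):
    return lam * np.abs(x).sum() + 0.5 * np.linalg.norm(A @ x - y) ** 2

x = np.zeros(n)
best = F(x)
for k in range(2000):
    g = lam * np.sign(x) + A.T @ (A @ x - y)  # an element of the subdifferential
    x = x - (0.005 / np.sqrt(k + 1)) * g      # diminishing steps (made-up constant)
    best = min(best, F(x))                    # track the best iterate so far
```

Tracking the best iterate matters because subgradient steps need not decrease $F$ monotonically.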

When $F$ is differentiable with $L$-Lipschitz gradient, for any point $z$, $F(x) \le F(z) + \langle x - z, \nabla F(z)\rangle + \frac{L}{2}\|x - z\|_2^2 = Q(x, z)$, the “quadratic upper bound” for $F(x)$ - a strongly convex function of $x$. If $\hat{x}$ is its unique minimizer, $0 \in \delta_xQ(\hat{x}, z)$, i.e. $\nabla F(z) + L(\hat{x} - z) = 0 \rightarrow \hat{x} = z - \frac{1}{L}\nabla F(z)$. So, when $F$ is differentiable, the gradient step can be viewed as minimizing the quadratic upper bound: $x_{k+1} = \operatorname{arg}\min_x Q(x, x_k)$. For non-differentiable $f$, we can instead set $Q(x, z) = f(x) + g(z) + \langle x - z, \nabla g(z)\rangle + \frac{L}{2}\|x-z\|_2^2$, which upper bounds $F(x) = f(x) + g(x)$. This only works when $Q(x, x_k)$ can be minimized efficiently.

For $f(x) = \|x\|_1$, dropping terms in $Q(x, z)$ constant with respect to $x$ we have $\operatorname{arg}\min_x Q(x, z) = \operatorname{arg}\min_x f(x) + \langle x, \nabla g(z)\rangle + \frac{L}{2}\|x - z\|_2^2$ $= \operatorname{arg}\min_x f(x) + \frac{L}{2}\|x - (z - \frac{1}{L}\nabla g(z))\|_2^2$. Setting $h = z - \frac{1}{L} \nabla g(z)$, we need to solve $\min_x \frac{1}{L}f(x) + \frac{1}{2}\|x - h\|_2^2$. Subdifferentiating with $f(x) = \|x\|_1$, $x_\star$ is a solution iff $0 \in \frac{1}{L}\delta\|\cdot\|_1(x_\star) + x_\star - h \rightarrow x_\star \in h - \frac{1}{L}\delta \|\cdot \|_1(x_\star)$. Substituting in the subdifferential of the L1 norm, we have $x_\star(i) = h(i) - \frac{1}{L}$ when $h(i) > \frac{1}{L}$, $0$ when $h(i) \in [-\frac{1}{L}, \frac{1}{L}]$, and $h(i) + \frac{1}{L}$ when $h(i) < -\frac{1}{L}$ - that is, $x_\star = S_{1/L}(h)$, where $S_\tau$ denotes soft thresholding with threshold $\tau$. Then, setting $L = \|A^TA\|$ and accounting for the weight $\lambda$ on the L1 term, the proximal gradient descent update is $x_{k+1} = S_{\lambda/L}(x_k - \frac{1}{L}A^T(Ax_k - y))$. Each step reduces the smooth term, then applies the soft thresholding operator to decrease the L1 norm.
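The resulting proximal gradient iteration (often called ISTA) is easy to sketch in numpy; the dimensions, sparsity pattern, and $\lambda$ below are made-up illustrative values:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 40, 100
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[5, 20, 77]] = [3.0, -2.0, 1.5]   # sparse ground truth
y = A @ x_true
lam = 0.1

L = np.linalg.norm(A.T @ A, 2)           # Lipschitz constant of the smooth term

def soft_threshold(h, tau):
    """S_tau(h): shrink each entry of h toward zero by tau, clipping at zero."""
    return np.sign(h) * np.maximum(np.abs(h) - tau, 0.0)

x = np.zeros(n)
for _ in range(3000):
    # gradient step on the smooth term, then soft thresholding on the result
    x = soft_threshold(x - (1.0 / L) * (A.T @ (A @ x - y)), lam / L)
```
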

##### Proximity Operator

Soft thresholding is the “proximity operator” of the L1 norm. The convergence rate of the proximal gradient algorithm applied to $F(x) = f(x) + g(x)$ is not affected by the presence of the nonsmooth $f(x)$: if $g$ is convex and differentiable with an $L$-Lipschitz gradient, and $f$ is convex, then $F(x_k) - F(x_\star) \le \frac{L\|x_0 - x_\star\|_2^2}{2k}$.

##### Bounds on first-order methods

To try to obtain a lower bound on the amount of computation needed to solve an optimization problem (can we do better than gradient descent?), we can study $F$ with $f(x) = 0$. A first-order method can only access $F$ through its values and gradients, and has some rule to determine when to stop, at which point it outputs an approximate solution. The number of steps required is the method's complexity, and the difference between the output at the stopping point and the true optimum is the accuracy. We seek a lower bound, which would assert that no first-order method can guarantee to optimize all $F$ to within some error $\epsilon$ in $T$ steps. It has been shown that for the set of convex, differentiable functions whose gradient is $L$-Lipschitz, with a minimizer $x_\star$ satisfying $\|x_\star\|_2 \le R$, the complexity is lower bounded by $O(1)\min(n, \sqrt{\frac{LR^2}{\epsilon}})$. This is smaller than the $O(\frac{LR^2}{\epsilon})$ iterations required by gradient descent, so there may be room for improvement.

The lower bound on first-order methods (for smooth $F$) can be achieved by setting $x_0 = 0$, $y_0 = 0$, $t_0 = 1$ and iterating $x_{k + 1} = y_k - \frac{1}{L}\nabla F(y_k)$, $t_{k + 1} = \frac{1 + \sqrt{1 + 4t_k^2}}{2}$, and $y_{k + 1} = x_{k + 1} + \frac{t_{k} - 1}{t_{k + 1}}(x_{k + 1} - x_k)$. This algorithm takes the gradient step at a sequence of auxiliary points chosen for the fastest possible convergence - each is the latest iterate given an additional “push” in the direction it just moved. The iterates satisfy $F(x_k) - F(x_\star) \le \frac{2LR^2}{(k+1)^2}$, matching the lower bound. For nonsmooth $F$, we replace the gradient step with a minimization of $Q(x, z) = f(x) + g(z) + \langle \nabla g(z), x - z\rangle + \frac{L}{2}\|x - z\|_2^2$; the resulting accelerated proximal gradient algorithm (FISTA) satisfies the same bound.
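A sketch of the accelerated (FISTA-style) iteration for the L1-regularized problem, reusing the soft thresholding step; sizes and $\lambda$ are again made-up values, and the auxiliary point is named `z` in the code to avoid clashing with the data `y`:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 40, 100
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[5, 20, 77]] = [3.0, -2.0, 1.5]   # sparse ground truth
y = A @ x_true
lam = 0.1
L = np.linalg.norm(A.T @ A, 2)

def soft_threshold(h, tau):
    return np.sign(h) * np.maximum(np.abs(h) - tau, 0.0)

x = np.zeros(n)
z = x.copy()    # auxiliary point (y_k in the text)
t = 1.0
for _ in range(500):
    # proximal gradient step taken at the auxiliary point z
    x_new = soft_threshold(z - (1.0 / L) * (A.T @ (A @ z - y)), lam / L)
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    # "push" past the new point in the direction just moved
    z = x_new + ((t - 1.0) / t_new) * (x_new - x)
    x, t = x_new, t_new
```

The extra momentum step costs almost nothing per iteration but improves the rate from $1/k$ to $1/k^2$.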