First Order Methods¶
This module implements first order methods for sparse signal recovery problems proposed in [BCandesG11]. The implementation is adapted from TFOCS [BCG12].
First order methods exploit information on the values and gradients/subgradients (but not Hessians) of the functions comprising the models under consideration [Bec17].
API¶
Solvers

fom(): First order methods driver routine
First order solver for smooth conic dual problems (driver routine)
l1rls(): Solver for the l1 regularized least squares problem
lasso(): Solver for the LASSO problem
owl1rls(): Solver for the ordered weighted l1 norm regularized least squares problem
Solver for the (smoothed) Dantzig selector problem using the smoothed conic dual formulation

Data types

Options for the FOCS driver routine
State of the FOCS method

Utilities

Returns an affine function for a matrix A and vector b
First Order Methods¶
[BCandesG11] considers problems in the unconstrained form:

\[\underset{x \in \RR^n}{\text{minimize}} \, \phi(x) = g(x) + h(x)\]

where:
Both \(g, h: \RR^n \to (\RR \cup \infty)\) are convex functions.
\(g : \RR^n \to \RR\) is a smooth convex function.
\(h : \RR^n \to \RR\) is a non-smooth convex function.
Of particular interest are problems where \(g\) takes the specific form:

\[g(x) = f(\AAA(x) + b)\]

where
\(\AAA\) is a linear operator from \(\RR^n \to \RR^m\)
\(b\) is a translation vector
\(f : \RR^m \to \RR\) is a smooth convex function
and the function \(h\) is a prox-capable convex function (to be described later).
We can then rewrite \(\phi\) as:

\[\phi(x) = f(\AAA(x) + b) + h(x)\]
For a smooth function, its gradient \(\nabla f\) must exist and be easy to compute.
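For instance, with JAX such gradients can be obtained by automatic differentiation. The snippet below is a small illustration (not part of this module's API), using the quadratic \(f(x) = \frac{1}{2}\|x\|_2^2\) whose gradient is \(x\):

```python
import jax.numpy as jnp
from jax import grad

# Smooth part: f(x) = 0.5 * ||x||_2^2; its gradient is simply x.
f = lambda x: 0.5 * jnp.vdot(x, x)
grad_f = grad(f)

x = jnp.array([1.0, -2.0, 3.0])
print(grad_f(x))   # [ 1. -2.  3.], matching the analytical gradient
```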
For a prox-capable function, it should have an efficient proximal operator:

\[\text{prox}_{t h}(x) = \underset{z \in \RR^n}{\text{argmin}} \left( h(z) + \frac{1}{2t} \| z - x \|_2^2 \right)\]

for any \(x \in \RR^n\) and any step size \(t > 0\).
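For example, when \(h(x) = \lambda \| x \|_1\), the proximal operator reduces to elementwise soft thresholding. A minimal JAX sketch of this operator (an illustration of the formula above, not the implementation behind cr.sparse.opt.prox_l1()):

```python
import jax.numpy as jnp

def prox_l1(x, t, lam=1.0):
    """Proximal operator of h(x) = lam * ||x||_1 with step size t,
    i.e. elementwise soft thresholding with threshold t * lam."""
    return jnp.sign(x) * jnp.maximum(jnp.abs(x) - t * lam, 0.0)

x = jnp.array([3.0, -0.5, 0.2, -2.0])
print(prox_l1(x, t=1.0))   # [ 2. -0.  0. -1.]
```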
See the following sections for details:
The routine fom() provides the solver for the minimization problem described above.
Unconstrained smooth minimization problems can be handled by choosing \(h(x) = 0\); see cr.sparse.opt.prox_zero(). Convex constraints can be handled by adding their indicator functions as part of \(h\).
To solve the minimization problem, an initial solution \(x_0\) must be provided. If unsure, one can simply use \(x_0 = 0 \in \RR^n\).
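To make the roles of the gradient and the proximal operator concrete, here is a bare-bones proximal gradient loop in JAX. It is only a sketch of the kind of iteration such a solver performs; the fixed step size, iteration count, and absence of acceleration or stopping tests are simplifications, not the actual fom() algorithm:

```python
import jax.numpy as jnp

def proximal_gradient(grad_g, prox_h, x0, step, num_iters=100):
    """Minimize g(x) + h(x) by iterating x <- prox_h(x - step * grad_g(x), step)."""
    x = x0
    for _ in range(num_iters):
        x = prox_h(x - step * grad_g(x), step)
    return x

# Example: unconstrained least squares, i.e. g(x) = 0.5 * ||A x - b||_2^2 and h(x) = 0.
A = jnp.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = jnp.array([1.0, 2.0, 3.0])
grad_g = lambda x: A.T @ (A @ x - b)
prox_zero = lambda x, t: x                       # prox of h(x) = 0 is the identity
step = 1.0 / (jnp.linalg.norm(A, 2) ** 2)        # 1 / Lipschitz constant of grad_g
print(proximal_gradient(grad_g, prox_zero, jnp.zeros(2), step))
```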
Smooth Conic Dual Problems¶
A number of sparse signal recovery problems can be cast as conic optimization problems. Then efficient first order methods can be used to solve the problem. This procedure often involves the following steps:
Cast the sparse recovery problem as a conic optimization problem
Augment the objective function with a strongly convex term to make it smooth.
Formulate the dual of the smooth problem.
Solve the smooth conic dual problem using a first order method.
Apply continuation to mitigate the effect of the strongly convex term added to the original objective function.
Our goal is to solve conic problems of the form [BCandesG11]:

\[\underset{x}{\text{minimize}} \, f(x) \quad \text{subject to} \quad \AAA(x) + b \in \KKK\]

where:
\(x \in \RR^n\) is the optimization variable
\(f :\RR^n \to \RR\) is a convex objective function which is possibly extended valued but not necessarily smooth
\(\AAA : \RR^n \to \RR^m\) is a linear operator
\(b \in \RR^m\) is a translation vector
\(\KKK \subseteq \RR^m\) is a closed, convex cone
Dual Problem
The Lagrangian associated with the conic problem is given by:

\[\mathcal{L}(x, \lambda) = f(x) - \langle \lambda, \AAA(x) + b \rangle\]

where \(\lambda \in \RR^m\) is the Lagrange multiplier, which must lie in the dual cone \(\KKK^*\).
The dual function is:

\[g(\lambda) = \inf_x \mathcal{L}(x, \lambda) = - f^*(\AAA^* \lambda) - \langle \lambda, b \rangle\]

where:
\(\AAA^*: \RR^m \to \RR^n\) is the adjoint of the linear operator \(\AAA\)
\(f^* : \RR^n \to (\RR \cup \infty)\) is the convex conjugate of \(f\) defined by

\[f^*(z) = \sup_{x \in \RR^n} \left( \langle z, x \rangle - f(x) \right)\]

Thus, the dual problem is given by:

\[\text{maximize } g(\lambda) = - f^*(\AAA^* \lambda) - \langle \lambda, b \rangle \quad \text{subject to} \quad \lambda \in \KKK^*\]
Smoothing
The dual function \(g\) may not be differentiable (or finite) on all of \(\KKK^*\).
We perturb the original problem as follows:

\[\text{minimize } f_{\mu}(x) = f(x) + \mu d(x) \quad \text{subject to} \quad \AAA(x) + b \in \KKK\]

where:
\(\mu > 0\) is a fixed smoothing parameter
\(d : \RR^n \to \RR\) is a strongly convex function obeying

\[d(x) \geq d(x_0) + \frac{1}{2} \| x - x_0 \|_2^2\]

for some fixed center point \(x_0 \in \RR^n\).
A specific choice for \(d(x)\) is:

\[d(x) = \frac{1}{2} \| x - x_0 \|_2^2\]
The new objective function \(f_{\mu}\) is strongly convex.
The Lagrangian becomes:

\[\mathcal{L}_{\mu}(x, \lambda) = f(x) + \mu d(x) - \langle \lambda, \AAA(x) + b \rangle\]

The dual function becomes:

\[g_{\mu}(\lambda) = \inf_x \left( f(x) + \mu d(x) - \langle \lambda, \AAA(x) + b \rangle \right)\]

The smoothed conic dual problem is then given by:

\[\text{maximize } g_{\mu}(\lambda) \quad \text{subject to} \quad \lambda \in \KKK^*\]

Since the perturbed objective is strongly convex, \(g_{\mu}\) is smooth with a Lipschitz continuous gradient, and first order methods may be employed to solve this problem with provable performance.
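To make the smoothing step concrete, take \(f(x) = \|x\|_1\) and \(d(x) = \frac{1}{2}\|x - x_0\|_2^2\). Completing the square shows that the minimizer \(x(\lambda)\) of the inner problem defining \(g_{\mu}(\lambda)\) is a soft thresholding of \(x_0 + \AAA^*(\lambda)/\mu\) with threshold \(1/\mu\); by Danskin's theorem, \(\nabla g_{\mu}(\lambda) = -(\AAA(x(\lambda)) + b)\), so each dual iteration only needs this inner minimizer plus applications of \(\AAA\) and \(\AAA^*\). The JAX fragment below illustrates the inner step with a plain matrix standing in for \(\AAA\) (an illustration of the math, not the library's code):

```python
import jax.numpy as jnp

def soft_threshold(v, thresh):
    """Elementwise soft thresholding, i.e. the prox of thresh * ||.||_1."""
    return jnp.sign(v) * jnp.maximum(jnp.abs(v) - thresh, 0.0)

def inner_minimizer(A, lam_dual, x0, mu):
    """argmin_x ||x||_1 + (mu/2) * ||x - x0||^2 - <lam_dual, A @ x + b>.

    Completing the square reduces this to soft thresholding of
    x0 + A^T lam_dual / mu with threshold 1 / mu."""
    v = x0 + A.T @ lam_dual / mu
    return soft_threshold(v, 1.0 / mu)

A = jnp.array([[1.0, 0.0, 2.0], [0.0, 1.0, -1.0]])
x0 = jnp.zeros(3)
lam_dual = jnp.array([0.5, -1.0])
print(inner_minimizer(A, lam_dual, x0, mu=0.1))
```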
Composite Forms
Often, it is convenient to split the dual variable \(\lambda \in \RR^m\) as \(\lambda = (z, \tau)\) where \(z \in \RR^{m-\bar{m}}\) and \(\tau \in \RR^{\bar{m}}\). For example, suppose \(\KKK^* = \LLL^n_1\), the \(\ell_1\) norm cone (i.e. the epigraph of the \(\ell_1\) norm function). If we split \(\lambda \in \RR^{n+1}\) as \(\lambda = (z, \tau)\) where \(z \in \RR^n\) and \(\tau \in \RR\), then:

\[\lambda = (z, \tau) \in \LLL^n_1 \iff \| z \|_1 \leq \tau\]

This is particularly useful if the smoothed dual function \(g_{\mu}(\lambda) = g_{\mu}(z, \tau)\) is linear in \(\tau\). Then we can rewrite \(g_{\mu}\) as:
Since \(g_{\mu}\) is a smooth concave function, \(g_{\text{sm}}\) will be a smooth convex function.
Problem Instantiations
In the rest of the document, we will discuss how specific sparse signal recovery problems can be modeled and solved using this module, either as primal problems or as smooth conic dual problems.
L1 regularized least squares problem¶
We consider the problem:

\[\underset{x \in \RR^n}{\text{minimize}} \, \frac{1}{2} \| A x - b \|_2^2 + \lambda \| x \|_1\]
Choose:
\(f(x) = \frac{1}{2}\| x \|_2^2\), see cr.sparse.opt.smooth_quad_matrix()
\(h(x) = \| \lambda x \|_1\), see cr.sparse.opt.prox_l1()
\(\AAA\) as the linear operator
\(-b\) as the translate input
With these choices, it is straightforward to use fom() to solve the L1RLS problem. This is implemented in the function l1rls().
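As an illustration of what these choices amount to, the fragment below runs a plain ISTA-style proximal gradient loop for the L1RLS objective directly in JAX. It is a sketch of the underlying iteration only; the step size, iteration count, and problem data are assumptions, and it is not the l1rls() implementation:

```python
import jax.numpy as jnp

def l1rls_ista(A, b, lam, num_iters=200):
    """Minimize 0.5 * ||A x - b||_2^2 + lam * ||x||_1 by proximal gradient descent."""
    step = 1.0 / (jnp.linalg.norm(A, 2) ** 2)    # 1 / Lipschitz constant of the gradient
    x = jnp.zeros(A.shape[1])
    for _ in range(num_iters):
        v = x - step * (A.T @ (A @ x - b))       # gradient step on the smooth part
        x = jnp.sign(v) * jnp.maximum(jnp.abs(v) - step * lam, 0.0)   # prox of lam * ||.||_1
    return x

A = jnp.array([[1.0, 0.5, 0.0], [0.0, 1.0, 0.5], [0.5, 0.0, 1.0]])
b = A @ jnp.array([1.0, 0.0, -1.0])              # measurements of a sparse vector
print(l1rls_ista(A, b, lam=0.01))                # close to [1, 0, -1], slightly shrunk
```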
LASSO¶
LASSO (least absolute shrinkage and selection operator) is a regression analysis method that performs both variable selection and regularization in order to enhance the prediction accuracy and interpretability of the resulting statistical model.
We consider the problem:

\[\underset{x \in \RR^n}{\text{minimize}} \, \frac{1}{2} \| A x - b \|_2^2 \quad \text{subject to} \quad \| x \|_1 \leq \tau\]
Choose:
\(f(x) = \frac{1}{2}\| x \|_2^2\), see cr.sparse.opt.smooth_quad_matrix()
\(h(x) = I_C(x)\), the indicator function of the l1-ball \(C = \{x : \| x \|_1 \leq \tau\}\), see cr.sparse.opt.prox_l1_ball()
\(\AAA\) as the linear operator
\(-b\) as the translate input
With these choices, it is straightforward to use fom() to solve the LASSO problem. This is implemented in the function lasso().
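The proximal operator of the indicator \(I_C\) is the Euclidean projection onto the \(\ell_1\) ball \(C\). Below is a self-contained sketch of the standard sort-based projection (for illustration only; it runs in eager mode and is not the cr.sparse.opt.prox_l1_ball() implementation):

```python
import jax.numpy as jnp

def project_l1_ball(x, tau=1.0):
    """Euclidean projection of x onto {z : ||z||_1 <= tau} (eager-mode sketch,
    not jit-compatible because of the Python branch)."""
    abs_x = jnp.abs(x)
    if jnp.sum(abs_x) <= tau:
        return x                                   # already inside the ball
    # Find the soft-threshold level theta from the sorted magnitudes.
    u = jnp.sort(abs_x)[::-1]                      # descending
    cssv = jnp.cumsum(u) - tau
    k = jnp.arange(1, x.size + 1)
    rho = jnp.max(jnp.where(u - cssv / k > 0, k, 0))
    theta = cssv[rho - 1] / rho
    return jnp.sign(x) * jnp.maximum(abs_x - theta, 0.0)

x = jnp.array([0.8, -0.6, 0.3])
print(project_l1_ball(x, tau=1.0))                 # the result has l1 norm 1
```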
Ordered weighted L1 regularized least squares problem¶
We consider the problem:

\[\underset{x \in \RR^n}{\text{minimize}} \, \frac{1}{2} \| A x - b \|_2^2 + \sum_{i=1}^n \lambda_i | x |_{(i)}\]

described in [lBvdBSC13], where \(\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n \geq 0\) are the weights and \(|x|_{(i)}\) denotes the \(i\)-th largest entry of \(x\) by absolute value.
Choose:
\(f(x) = \frac{1}{2}\| x \|_2^2\), see cr.sparse.opt.smooth_quad_matrix()
\(h(x) = \sum_{i=1}^n \lambda_i | x |_{(i)}\), see cr.sparse.opt.prox_owl1()
\(\AAA\) as the linear operator
\(-b\) as the translate input
With these choices, it is straightforward to use fom() to solve the ordered weighted L1 regularized least squares problem. This is implemented in the function owl1rls().
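For reference, the ordered weighted \(\ell_1\) norm itself is straightforward to evaluate; the small JAX helper below (illustrative only, independent of the library) shows the sorting convention used above:

```python
import jax.numpy as jnp

def owl1_norm(x, weights):
    """Ordered weighted l1 norm: sum_i weights[i] * |x|_(i), where |x|_(1) >= |x|_(2) >= ...
    are the magnitudes of x sorted in decreasing order and the weights are
    non-negative and non-increasing."""
    sorted_abs = jnp.sort(jnp.abs(x))[::-1]
    return jnp.sum(weights * sorted_abs)

x = jnp.array([0.5, -2.0, 1.0])
weights = jnp.array([3.0, 2.0, 1.0])
print(owl1_norm(x, weights))   # 3*2.0 + 2*1.0 + 1*0.5 = 8.5
```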
The Dantzig selector¶
We wish to recover an unknown vector \(x_0 \in \RR^n\) from the data \(y \in \RR^m\) and the model:

\[y = A x_0 + z\]
where
\(A\) is a known \((m, n)\) sized design matrix
\(z\) is the noise term
There are fewer observations/measurements than unknowns (\(m \lt n\))
We consider the optimization problem:

\[\underset{x \in \RR^n}{\text{minimize}} \, \| x \|_1 \quad \text{subject to} \quad \| A^* (y - A x) \|_{\infty} \leq \delta\]

In this formulation:
We are minimizing the \(\ell_1\) norm of the solution (thus promoting sparsity)
We have an upper bound \(\delta\) on the correlation between the residual vector \(r = y - A x\) and the columns/atoms of \(A\).
This formulation is known as the Dantzig selector.
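The constraint has a simple interpretation: every column of \(A\) must be nearly uncorrelated with the residual. A quick numerical check in JAX (purely illustrative; the matrix and tolerance are made up):

```python
import jax.numpy as jnp

def dantzig_feasible(A, y, x, delta):
    """Check the Dantzig selector constraint ||A^T (y - A x)||_inf <= delta."""
    correlation = A.T @ (y - A @ x)
    return jnp.max(jnp.abs(correlation)) <= delta

A = jnp.array([[1.0, 0.0, 1.0], [0.0, 1.0, -1.0]])
x0 = jnp.array([1.0, 0.0, 0.0])
y = A @ x0                                         # noiseless measurements
print(dantzig_feasible(A, y, x0, delta=0.1))       # True: the residual is zero
```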
The conic form
We can rewrite the Dantzig selector problem as:

\[\underset{x \in \RR^n}{\text{minimize}} \, \| x \|_1 \quad \text{subject to} \quad \left( A^* (y - A x), \delta \right) \in \LLL^n_{\infty}\]
where \(\LLL^n_{\infty}\) is the epigraph of the \(\ell_{\infty}\)-norm. It is a convex cone.
Then the Dantzig selector can be modeled as a standard conic form as follows:
\(f(x) = \| x \|_1\)
\(\AAA(x) = (-A^* A x, 0)\) is a mapping from \(\RR^n \to \RR^{n+1}\)
\(b = (A^* y, \delta)\); note that \(b \in \RR^{n+1}\)
\(\KKK = \LLL^n_{\infty}\)
Note carefully that:

\[\AAA(x) + b = (A^* y - A^* A x, \delta) = \left( A^*(y - A x), \delta \right)\]

Thus:

\[\AAA(x) + b \in \KKK \iff \| A^*(y - A x) \|_{\infty} \leq \delta\]
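A small numerical sanity check of this equivalence, with \(A\) as an explicit matrix (purely illustrative):

```python
import jax.numpy as jnp

A = jnp.array([[1.0, 0.0, 1.0], [0.0, 1.0, -1.0]])
y = jnp.array([1.0, 2.0])
x = jnp.array([0.5, 1.5, 0.0])
delta = 1.0

# The conic map: A(x) + b = (A^T y - A^T A x, delta) = (A^T (y - A x), delta).
u = A.T @ y - A.T @ (A @ x)
in_cone = jnp.max(jnp.abs(u)) <= delta             # (u, delta) lies in the l_inf epigraph cone
direct = jnp.max(jnp.abs(A.T @ (y - A @ x))) <= delta
print(bool(in_cone), bool(direct))                 # the two checks agree
```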
Dual problem
The dual of the \(\LLL^n_{\infty}\) cone is the \(\LLL^n_{1}\) cone, so the dual variable \(\lambda\) will lie in \(\LLL^n_1\). It is easier to work with the splitting \(\lambda = (z, \tau)\), with \(z \in \RR^n\) and \(\tau \in \RR\), such that

\[\| z \|_1 \leq \tau\]
The convex conjugate of the l1 norm function \(f(x) = \| x \|_1\) is the indicator function for the \(\ell_{\infty}\) norm ball.
The adjoint of the linear operator is given by:

\[\AAA^*(z, \tau) = -A^* A z\]
Plugging these into the dual problem definition, we get:

\[\text{maximize } - f^*(-A^* A z) - \langle A^* y, z \rangle - \delta \tau \quad \text{subject to} \quad (z, \tau) \in \LLL^n_1\]
For \(\delta > 0\), the solution \(\lambda\) will be on the boundary of the convex cone \(\LLL^n_1\), giving us:

\[\tau = \| z \|_1\]
Plugging this back, we get the unconstrained maximization problem:

\[\underset{z \in \RR^n}{\text{maximize}} \, - f^*(-A^* A z) - \langle A^* y, z \rangle - \delta \| z \|_1\]