Linear Convergence of First-Order Methods for Nonconvex Regularized Optimization Based on Error Bound-Type Conditions
Title:
Linear Convergence of First-Order Methods for Nonconvex Regularized Optimization Based on Error Bound-Type Conditions
Author:
Zhang, Qi, author.
ISBN:
9780438147973
Added Author:
Physical Description:
1 electronic resource (121 pages)
General Note:
Source: Dissertation Abstracts International, Volume: 79-11(E), Section: B.
Advisor: Man Cho So.
Abstract:
With the increasing demands of information technology, researchers have paid much attention to numerical algorithms for solving practical optimization problems involving large datasets. One of the key measures of the performance of these iterative algorithms is the rate at which the generated iterates converge to an optimal solution. Early results show that many first-order methods achieve a sublinear rate for general smooth convex functions and a linear rate when strong convexity is assumed. However, strong convexity is too rigid an assumption for many prevalent applications, such as the well-known Lasso, Group Lasso, and even nonconvex matrix factorization problems.
In numerical experiments, however, most first-order methods still exhibit linear convergence without strong convexity. Motivated by this encouraging phenomenon, researchers began to investigate the problem structure more closely in search of a theoretical explanation for the faster rate. Subsequently, various so-called error bound-type conditions, including the error bound condition and the Łojasiewicz inequality, were introduced as working tools for convergence analysis. They can be regarded as relaxed notions of strong convexity. Once these conditions are verified for specific structured problems, a variety of first-order algorithms can be shown to attain a linear rate of convergence.
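For reference, the two conditions are usually stated along the following lines for a composite problem min F(x) = f(x) + g(x) with smooth f, solution set 𝒳*, and proximal residual R(x) = x − prox_g(x − ∇f(x)); the exact constants, neighborhoods, and nonsmooth generalizations used in the thesis may differ from this sketch.

```latex
% Error bound (Luo--Tseng type): for some \kappa, \epsilon > 0,
\operatorname{dist}(x, \mathcal{X}^\ast) \;\le\; \kappa \,\|R(x)\|
  \quad \text{whenever } \|R(x)\| \le \epsilon .

% Lojasiewicz (Kurdyka--Lojasiewicz) inequality near a critical point \bar{x},
% with exponent \theta \in [1/2, 1); \theta = 1/2 typically yields a linear rate:
|F(x) - F(\bar{x})|^{\theta} \;\le\; C \,\operatorname{dist}\!\big(0, \partial F(x)\big)
  \quad \text{for } x \text{ near } \bar{x} .
```

Both inequalities follow from strong convexity when it holds, which is why they are viewed as relaxations of it.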
Yet checking error bound-type conditions usually involves sophisticated machinery. In this thesis, we consider two interesting classes of regularized optimization problems. The first involves the ℓ1,p-norm regularizer, which induces sparsity in machine learning, signal processing, and many other popular domains. We show that the error bound condition holds when p ∈ [1, 2] or p = +∞, so that many first-order algorithms, such as the Proximal Gradient (PG) method and the Block Coordinate Gradient Descent (BCGD) method, converge linearly once the iterates are close enough to the optimal set. For p ∈ (2, +∞), by contrast, we construct a counterexample in which the error bound condition fails.
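As a concrete illustration of the p = 2 case (the Group Lasso), a minimal proximal gradient sketch for a least-squares loss with an ℓ1,2 regularizer might look as follows; the group partition, step size, and regularization weight are illustrative choices, not details taken from the thesis.

```python
import numpy as np

def prox_l12(v, groups, tau):
    """Proximal operator of tau * sum_g ||v_g||_2 (group soft-thresholding)."""
    out = np.zeros_like(v)
    for g in groups:
        norm_g = np.linalg.norm(v[g])
        if norm_g > tau:
            out[g] = (1.0 - tau / norm_g) * v[g]
    return out

def proximal_gradient(A, b, groups, lam, step, iters=500):
    """PG iteration for min_x 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of the smooth least-squares part
        x = prox_l12(x - step * grad, groups, step * lam)
    return x

# Small illustrative instance with two coordinate groups.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
groups = [np.arange(0, 5), np.arange(5, 10)]
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L with L = ||A||_2^2
x_hat = proximal_gradient(A, b, groups, lam=0.5, step=step)
print(x_hat)
```

With step size 1/L, each iteration soft-thresholds whole groups of coordinates toward zero, which is precisely the sparsity pattern the ℓ1,2 regularizer is meant to induce.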
The second class consists of the prevalent matrix factorization problems with various regularizers that induce desired structure. A nonconvex Alternating Direction Method of Multipliers (ADMM) is applied and shown to achieve local linear convergence to the global optimum under a Łojasiewicz inequality assumption. As an illustration, several specific regularizers are discussed and verified to satisfy the Łojasiewicz inequality. For completeness, numerical experiments are presented to support the theoretical results in both cases.
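For orientation, a common way to set up such a nonconvex ADMM is to split the regularizer off with an auxiliary variable; the factorization model, the regularizer r, and the penalty parameter ρ below are generic assumptions for illustration rather than the exact formulation from the thesis.

```latex
\min_{X, Y}\; \tfrac{1}{2}\|M - XY^{\top}\|_F^2 + r(Y)
\;\;\Longleftrightarrow\;\;
\min_{X, Y, Z}\; \tfrac{1}{2}\|M - XY^{\top}\|_F^2 + r(Z)
\quad \text{s.t. } Y = Z,

% with augmented Lagrangian (\rho > 0, multiplier \Lambda):
\mathcal{L}_{\rho}(X, Y, Z; \Lambda)
 = \tfrac{1}{2}\|M - XY^{\top}\|_F^2 + r(Z)
 + \langle \Lambda,\; Y - Z \rangle
 + \tfrac{\rho}{2}\|Y - Z\|_F^2 .
```

ADMM then alternates minimization of the augmented Lagrangian over X, Y, and Z, followed by a dual ascent step on Λ.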
Notes:
School code: 1307
Corporate Added Entry:
Available:*
| Call Number | Item Number | Shelf Location | Location / Status / Due Date |
|---|---|---|---|
| XX(697043.1) | 697043-1001 | Proquest E-Tez Koleksiyonu | Searching... |


