Linear Convergence of First-Order Methods for Nonconvex Regularized Optimization Based on Error Bound-Type Conditions
by
 
Zhang, Qi, author.

Title
Linear Convergence of First-Order Methods for Nonconvex Regularized Optimization Based on Error Bound-Type Conditions

Author
Zhang, Qi, author.

ISBN
9780438147973

Added Author Entry
Zhang, Qi, author.

Physical Description
1 electronic resource (121 pages)

General Note
Source: Dissertation Abstracts International, Volume: 79-11(E), Section: B.
 
Advisors: Man Cho So.

Abstract
With the growing demand for information technology, researchers have paid much attention to numerical algorithms for solving practical optimization problems involving large datasets. One of the key criteria for evaluating the performance of these iterative algorithms is the rate at which the generated iterates converge to an optimal solution. Early results show that many first-order methods achieve a sublinear rate for general smooth convex functions, and a linear rate when strong convexity is assumed. However, strong convexity may be too rigid a requirement for many prevalent applications, such as the well-known Lasso and Group Lasso, and even nonconvex matrix factorization problems.
 
In numerical experiments, however, most first-order methods still exhibit linear convergence without strong convexity. Motivated by this encouraging phenomenon, researchers began to investigate the problem structure more closely, seeking a theoretical explanation for the faster rate. Subsequently, various so-called error bound-type conditions, including the error bound condition and the Łojasiewicz inequality, were proposed as working tools for convergence analysis. They can be regarded as relaxed forms of strong convexity. Once these conditions are verified for specific structured problems, many first-order algorithms can be shown to attain a linear rate of convergence.
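For reference, representative forms of these two conditions for a composite objective F = f + g with optimal solution set X* can be written as follows; the precise definitions and constants used in the thesis may differ.

\operatorname{dist}(x, \mathcal{X}^*) \;\le\; \kappa \,\bigl\| \operatorname{prox}_{g}\bigl(x - \nabla f(x)\bigr) - x \bigr\|   (proximal error bound)
\operatorname{dist}\bigl(0, \partial F(x)\bigr)^{2} \;\ge\; c\,\bigl(F(x) - F^*\bigr)   (Łojasiewicz inequality with exponent 1/2)

Both inequalities hold under strong convexity but also on much broader problem classes, which is why they are useful relaxations in convergence analysis.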
 
Verifying error bound-type conditions, however, usually requires sophisticated machinery. In this thesis, we consider two interesting applications of regularized optimization. The first is the ℓ_{1,p}-norm regularizer, which induces sparsity in machine learning, signal processing, and many other popular domains. We show that the error bound condition holds when p ∈ [1, 2] or p = +∞, and hence many first-order algorithms, such as the Proximal Gradient (PG) method and the Block Coordinate Gradient Descent (BCGD) method, converge linearly once the iterates are sufficiently close to the optimal set. For p ∈ (2, +∞), in contrast, we construct a counterexample in which the error bound fails.
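As a small illustration of this regime, below is a minimal sketch (not taken from the thesis) of the PG method for the p = 2 case, i.e. the group-sparse problem min_x (1/2)||Ax - b||^2 + λ Σ_g ||x_g||_2; the data A, b, the group partition, and the constant step size 1/L are all assumptions made for the sketch.

import numpy as np

def prox_group_l2(x, groups, t):
    # Block soft-thresholding: proximal operator of t * sum_g ||x_g||_2.
    out = x.copy()
    for g in groups:
        norm_g = np.linalg.norm(x[g])
        if norm_g > 0:
            out[g] = max(0.0, 1.0 - t / norm_g) * x[g]
    return out

def proximal_gradient(A, b, groups, lam, n_iter=500):
    # PG iterations for 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = prox_group_l2(x - grad / L, groups, lam / L)
    return x

# Hypothetical usage with random data and two coordinate groups.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((100, 20)), rng.standard_normal(100)
groups = [np.arange(0, 10), np.arange(10, 20)]
x_hat = proximal_gradient(A, b, groups, lam=1.0)

Under the error bound established for p ∈ [1, 2], iterations of this form contract the distance to the optimal set at a linear rate once they enter a neighborhood of it.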
 
The second case is the prevailing matrix factorization problem with various regularizers for inducing desired structure. A nonconvex Alternating Direction Method of Multipliers (ADMM) is applied and shown to achieve local linear convergence to the global optimum under a Łojasiewicz inequality assumption. As illustration, several specific regularizers are discussed and verified to satisfy the Łojasiewicz inequality. To complete the picture, numerical experiments are presented to support the theoretical results in both cases.
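For concreteness, the following is a schematic sketch only: a simple nonconvex ADMM splitting for min_{U,V} (1/2)||M - U V^T||_F^2 + λ||V||_1, where an auxiliary variable Z is constrained to equal V. The ℓ1 regularizer, the penalty parameter ρ, and this particular splitting are assumptions for the illustration and are not necessarily those analyzed in the thesis.

import numpy as np

def soft_threshold(X, t):
    # Entrywise proximal operator of t * ||X||_1.
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def admm_factorization(M, r, lam=0.1, rho=1.0, n_iter=300, seed=0):
    # Schematic ADMM for 0.5*||M - U Z^T||_F^2 + lam*||V||_1  s.t.  Z = V.
    m, n = M.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, r))
    Z = rng.standard_normal((n, r))
    V, W = Z.copy(), np.zeros((n, r))      # W is the scaled dual variable
    I = np.eye(r)
    for _ in range(n_iter):
        # U-step: least squares in U for fixed Z (small ridge keeps the solve well posed).
        U = np.linalg.solve(Z.T @ Z + 1e-8 * I, Z.T @ M.T).T
        # Z-step: least squares coupling the data fit and the consensus penalty.
        Z = np.linalg.solve(U.T @ U + rho * I, U.T @ M + rho * (V - W).T).T
        # V-step: proximal step on the l1 regularizer.
        V = soft_threshold(Z + W, lam / rho)
        # Dual ascent on the constraint Z = V.
        W = W + Z - V
    return U, V

Each subproblem above has a closed-form solution, which is part of what makes ADMM attractive for regularized factorization models; the Łojasiewicz inequality is the ingredient that governs how fast such iterations converge locally.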

Notes
School code: 1307

Subject Heading
Operations research.
 
Industrial engineering.
 
Mathematics.

Added Corporate Entry
The Chinese University of Hong Kong (Hong Kong).

Electronic Access
http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:10904548


Call Number: XX(697043.1)
Item Number: 697043-1001
Shelf Location: ProQuest E-Thesis Collection