Title:
Structured Low-rank Matrix Approximation in Signal Processing: Semidefinite Formulations and Entropic First-order Methods
Author:
Chao, Hsiao-Han, author.
ISBN:
9780438068841
Physical Description:
1 electronic resource (151 pages)
General Note:
Source: Dissertation Abstracts International, Volume: 79-10(E), Section: B.
Advisor: Lieven Vandenberghe. Committee members: Tetsuya Iwasaki; Paulo Tabuada; Kung Yao.
Abstract:
Applications of semidefinite optimization in signal processing are often derived from the Kalman--Yakubovich--Popov lemma and its extensions, which give sum-of-squares theorems for nonnegative trigonometric polynomials and generalized polynomials. The dual semidefinite programs involve optimization over positive semidefinite matrices with Toeplitz structure or extensions of it. In recent applications, these techniques have been used in continuous-domain sparse signal approximation; such applications are commonly referred to as super-resolution, gridless compressed sensing, or continuous 1-norm (total-variation norm) minimization. The semidefinite formulations of these problems introduce a large number of auxiliary variables and are expensive to solve using general-purpose or even customized interior-point solvers.
The thesis is divided into two parts. As a first contribution, we extend the semidefinite penalty formulations used in super-resolution applications to more general types of structured low-rank matrix approximation. Penalty functions for structured symmetric and nonsymmetric matrices are discussed. The connection via duality between these penalty functions and the (generalized) Kalman--Yakubovich--Popov lemma from linear system theory is clarified, leading to a more systematic proof of the equivalent semidefinite formulations. In the second part of the thesis, we propose a new class of efficient first-order splitting methods for optimization over the cone of nonnegative trigonometric polynomials, based on an appropriate choice of generalized distance function: the Itakura--Saito distance, which is the Bregman distance defined by the negative entropy function. This choice is motivated by the fact that the associated generalized projection onto the set of normalized nonnegative trigonometric polynomials can be computed at a cost roughly quadratic in the degree of the polynomial. This compares favorably with the cubic per-iteration complexity of standard first-order methods (the cost of a Euclidean projection onto the positive semidefinite cone) and of customized interior-point solvers. The quadratic complexity is confirmed by numerical experiments with Auslender and Teboulle's accelerated proximal gradient method for Bregman distances.
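To make the distance mentioned in the abstract concrete, the following is a minimal numerical sketch (not taken from the thesis itself) verifying that the Bregman distance generated by the negative (Burg) entropy phi(x) = -sum_i log x_i coincides with the closed-form Itakura--Saito distance. All function names and sample values here are illustrative.

```python
import math

def bregman_distance(phi, grad_phi, x, y):
    """Generic Bregman distance D(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    inner = sum(g * (xi - yi) for g, xi, yi in zip(grad_phi(y), x, y))
    return phi(x) - phi(y) - inner

def itakura_saito(x, y):
    """Closed-form Itakura-Saito distance sum_i (x_i/y_i - log(x_i/y_i) - 1)."""
    return sum(xi / yi - math.log(xi / yi) - 1.0 for xi, yi in zip(x, y))

# Negative (Burg) entropy and its gradient; its Bregman distance is Itakura-Saito.
phi = lambda x: -sum(math.log(xi) for xi in x)
grad_phi = lambda x: [-1.0 / xi for xi in x]

x = [1.0, 2.0, 0.5]
y = [0.8, 1.5, 0.7]

assert math.isclose(bregman_distance(phi, grad_phi, x, y), itakura_saito(x, y))
```

The identity follows by expanding the Bregman definition: -log x_i + log y_i + (x_i - y_i)/y_i = x_i/y_i - log(x_i/y_i) - 1 term by term, which is what the assertion checks numerically.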
Notes:
School code: 0031
Holdings:
Call Number | Item Number | Shelf Location | Location / Status / Due Date
---|---|---|---
XX(682812.1) | 682812-1001 | ProQuest E-Thesis Collection | On Order