
Faculty of Science

Department of Mathematics

Seminar for Numerical Mathematics and Scientific Computing

Details of the selected talk:
Seminar: Seminar for Numerical Mathematics and Scientific Computing
Talk title: Parallel implementations of Riemannian conjugate gradient methods for joint approximate diagonalization
Speaker: Nela Bosner
Time: 08.02.2024 13:15
Lecture room: 201
Type: Original research
Description: Joint approximate diagonalization (JAD) has many applications, such as blind source separation, parameter identification in exponential sums, and the canonical polyadic decomposition of tensors used in chemometrics, telecommunications, psychometrics, data mining, and machine learning, among others. In our work we propose two numerical methods for computing the JAD, based on Riemannian optimization on two different matrix manifolds, with emphasis on their numerical properties and efficiency.

Our goal is to solve the following problem efficiently: given a set of symmetric matrices $A^{(1)}$, $A^{(2)}$, \ldots , $A^{(m)}\in \mathbb{R}^{n\times n}$, find an orthogonal or nonsingular $X\in \mathbb{R}^{n\times n}$ such that $X^{T}A^{(p)}X=D^{(p)}$ for all $p=1,\ldots ,m$, where the $D^{(p)}$ are either diagonal, or as diagonal as possible according to some criterion. JAD is formulated as the minimization problem $\min_{X\in \mathcal{M}}F(X)$, where $F$ measures the diagonality of the transformed input matrices and $\mathcal{M}$ is an appropriate manifold. Our goal was to choose an approach that lends itself to an efficient parallel implementation, especially for larger dimension $n$ and a large number of input matrices $m$. For that reason, we applied and modified the conjugate gradient method (CG) on two matrix manifolds: the Stiefel manifold (the orthogonal group) and the oblique manifold (matrices with unit column norms). We focused in particular on variants of the method with better convergence properties but higher numerical complexity.

We implemented many variants of the method with different complexities, and all tasks in our implementations are explicitly modified and parallelized. The purpose of all modifications is to decrease the operation count and increase efficiency; the forms of the objective function and of the manifolds were very important in that process. Numerical experiments confirmed that our modified implementations of the conjugate gradient method on the two matrix manifolds are more efficient than the original versions, in particular for large $n$ and $m$. Moreover, we give a recommendation for the most efficient variant of the algorithm for solving the JAD problem.
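For concreteness, here is a minimal numerical sketch of the building blocks such a method combines on the orthogonal group, assuming the common off-diagonal Frobenius-norm diagonality measure $F(X)=\sum_{p}\|\mathrm{off}(X^{T}A^{(p)}X)\|_{F}^{2}$; the objective, retraction, and CG variants used in the talk may differ, and the helper names (`off`, `jad_objective`, `retract_qr`) are purely illustrative.

```python
# Minimal sketch of JAD ingredients on the orthogonal group (an assumption;
# the talk's actual objective, retraction, and CG variants may differ).
import numpy as np

def off(M):
    """Return M with its diagonal zeroed out."""
    return M - np.diag(np.diag(M))

def jad_objective(X, As):
    """F(X) = sum_p ||off(X^T A^(p) X)||_F^2, a standard diagonality measure."""
    return sum(np.linalg.norm(off(X.T @ A @ X), 'fro')**2 for A in As)

def euclidean_gradient(X, As):
    """Euclidean gradient of F: 4 * sum_p A^(p) X off(X^T A^(p) X)."""
    return 4 * sum(A @ X @ off(X.T @ A @ X) for A in As)

def riemannian_gradient(X, G):
    """Project the Euclidean gradient G onto the tangent space at X
    (for orthogonal X this equals X * skew(X^T G))."""
    XtG = X.T @ G
    return G - X @ (XtG + XtG.T) / 2

def retract_qr(X, V):
    """QR-based retraction of the tangent step V back onto the manifold."""
    Q, R = np.linalg.qr(X + V)
    return Q * np.sign(np.diag(R))  # fix column signs for a smooth retraction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 5, 3
    # Symmetric test matrices sharing an exact orthogonal diagonalizer Q.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    As = [Q @ np.diag(rng.standard_normal(n)) @ Q.T for _ in range(m)]
    print(jad_objective(np.eye(n), As))  # nonzero at the identity
    print(jad_objective(Q, As))          # ~ 0 at the joint diagonalizer
```

A full Riemannian CG iteration would combine these pieces with a line search and a conjugacy formula (e.g. Fletcher-Reeves or Polak-Ribiere) applied after transporting the previous search direction; kernels of this kind, dominated by dense matrix products over all $m$ input matrices, are natural targets for the parallelization discussed in the talk.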