Global Tangent Spaces

In GeometricMachineLearning standard neural network optimizers are generalized to homogeneous spaces by leveraging the special structure of the tangent spaces of this class of manifolds. When we introduced homogeneous spaces we already saw that every tangent space to a homogeneous space $T_Y\mathcal{M}$ is of the form:

\[ T_Y\mathcal{M} = \mathfrak{g} \cdot Y := \{AY: A\in{}\mathfrak{g}\}.\]

We then have a decomposition of $\mathfrak{g}$ into a vertical part $\mathfrak{g}^{\mathrm{ver}, Y}$ and a horizontal part $\mathfrak{g}^{\mathrm{hor}, Y}$; the horizontal part is isomorphic to $T_Y\mathcal{M}$.

We now identify a special element $E \in \mathcal{M}$ and designate the horizontal component $\mathfrak{g}^{\mathrm{hor}, E}$ as our global tangent space. We refer to this global tangent space as $\mathfrak{g}^\mathrm{hor}$. We can then find a transformation from any $\mathfrak{g}^{\mathrm{hor}, Y}$ to $\mathfrak{g}^\mathrm{hor}$ and vice versa (these spaces are isomorphic).
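To make the vertical-horizontal splitting concrete, the following plain-Julia sketch (not GeometricMachineLearning code) performs it for $\mathfrak{g} = \mathfrak{so}(N)$ at the element $E$ that we introduce below for the Stiefel manifold; there the vertical part is precisely the lower-right $(N - n)\times(N - n)$ block:

using LinearAlgebra

N, n = 6, 2
E = vcat(Matrix{Float64}(I, n, n), zeros(N - n, n)) # the distinct element [𝕀; 𝕆] introduced below

W = randn(N, N); W = W - W' # a random element of 𝔤 = 𝔰𝔬(N)
W_ver = zero(W); W_ver[(n + 1):N, (n + 1):N] = W[(n + 1):N, (n + 1):N] # vertical part: lower-right block
W_hor = W - W_ver # horizontal part

W_ver * E # ≈ 𝕆: the vertical part acts trivially on E
W_hor * E ≈ W * E # true: the horizontal part carries the whole tangent vector W ⋅ E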

Theorem

Let $A\in{}G$ be an element such that $AE = Y$. Then we have

\[A^{-1}\cdot\mathfrak{g}^{\mathrm{hor},Y}\cdot{}A = \mathfrak{g}^\mathrm{hor},\]

i.e. for every element $B\in\mathfrak{g}^\mathrm{hor}$ we can find a $B^Y \in \mathfrak{g}^{\mathrm{hor},Y}$ s.t. $B = A^{-1}B^YA$ (and vice-versa).

Proof

We first show that for every $B^Y\in\mathfrak{g}^{\mathrm{hor},Y}$ the element $A^{-1}B^YA$ is in $\mathfrak{g}^{\mathrm{hor}}$. First note that $A^{-1}B^YA\in\mathfrak{g}$ by a fundamental theorem of Lie group theory (closedness of the Lie algebra under adjoint action). Now assume that $A^{-1}B^YA$ is not fully contained in $\mathfrak{g}^\mathrm{hor}$, i.e. it also has a vertical component. We would then lose information when performing $A^{-1}B^YA \mapsto A^{-1}B^YAE = A^{-1}B^YY$ (the vertical component is mapped to zero), but this contradicts the fact that $B^Y\in\mathfrak{g}^{\mathrm{hor},Y}.$ We now have to prove that every $B\in\mathfrak{g}^\mathrm{hor}$ is the image of some element of $\mathfrak{g}^{\mathrm{hor}, Y}$. By an argument similar to the one above we can show that $ABA^{-1}\in\mathfrak{g}^{\mathrm{hor}, Y}$, and this element maps to $B$. Proving that the map is injective is now trivial.

We should note that we have written all Lie group and Lie algebra actions as simple matrix multiplications, like $AE = Y$. For some Lie groups and Lie algebras different notations have to be used [9]. These Lie groups are however not relevant for what we do in GeometricMachineLearning, and we stick to regular matrix notation.

Global Sections

Note that the theorem above requires us to find an element $A\in{}G$ such that $AE = Y$. If we can find a mapping $\lambda:\mathcal{M}\to{}G$ that assigns such an element to every $Y\in\mathcal{M}$, we call it a global section.

Definition

We call a mapping $\lambda:\mathcal{M} \to G$ from a homogeneous space to its associated Lie group a global section if it satisfies:

\[\lambda(Y)E = Y,\]

where $E$ is the distinct element of the homogeneous space.

Note that global sections are in general not unique because the dimension of $G$ is typically greater than that of $\mathcal{M}$. We show how to construct such global sections for the Stiefel and the Grassmann manifolds below.
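We can observe this non-uniqueness directly. As discussed further down, GlobalSection completes $Y$ with randomly drawn columns, so two sections of the same $Y$ generally differ while both satisfying $\lambda(Y)E = Y$. A minimal sketch (we call Matrix on a GlobalSection purely for illustration here; see the performance remark below):

using GeometricMachineLearning

N, n = 6, 3
Y = rand(StiefelManifold, N, n)
E = StiefelProjection(N, n)

λ₁ = Matrix(GlobalSection(Y))
λ₂ = Matrix(GlobalSection(Y))

λ₁ * E ≈ Y.A && λ₂ * E ≈ Y.A # true: both satisfy λ(Y)E = Y
λ₁ ≈ λ₂                      # false in general: the section is not unique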

The Global Tangent Space for the Stiefel Manifold

We now discuss the specific form of the global tangent space for the Stiefel manifold. We choose the distinct element[1] $E$ to have an especially simple form (this matrix can be built by calling StiefelProjection):

\[E = \begin{bmatrix} \mathbb{I}_n \\ \mathbb{O} \end{bmatrix}\in{}St(n, N).\]

Based on this, elements of the vector space $\mathfrak{g}^{\mathrm{hor}, E} =: \mathfrak{g}^{\mathrm{hor}}$ are of the form:

\[\begin{pmatrix} A & -B^T \\ B & \mathbb{O} \end{pmatrix},\]

where $A$ is a skew-symmetric matrix of size $n\times{}n$ and $B$ is an arbitrary matrix of size $(N - n)\times{}n$.

Arrays of type $\mathfrak{g}^{\mathrm{hor}, E}$ are implemented in GeometricMachineLearning under the name StiefelLieAlgHorMatrix.

We can call this with e.g. a skew-symmetric matrix $A$ and an arbitrary matrix $B$:

using GeometricMachineLearning

N, n = 10, 4

A = rand(SkewSymMatrix, n)
4×4 SkewSymMatrix{Float64, Vector{Float64}}:
 0.0       -0.311448  -0.121148   -0.38669
 0.311448   0.0       -0.20453    -0.018572
 0.121148   0.20453    0.0        -0.0721807
 0.38669    0.018572   0.0721807   0.0
B = rand(N - n, n)
6×4 Matrix{Float64}:
 0.121813   0.160222  0.540529  0.492297
 0.137766   0.904766  0.546787  0.100781
 0.0508083  0.991961  0.783961  0.878875
 0.38644    0.740841  0.813172  0.959667
 0.846458   0.229042  0.83098   0.634652
 0.835334   0.96917   0.656091  0.0648724
B1 = StiefelLieAlgHorMatrix(A, B, N, n)
10×10 StiefelLieAlgHorMatrix{Float64, SkewSymMatrix{Float64, Vector{Float64}}, Matrix{Float64}}:
 0.0        -0.311448  -0.121148   …  -0.38644   -0.846458  -0.835334
 0.311448    0.0       -0.20453       -0.740841  -0.229042  -0.96917
 0.121148    0.20453    0.0           -0.813172  -0.83098   -0.656091
 0.38669     0.018572   0.0721807     -0.959667  -0.634652  -0.0648724
 0.121813    0.160222   0.540529       0.0        0.0        0.0
 0.137766    0.904766   0.546787   …   0.0        0.0        0.0
 0.0508083   0.991961   0.783961       0.0        0.0        0.0
 0.38644     0.740841   0.813172       0.0        0.0        0.0
 0.846458    0.229042   0.83098        0.0        0.0        0.0
 0.835334    0.96917    0.656091       0.0        0.0        0.0

We can also call it with a matrix of shape $N\times{}N$:

B2 = Matrix(B1) # note that this does not have any special structure

StiefelLieAlgHorMatrix(B2, n)
10×10 StiefelLieAlgHorMatrix{Float64, SkewSymMatrix{Float64, Vector{Float64}}, SubArray{Float64, 2, Matrix{Float64}, Tuple{UnitRange{Int64}, UnitRange{Int64}}, false}}:
 0.0        -0.311448  -0.121148   …  -0.38644   -0.846458  -0.835334
 0.311448    0.0       -0.20453       -0.740841  -0.229042  -0.96917
 0.121148    0.20453    0.0           -0.813172  -0.83098   -0.656091
 0.38669     0.018572   0.0721807     -0.959667  -0.634652  -0.0648724
 0.121813    0.160222   0.540529       0.0        0.0        0.0
 0.137766    0.904766   0.546787   …   0.0        0.0        0.0
 0.0508083   0.991961   0.783961       0.0        0.0        0.0
 0.38644     0.740841   0.813172       0.0        0.0        0.0
 0.846458    0.229042   0.83098        0.0        0.0        0.0
 0.835334    0.96917    0.656091       0.0        0.0        0.0

Or we can call it with a matrix of shape $N\times{}n$:

E = StiefelProjection(N, n)
10×4 StiefelProjection{Float64, Matrix{Float64}}:
 1.0  0.0  0.0  0.0
 0.0  1.0  0.0  0.0
 0.0  0.0  1.0  0.0
 0.0  0.0  0.0  1.0
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0
B3 = B1 * E

StiefelLieAlgHorMatrix(B3, n)
10×10 StiefelLieAlgHorMatrix{Float64, SkewSymMatrix{Float64, Vector{Float64}}, SubArray{Float64, 2, Matrix{Float64}, Tuple{UnitRange{Int64}, UnitRange{Int64}}, false}}:
 0.0        -0.311448  -0.121148   …  -0.38644   -0.846458  -0.835334
 0.311448    0.0       -0.20453       -0.740841  -0.229042  -0.96917
 0.121148    0.20453    0.0           -0.813172  -0.83098   -0.656091
 0.38669     0.018572   0.0721807     -0.959667  -0.634652  -0.0648724
 0.121813    0.160222   0.540529       0.0        0.0        0.0
 0.137766    0.904766   0.546787   …   0.0        0.0        0.0
 0.0508083   0.991961   0.783961       0.0        0.0        0.0
 0.38644     0.740841   0.813172       0.0        0.0        0.0
 0.846458    0.229042   0.83098        0.0        0.0        0.0
 0.835334    0.96917    0.656091       0.0        0.0        0.0

We now demonstrate how to map from an element of $\mathfrak{g}^{\mathrm{hor}, Y}$ to an element of $\mathfrak{g}^\mathrm{hor}$:

N, n = 10, 5

Y = rand(StiefelManifold, N, n)
Δ = rgrad(Y, rand(N, n))
ΩΔ = GeometricMachineLearning.Ω(Y, Δ)
λY = GlobalSection(Y)

λY_mat = Matrix(λY)

round.(λY_mat' * ΩΔ * λY_mat; digits = 3)
10×10 Matrix{Float64}:
  0.0    -0.611   0.421   1.362   1.367  …   0.974   0.115  -0.094   0.587
  0.611   0.0     0.172   1.119   1.055      1.052  -0.054  -0.512   0.904
 -0.421  -0.172   0.0     1.215   0.306      0.84    0.297  -0.112   0.641
 -1.362  -1.119  -1.215  -0.0    -0.427      0.436   0.144  -0.656   0.717
 -1.367  -1.055  -0.306   0.427  -0.0        1.221   0.133  -0.508   0.669
 -0.471  -0.92   -0.318  -0.331  -0.405  …   0.0    -0.0     0.0    -0.0
 -0.974  -1.052  -0.84   -0.436  -1.221      0.0    -0.0     0.0    -0.0
 -0.115   0.054  -0.297  -0.144  -0.133      0.0    -0.0    -0.0     0.0
  0.094   0.512   0.112   0.656   0.508     -0.0     0.0    -0.0    -0.0
 -0.587  -0.904  -0.641  -0.717  -0.669      0.0    -0.0     0.0     0.0

Performing this computation directly is however computationally very inefficient, and the user is strongly discouraged from calling Matrix on an instance of GlobalSection. The better option is calling global_rep:

_round(global_rep(λY, Δ); digits = 3)
10×10 StiefelLieAlgHorMatrix{Float64, SkewSymMatrix{Float64, Vector{Float64}}, Matrix{Float64}}:
  0.0    -0.611   0.421   1.362   1.367  0.471  0.974   0.115  -0.094  0.587
  0.611   0.0     0.172   1.119   1.055  0.92   1.052  -0.054  -0.512  0.904
 -0.421  -0.172   0.0     1.215   0.306  0.318  0.84    0.297  -0.112  0.641
 -1.362  -1.119  -1.215   0.0    -0.427  0.331  0.436   0.144  -0.656  0.717
 -1.367  -1.055  -0.306   0.427   0.0    0.405  1.221   0.133  -0.508  0.669
 -0.471  -0.92   -0.318  -0.331  -0.405  0.0    0.0     0.0     0.0    0.0
 -0.974  -1.052  -0.84   -0.436  -1.221  0.0    0.0     0.0     0.0    0.0
 -0.115   0.054  -0.297  -0.144  -0.133  0.0    0.0     0.0     0.0    0.0
  0.094   0.512   0.112   0.656   0.508  0.0    0.0     0.0     0.0    0.0
 -0.587  -0.904  -0.641  -0.717  -0.669  0.0    0.0     0.0     0.0    0.0
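Assuming the variables from the two snippets above are still in scope, a quick sanity check confirms that both computations yield the same element of $\mathfrak{g}^\mathrm{hor}$:

using LinearAlgebra: norm

norm(Matrix(global_rep(λY, Δ)) - λY_mat' * ΩΔ * λY_mat) # ≈ 0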

Internally GlobalSection calls the function GeometricMachineLearning.global_section which does the following for the Stiefel manifold:

using LinearAlgebra: qr

A = randn(N, N - n) # draw (N - n) columns randomly (or the gpu equivalent)
A = A - Y * (Y' * A) # subtract the part spanned by the columns of Y
Y⟂ = qr(A).Q[1:N, 1:(N - n)] # keep the first (N - n) columns of the Q factor

So we draw $(N - n)$ new columns randomly, subtract the part that is spanned by the columns of $Y$, and then perform a $QR$ decomposition on the resulting matrix. The first $(N - n)$ columns of the $Q$ factor of this decomposition form a matrix that is orthogonal to $Y$; this matrix is typically referred to as $Y_\perp$ [6, 10, 11]. We can easily check that this $Y_\perp$ is indeed orthogonal to $Y$:
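A minimal plain-Julia check of this construction (with a random orthonormal matrix standing in for an element of the Stiefel manifold):

using LinearAlgebra: I, qr, norm

N, n = 10, 4
Y = Matrix(qr(randn(N, n)).Q) # a random matrix with orthonormal columns

A = randn(N, N - n)
A = A - Y * (Y' * A) # remove the components along the columns of Y
Y⟂ = qr(A).Q[1:N, 1:(N - n)]

norm(Y' * Y⟂), norm(Y⟂' * Y⟂ - I) # both ≈ 0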

Theorem

The matrix $Y_\perp$ constructed with the above algorithm satisfies

\[Y^TY_\perp = \mathbb{O},\]

and

\[(Y_\perp)^TY_\perp = \mathbb{I},\]

i.e. all the columns in the big matrix $[Y, Y_\perp]\in\mathbb{R}^{N\times{}N}$ are mutually orthonormal; the matrix is therefore an element of $O(N)$ (and can be made an element of $SO(N)$ by flipping the sign of one column of $Y_\perp$ if necessary).

Proof

The second property is trivially satisfied because the $Q$ component of a $QR$ decomposition is an orthogonal matrix. For the first property note that $Y^TQR = \mathbb{O}$ because we have subtracted the $Y$ component from the matrix on which we performed the decomposition. The matrix $R\in\mathbb{R}^{N\times{}(N-n)}$ further has the property $[R]_{ij} = 0$ for $i > j$, and we have

\[(Y^TQ)R = [r_{11}(Y^TQ)_{\bullet{}1}, r_{12}(Y^TQ)_{\bullet{}1} + r_{22}(Y^TQ)_{\bullet{}2}, \ldots, \sum_{i=1}^{N-n}r_{i(N-n)}(Y^TQ)_{\bullet{}i}].\]

All the coefficients $r_{ii}$ are non-zero because the matrix we performed the $QR$ decomposition on has full rank. So if $(Y^TQ)R$ is zero, the first column of the product forces $(Y^TQ)_{\bullet{}1} = 0$, the second column then forces $(Y^TQ)_{\bullet{}2} = 0$, and so on; hence the first $N - n$ columns of $Y^TQ$, i.e. $Y^TY_\perp$, also have to be zero.

We now discuss the global tangent space for the Grassmann manifold. This is similar to the Stiefel case.

Global Tangent Space for the Grassmann Manifold

In the case of the Grassmann manifold we construct the global tangent space with respect to the distinct element $\mathcal{E}=\mathrm{span}(E)\in{}Gr(n,N)$, where $E$ is again the same matrix as for the Stiefel manifold.

The tangent space $T_\mathcal{E}Gr(n,N)$ can be represented through matrices of the form:

\[\begin{pmatrix} 0 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & 0 \\ b_{11} & \cdots & b_{1n} \\ \vdots & & \vdots \\ b_{(N-n)1} & \cdots & b_{(N-n)n} \end{pmatrix}.\]

This representation is based on the identification $T_\mathcal{E}Gr(n,N)\to{}T_E\mathcal{S}_E$ that was discussed in the section on the Grassmann manifold[2]. We use the following notation:

\[\mathfrak{g}^\mathrm{hor} = \mathfrak{g}^{\mathrm{hor},\mathcal{E}} = \left\{\begin{pmatrix} 0 & -B^T \\ B & 0 \end{pmatrix}: \text{$B$ arbitrary}\right\}.\]

This is equivalent to the horizontal component of $\mathfrak{g}$ for the Stiefel manifold in the case where $A$ is zero. This is a reflection of the rotational invariance of the Grassmann manifold: the skew-symmetric matrices $A$ are connected to the group of rotations $O(n)$, which is factored out in the Grassmann manifold $Gr(n,N)\simeq{}St(n,N)/O(n)$. In GeometricMachineLearning we thus treat the Grassmann manifold as being embedded in the Stiefel manifold. In [11] this view of the Grassmann manifold as a quotient space of the Stiefel manifold is likewise considered important for feasibility in practical computations.
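Such elements can be built with GrassmannLieAlgHorMatrix, the Grassmann analogue of StiefelLieAlgHorMatrix (documented under Library Functions below); a minimal sketch:

using GeometricMachineLearning

N, n = 6, 2
B = rand(N - n, n)
GrassmannLieAlgHorMatrix(B, N, n) # the upper-left n×n block (the A block in the Stiefel case) is zero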

Library Functions

GeometricMachineLearning.StiefelLieAlgHorMatrixType
StiefelLieAlgHorMatrix(A::SkewSymMatrix{T}, B::AbstractMatrix{T}, N::Integer, n::Integer) where T

Build an instance of StiefelLieAlgHorMatrix based on a skew-symmetric matrix A and an arbitrary matrix B.

StiefelLieAlgHorMatrix is the horizontal component of the Lie algebra of skew-symmetric matrices (with respect to the canonical metric). The projection here is: $\pi:S \to SE$ where

\[E = \begin{pmatrix} \mathbb{I}_{n} \\ \mathbb{O}_{(N-n)\times{}n} \end{pmatrix}.\]

The matrix $E$ is implemented under StiefelProjection in GeometricMachineLearning.

An element of StiefelLieAlgHorMatrix takes the form:

\[\begin{pmatrix} A & -B^T \\ B & \mathbb{O} \end{pmatrix},\]

where $A$ is skew-symmetric (this is SkewSymMatrix in GeometricMachineLearning).

Also see GrassmannLieAlgHorMatrix.

GeometricMachineLearning.StiefelLieAlgHorMatrixMethod
StiefelLieAlgHorMatrix(D::AbstractMatrix, n::Integer)

Take a big matrix as input and build an instance of StiefelLieAlgHorMatrix belonging to the StiefelManifold $St(n, N)$ where $N$ is the number of rows of D.

If the constructor is called with a big $N\times{}N$ matrix, then the projection is performed the following way:

\[\begin{pmatrix} A & B_1 \\ B_2 & D \end{pmatrix} \mapsto \begin{pmatrix} \mathrm{skew}(A) & -B_2^T \\ B_2 & \mathbb{O} \end{pmatrix}.\]

The operation $\mathrm{skew}:\mathbb{R}^{n\times{}n}\to\mathcal{S}_\mathrm{skew}(n)$ is the skew-symmetrization operation. This is equivalent to calling SkewSymMatrix with an $n\times{}n$ matrix.

This can also be seen as the operation:

\[D \mapsto \Omega(E, DE) = \mathrm{skew}\left(2 \left(\mathbb{I} - \frac{1}{2} E E^T \right) DE E^T\right).\]

Also see GeometricMachineLearning.Ω.

GeometricMachineLearning.GrassmannLieAlgHorMatrixType
GrassmannLieAlgHorMatrix(B::AbstractMatrix{T}, N::Integer, n::Integer) where T

Build an instance of GrassmannLieAlgHorMatrix based on an arbitrary matrix B of size $(N-n)\times{}n$.

GrassmannLieAlgHorMatrix is the horizontal component of the Lie algebra of skew-symmetric matrices (with respect to the canonical metric). The projection here is: $\pi:S \to SE/\sim$ where

\[E = \begin{pmatrix} \mathbb{I}_{n} \\ \mathbb{O}_{(N-n)\times{}n} \end{pmatrix},\]

and the equivalence relation is

\[V_1 \sim V_2 \iff \exists A\in\mathcal{S}_\mathrm{skew}(n) \text{ such that } V_2 = V_1 + \begin{pmatrix} A \\ \mathbb{O} \end{pmatrix}.\]

An element of GrassmannLieAlgHorMatrix takes the form:

\[\begin{pmatrix} \bar{\mathbb{O}} & -B^T \\ B & \mathbb{O} \end{pmatrix},\]

where $\bar{\mathbb{O}}\in\mathbb{R}^{n\times{}n}$ and $\mathbb{O}\in\mathbb{R}^{(N - n)\times{}(N - n)}.$

GeometricMachineLearning.GrassmannLieAlgHorMatrixMethod
GrassmannLieAlgHorMatrix(D::AbstractMatrix, n::Integer)

Take a big matrix as input and build an instance of GrassmannLieAlgHorMatrix belonging to the GrassmannManifold $Gr(n, N)$ where $N$ is the number of rows of D.

If the constructor is called with a big $N\times{}N$ matrix, then the projection is performed the following way:

\[\begin{pmatrix} A & B_1 \\ B_2 & D \end{pmatrix} \mapsto \begin{pmatrix} \bar{\mathbb{O}} & -B_2^T \\ B_2 & \mathbb{O} \end{pmatrix}.\]

This can also be seen as the operation:

\[D \mapsto \Omega(E, DE - EE^TDE),\]

where $\Omega$ is the horizontal lift GeometricMachineLearning.Ω.

GeometricMachineLearning.GlobalSectionType
GlobalSection(Y::AbstractMatrix)

Construct a global section for Y.

A global section $\lambda$ is a mapping from a homogeneous space $\mathcal{M}$ to the corresponding Lie group $G$ such that

\[\lambda(Y)E = Y,\]

where $E$ is the distinct element of the homogeneous space.

Also see apply_section and global_rep.

Implementation

For an implementation of GlobalSection for a custom array (especially manifolds), the function global_section has to be generalized.

GeometricMachineLearning.global_sectionFunction
global_section(Y::StiefelManifold)

Compute a matrix of size $N\times(N-n)$ whose columns are orthogonal to the columns in Y.

This matrix is also called $Y_\perp$ [6, 10, 11].

Examples

using GeometricMachineLearning
using GeometricMachineLearning: global_section
import Random

Random.seed!(123)

Y = StiefelManifold([1. 0.; 0. 1.; 0. 0.; 0. 0.])

round.(Matrix(global_section(Y)); digits = 3)

# output

4×2 Matrix{Float64}:
 0.0    -0.0
 0.0     0.0
 0.936  -0.353
 0.353   0.936

Further note that we convert the QRCompactWYQ object to a Matrix before we display it.

Implementation

The implementation is done with a QR decomposition (LinearAlgebra.qr!). Internally we do:

A = randn(N, N - n) # or the gpu equivalent
A = A - Y.A * (Y.A' * A)
qr!(A).Q
global_section(Y::GrassmannManifold)

Compute a matrix of size $N\times(N-n)$ whose columns are orthogonal to the columns in Y.

The method global_section for the Grassmann manifold is equivalent to that for the StiefelManifold (we represent the Grassmann manifold as an embedding in the Stiefel manifold).

See the documentation for global_section(Y::StiefelManifold{T}) where T.

GeometricMachineLearning.global_repFunction
global_rep(λY::GlobalSection{T, AT}, Δ::AbstractMatrix{T}) where {T, AT<:StiefelManifold{T}}

Express Δ (an element of the tangent space at Y) as an instance of StiefelLieAlgHorMatrix.

This maps an element from $T_Y\mathcal{M}$ to an element of $\mathfrak{g}^\mathrm{hor}$.

These two spaces are isomorphic, and the isomorphism is established through $\lambda(Y)\in{}G$ via:

\[T_Y\mathcal{M} \to \mathfrak{g}^{\mathrm{hor}}, \Delta \mapsto \lambda(Y)^{-1}\Omega(Y, \Delta)\lambda(Y).\]

Also see GeometricMachineLearning.Ω.

Examples

using GeometricMachineLearning
using GeometricMachineLearning: _round
import Random 

Random.seed!(123)

Y = rand(StiefelManifold, 6, 3)
Δ = rgrad(Y, randn(6, 3))
λY = GlobalSection(Y)

_round(global_rep(λY, Δ); digits = 3)

# output

6×6 StiefelLieAlgHorMatrix{Float64, SkewSymMatrix{Float64, Vector{Float64}}, Matrix{Float64}}:
  0.0     0.679   1.925   0.981  -2.058   0.4
 -0.679   0.0     0.298  -0.424   0.733  -0.919
 -1.925  -0.298   0.0    -1.815   1.409   1.085
 -0.981   0.424   1.815   0.0     0.0     0.0
  2.058  -0.733  -1.409   0.0     0.0     0.0
 -0.4     0.919  -1.085   0.0     0.0     0.0

Implementation

The function global_rep does not in fact perform the entire map $\lambda(Y)^{-1}\Omega(Y, \Delta)\lambda(Y)$ but only

\[\Delta \mapsto \mathrm{skew}(Y^T\Delta),\]

to get the small skew-symmetric matrix and

\[\Delta \mapsto (\lambda(Y)_{[1:N, (n+1):N]}^T \Delta)_{[1:(N-n), 1:n]},\]

for the arbitrary matrix.

global_rep(λY::GlobalSection{T, AT}, Δ::AbstractMatrix{T}) where {T, AT<:GrassmannManifold{T}}

Express Δ (an element of the tangent space at Y) as an instance of GrassmannLieAlgHorMatrix.

The method global_rep for GrassmannManifold is similar to that for StiefelManifold.

Examples

using GeometricMachineLearning
using GeometricMachineLearning: _round
import Random 

Random.seed!(123)

Y = rand(GrassmannManifold, 6, 3)
Δ = rgrad(Y, randn(6, 3))
λY = GlobalSection(Y)

_round(global_rep(λY, Δ); digits = 3)

# output

6×6 GrassmannLieAlgHorMatrix{Float64, Matrix{Float64}}:
  0.0     0.0     0.0     0.981  -2.058   0.4
  0.0     0.0     0.0    -0.424   0.733  -0.919
  0.0     0.0     0.0    -1.815   1.409   1.085
 -0.981   0.424   1.815   0.0     0.0     0.0
  2.058  -0.733  -1.409   0.0     0.0     0.0
 -0.4     0.919  -1.085   0.0     0.0     0.0

References

[6]
P.-A. Absil, R. Mahony and R. Sepulchre. Riemannian geometry of Grassmann manifolds with a view on algorithmic computation. Acta Applicandae Mathematica 80, 199–220 (2004).
[10]
P.-A. Absil, R. Mahony and R. Sepulchre. Optimization algorithms on matrix manifolds (Princeton University Press, Princeton, New Jersey, 2008).
[11]
T. Bendokat, R. Zimmermann and P.-A. Absil. A Grassmann manifold handbook: Basic geometry and computational aspects, arXiv preprint arXiv:2011.13699 (2020).
[48]
B. Brantner. Generalizing Adam To Manifolds For Efficiently Training Transformers, arXiv preprint arXiv:2305.16901 (2023).
  • [1] We already introduced this special matrix together with the Stiefel manifold.
  • [2] We derived the following expression for the Riemannian gradient of the Grassmann manifold: $\mathrm{grad}_\mathcal{Y}^{Gr}L = \nabla_Y{}L - YY^T\nabla_YL$. The tangent space at the element $\mathcal{E}$ can thus be written as $\bar{B} - EE^T\bar{B}$, where $\bar{B}\in\mathbb{R}^{N\times{}n}$, and the matrices in this tangent space have the desired form.