PSTAT 234 (Fall 2025)
University of California, Santa Barbara
Blind source separation problem
\(X \in \mathbb{R}^{n\times p}\): data matrix (\(n\) observations, \(p\) variables)
Independent Component Analysis (ICA): \( X = W Y \)
The independent components matrix \(W\) (hopefully) represents the underlying signals
The matrix \(Y\) contains the mixing coefficients
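As a concrete illustration, here is a minimal NumPy sketch of the symmetric FastICA fixed-point iteration recovering two mixed signals. The sources, mixing matrix, and iteration count are illustrative choices, not from the slides, and the recovered components come back only up to sign and permutation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
t = np.linspace(0, 8, n)

# Two independent, non-Gaussian source signals (columns of S).
S = np.c_[np.sign(np.sin(3 * t)),      # square wave
          np.mod(t, 1.0) - 0.5]        # sawtooth
S -= S.mean(axis=0)

A = np.array([[1.0, 0.6],              # illustrative mixing matrix
              [0.5, 1.0]])
X = S @ A.T                            # observed mixtures, n x 2

# Whiten: rotate/scale X so its sample covariance is the identity.
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
Z = U * np.sqrt(n)                     # whitened data, n x 2

# FastICA fixed-point iteration with the tanh nonlinearity and
# symmetric decorrelation of the unmixing matrix B.
B = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(Z @ B.T)                                   # n x 2
    B_new = (G.T @ Z) / n - np.diag((1 - G**2).mean(axis=0)) @ B
    u, _, vt = np.linalg.svd(B_new)
    B = u @ vt                         # nearest orthogonal matrix

Y = Z @ B.T                            # recovered sources (up to sign/order)
```

Each column of `Y` should be strongly correlated with one of the true sources, which can be checked with `np.corrcoef`.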
Hypothetical Simulation Data
Partial ICA and PCA Results
Identifiability
Eigenfaces example data (Brunton and Kutz 2019)
SVD forms (Brunton and Kutz 2019)
\[ \begin{aligned} X_\text{tr} \approx \hat X_\text{tr} &= U_\text{tr} \Sigma_\text{tr} V_\text{tr}^* = U_\text{tr} W_\text{tr}^* \\ X_\text{ts} \stackrel{\text{?}}{\approx} \hat X_\text{ts} &= U_\text{tr} (U_\text{tr}^* X_\text{ts}) \\ \end{aligned} \] where \(U\), \(V\), and \(\Sigma\) are from SVD (hat or tilde variations), and \(W = \Sigma V^*\).
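The train/test reconstruction above can be sketched in NumPy. Random matrices stand in for the face images here (the dimensions are illustrative); the key step is projecting a test vector onto the leading left singular vectors of the training data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for training faces: each column of X_tr is a flattened image.
p, n_tr = 64, 30
X_tr = rng.standard_normal((p, n_tr))

# Economy SVD of the training data: X_tr = U_tr @ diag(s) @ Vt.
U_tr, s, Vt = np.linalg.svd(X_tr, full_matrices=False)

# Rank-r reconstruction of a test image: project onto the first r
# left singular vectors ("eigenfaces"), then map back.
r = 10
Ur = U_tr[:, :r]
x_ts = rng.standard_normal(p)          # a new test image
x_hat = Ur @ (Ur.T @ x_ts)             # \hat x = U_r (U_r^* x)
```

A test image far from the training column space (a dog, a cup) leaves a large residual `x_ts - x_hat`, which is why those reconstructions in the figures look poor.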
Eigenfaces and SVD (Brunton and Kutz 2019)
Face test image reconstruction (Brunton and Kutz 2019)
Dog test image reconstruction (Brunton and Kutz 2019)
Cup test image reconstruction (Brunton and Kutz 2019)
SVD is related to eigenvalue problem involving \(\boldsymbol{X X}^*\) and \(\boldsymbol{X}^* \boldsymbol{X}\):
\[ \begin{aligned} \boldsymbol{X X}^*&=\boldsymbol{U}\left[\begin{array}{c} \hat{\boldsymbol{\Sigma}} \\ \boldsymbol{0} \end{array}\right] \boldsymbol{V}^* \boldsymbol{V}\left[\begin{array}{ll} \hat{\boldsymbol{\Sigma}} & \boldsymbol{0} \end{array}\right] \boldsymbol{U}^*\\ &=\boldsymbol{U}\left[\begin{array}{cc} \hat{\boldsymbol{\Sigma}}^2 & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0} \end{array}\right] \boldsymbol{U}^* \\ \end{aligned} \]
\[ \begin{aligned} \mathbf{X X}^* \mathbf{U}&=\mathbf{U}\left[\begin{array}{cc} \hat{\boldsymbol{\Sigma}}^2 & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{array}\right] \\ \end{aligned} \]
\[ \begin{aligned} \boldsymbol{X}^* \boldsymbol{X}&=\boldsymbol{V}\left[\begin{array}{ll} \hat{\boldsymbol{\Sigma}} & \boldsymbol{0} \end{array}\right] \boldsymbol{U}^* \boldsymbol{U}\left[\begin{array}{c} \hat{\boldsymbol{\Sigma}} \\ \boldsymbol{0} \end{array}\right] \boldsymbol{V}^*\\ &=\boldsymbol{V} \hat{\boldsymbol{\Sigma}}^2 \boldsymbol{V}^*\\ \mathbf{X}^* \mathbf{X} \mathbf{V}&=\mathbf{V} \hat{\mathbf{\Sigma}}^2 \end{aligned} \]
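Both eigenvalue relations are easy to verify numerically. A small NumPy check, with an illustrative tall matrix (\(m > n\)), confirms that \(V\) diagonalizes \(X^* X\) with eigenvalues \(\hat\sigma_i^2\), while \(U\) diagonalizes \(X X^*\) with the same eigenvalues padded by zeros:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 6, 4                                # tall matrix, m > n
X = rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(X)                # full SVD: U is m x m, Vt is n x n

# X^* X V = V Sigma-hat^2 : eigenvectors V, eigenvalues s^2.
assert np.allclose(X.T @ X @ Vt.T, Vt.T * s**2)

# X X^* U = U diag(s^2, 0): first n eigenvalues are s^2, the rest 0.
lam = np.concatenate([s**2, np.zeros(m - n)])
assert np.allclose(X @ X.T @ U, U * lam)
```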
Orthogonal matrix \(Q\): all columns are pairwise orthogonal to each other
If \(Q\) is also orthonormal, \(Q\) is orthogonal and each column is of length 1
Therefore, if \(Q\) is square and orthonormal, \[ QQ^T = Q^TQ = I \]
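A quick numerical check: the \(Q\) factor from a QR factorization has orthonormal columns, so for a square \(Q\) both products give the identity.

```python
import numpy as np

rng = np.random.default_rng(3)

# QR factorization of a random square matrix yields an orthonormal Q.
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))

# For a square orthonormal Q, both products give the identity.
assert np.allclose(Q.T @ Q, np.eye(5))
assert np.allclose(Q @ Q.T, np.eye(5))

# Every column has unit length.
assert np.allclose(np.linalg.norm(Q, axis=0), 1.0)
```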
Vector Space (Subspace) in \(\Re^m\)
A non-empty set \(\mathcal{S} \subseteq \Re^m\) is called a vector space in \(\Re^m\) (or a subspace of \(\Re^m\)) if both of the following conditions are satisfied:
\(\boldsymbol{x} + \boldsymbol{y} \in \mathcal{S}\) for every \(\boldsymbol{x}, \boldsymbol{y} \in \mathcal{S}\) (closed under addition)
\(\alpha \boldsymbol{x} \in \mathcal{S}\) for every \(\boldsymbol{x} \in \mathcal{S}\) and every \(\alpha \in \Re^1\) (closed under scalar multiplication)
The above two criteria can be combined to say that a non-empty set \(\mathcal{S} \subseteq \Re^m\) is a subspace if \(\boldsymbol{x} + \alpha \boldsymbol{y} \in \mathcal{S}\) for every \(\boldsymbol{x}, \boldsymbol{y} \in \mathcal{S}\) and every \(\alpha\in\Re^1\).
Null Space of a Matrix \(\boldsymbol{A}\)
Let \(\boldsymbol{A}\) be an \(m \times n\) matrix in \(\Re^{m \times n}\). The null space of \(\boldsymbol{A}\) is defined as the set
\[ \mathcal{N}(\boldsymbol{A})=\left\{\boldsymbol{x} \in \Re^n: \boldsymbol{A} \boldsymbol{x}=\mathbf{0}\right\}. \]
Any member of the set \(\mathcal{N}(\boldsymbol{A})\) is an \(n \times 1\) vector, so \(\mathcal{N}(\boldsymbol{A})\) is a subset of \(\Re^n\).
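A basis for \(\mathcal{N}(\boldsymbol{A})\) can be computed from the SVD: the right singular vectors with zero singular value span the null space. A small NumPy sketch, using an illustrative \(2 \times 3\) matrix whose third column is the sum of the first two:

```python
import numpy as np

# Third column is the sum of the first two, so N(A) is nontrivial.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# Full SVD: rows of Vt beyond the numerical rank span N(A).
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))     # numerical rank
N = Vt[r:].T                   # columns form a basis of N(A)
```

Here \(n - r = 3 - 2 = 1\), so the null space is a line in \(\Re^3\); for instance \(\boldsymbol{x} = (1, 1, -1)'\) satisfies \(\boldsymbol{A}\boldsymbol{x} = \mathbf{0}\).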
Left Null Space of a Matrix \(\boldsymbol{A}\)
Let \(\boldsymbol{A}\) be an \(m \times n\) matrix in \(\Re^{m \times n}\). The left null space of \(\boldsymbol{A}\) is defined as the set
\[ \mathcal{N}\left(\boldsymbol{A}^{\prime}\right)=\left\{\boldsymbol{x} \in \Re^m: \boldsymbol{A}^{\prime} \boldsymbol{x}=\mathbf{0}\right\} \]
Any member of the set \(\mathcal{N}\left(\boldsymbol{A}^{\prime}\right)\) is an \(m \times 1\) vector, so \(\mathcal{N}\left(\boldsymbol{A}^{\prime}\right)\) is a subset of \(\Re^m\).
The four fundamental subspaces of a matrix \(A\) are the column space \(\mathcal{C}(\boldsymbol{A})\), the row space \(\mathcal{C}(\boldsymbol{A}^{\prime})\), the null space \(\mathcal{N}(\boldsymbol{A})\), and the left null space \(\mathcal{N}(\boldsymbol{A}^{\prime})\).
Linear Independence
Let \(\mathcal{A}=\left\{\boldsymbol{a}_1, \boldsymbol{a}_2, \ldots, \boldsymbol{a}_n\right\}\) be a finite set of vectors with each \(\boldsymbol{a}_i \in\) \(\Re^m\). The set \(\mathcal{A}\) is said to be linearly independent if the following condition holds: whenever \(x_i\) ’s are real numbers such that
\[ x_1 \boldsymbol{a}_1+x_2 \boldsymbol{a}_2+\cdots+x_n \boldsymbol{a}_n=\mathbf{0} \]
we have \(x_1=x_2=\cdots=x_n=0\). On the other hand, whenever there exist real numbers \(x_1, x_2, \ldots, x_n\), not all zero, such that \(x_1 \boldsymbol{a}_1+x_2 \boldsymbol{a}_2+\cdots+x_n \boldsymbol{a}_n=\mathbf{0}\), we say that \(\mathcal{A}\) is linearly dependent.
\[ \mathcal{A}_1=\left\{\left[\begin{array}{l} 1 \\ 0 \end{array}\right],\left[\begin{array}{l} 0 \\ 1 \end{array}\right]\right\} \quad \text { and } \quad \mathcal{A}_2=\left\{\left[\begin{array}{l} 1 \\ 0 \end{array}\right],\left[\begin{array}{l} 0 \\ 1 \end{array}\right],\left[\begin{array}{l} 2 \\ 3 \end{array}\right]\right\} . \]
Here \(\mathcal{A}_1\) is linearly independent, while \(\mathcal{A}_2\) is linearly dependent, since \(2\boldsymbol{a}_1 + 3\boldsymbol{a}_2 - \boldsymbol{a}_3 = \mathbf{0}\).
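In practice, linear independence of a set of column vectors can be checked via the rank: the set is independent iff the matrix formed by the vectors has rank equal to the number of columns. For the two example sets:

```python
import numpy as np

A1 = np.array([[1, 0],
               [0, 1]], dtype=float)
A2 = np.array([[1, 0, 2],
               [0, 1, 3]], dtype=float)    # columns: a1, a2, a3

# Independent iff rank equals the number of columns.
assert np.linalg.matrix_rank(A1) == A1.shape[1]   # A1: independent
assert np.linalg.matrix_rank(A2) < A2.shape[1]    # A2: dependent

# An explicit dependence relation: 2*a1 + 3*a2 - a3 = 0.
x = np.array([2.0, 3.0, -1.0])
assert np.allclose(A2 @ x, 0)
```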