While the term "orthonormal basis of a matrix" isn't a standard concept in linear algebra, an orthonormal basis is a fundamental structure associated with a vector space. A matrix, however, defines or is associated with several vector spaces (such as its column space and row space), for which an orthonormal basis can be found.
An orthonormal basis is a special set of vectors that are both mutually perpendicular (orthogonal) and individually of unit length (normalized). This set provides a highly convenient and stable coordinate system for a given vector space.
What is an Orthonormal Basis?
An orthonormal basis for a vector space $V$ is a set of vectors, $B = \{ \mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n \}$, that satisfies two key conditions:
- Orthogonality: Every pair of distinct vectors in the set is perpendicular to each other. In terms of an inner product $\langle \cdot, \cdot \rangle$, this means $\langle \mathbf{v}_i, \mathbf{v}_j \rangle = 0$ whenever $i \neq j$.
- Normality (Unit Length): Each vector in the set has a length (or norm) of 1. This means $\langle \mathbf{v}_i, \mathbf{v}_i \rangle = 1$ for all $i$.
Combining these two conditions, for an orthonormal basis, the inner product of any two vectors $\mathbf{v}_i$ and $\mathbf{v}_j$ in the set is given by the Kronecker delta, $\delta_{ij}$:
$\langle \mathbf{v}_i, \mathbf{v}_j \rangle = \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}$
This property makes calculations and transformations much simpler within the vector space.
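To make the Kronecker delta condition concrete, here is a minimal NumPy sketch (the two unit vectors are my own illustrative choice, not taken from the text); each inner product prints 1 when $i = j$ and 0 otherwise:

```python
import numpy as np

# Two illustrative orthonormal vectors in R^3 (unit length, mutually perpendicular).
v1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
v2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)

basis = [v1, v2]
for i, vi in enumerate(basis):
    for j, vj in enumerate(basis):
        # <v_i, v_j> should equal the Kronecker delta: 1 if i == j, else 0.
        print(i, j, round(np.dot(vi, vj), 12))
```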
Why Orthonormal Bases are Important
- Simplifies Calculations: Projections, coordinate representations, and inverse operations become straightforward.
- Numerical Stability: They are less prone to computational errors in numerical algorithms due to their well-behaved properties.
- Intuitive Geometry: They represent a standard, "square" grid for the vector space.
How a Matrix Relates to an Orthonormal Basis
While a matrix itself doesn't have an orthonormal basis, its associated vector spaces do. Here are the primary ways matrices interact with orthonormal bases:
1. Orthonormal Basis for the Column Space or Row Space of a Matrix
For any matrix $A$, its column space ($Col(A)$) and row space ($Row(A)$) are vector spaces. We can find an orthonormal basis for these spaces.
- Method: The most common method to construct an orthonormal basis from an existing basis (e.g., the columns of a matrix forming a basis for its column space) is the Gram-Schmidt Process (a short code sketch follows these steps):
- Start with a basis $\{ \mathbf{u}_1, \ldots, \mathbf{u}_k \}$ for the subspace.
- Orthogonalize the vectors iteratively.
- Normalize each orthogonal vector to obtain unit vectors.
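As a rough illustration of these steps, the sketch below implements classical Gram-Schmidt on the columns of a matrix with NumPy. The function name `gram_schmidt` and the tolerance for discarding linearly dependent columns are my own choices, not anything prescribed above:

```python
import numpy as np

def gram_schmidt(A, tol=1e-12):
    """Return a matrix whose columns are an orthonormal basis for Col(A)."""
    basis = []
    for a in A.T:                      # iterate over the columns of A
        v = a.astype(float)
        for e in basis:                # subtract the projection onto each earlier basis vector
            v = v - np.dot(a, e) * e
        norm = np.linalg.norm(v)
        if norm > tol:                 # skip columns that are (numerically) linearly dependent
            basis.append(v / norm)     # normalize to unit length
    return np.column_stack(basis)
```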
Example: Finding an Orthonormal Basis for a Column Space
Consider a matrix $A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}$. Its columns are $\mathbf{a}_1 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$ and $\mathbf{a}_2 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$. These form a basis for $Col(A)$. We can apply Gram-Schmidt:
- Let $\mathbf{v}_1 = \mathbf{a}_1 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$.
- Normalize $\mathbf{v}_1$: $\mathbf{e}_1 = \frac{\mathbf{v}_1}{||\mathbf{v}_1||} = \frac{1}{\sqrt{1^2+1^2+0^2}} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$.
- Find $\mathbf{v}_2$ by orthogonalizing $\mathbf{a}_2$ with respect to $\mathbf{e}_1$:
$\mathbf{v}_2 = \mathbf{a}_2 - \langle \mathbf{a}_2, \mathbf{e}_1 \rangle \mathbf{e}_1 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} - \left( \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \cdot \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \right) \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$
$\mathbf{v}_2 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} - \left( \frac{1}{\sqrt{2}} \right) \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} - \frac{1}{2} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1/2 \\ -1/2 \\ 1 \end{pmatrix}$
- Normalize $\mathbf{v}_2$: $\mathbf{e}_2 = \frac{\mathbf{v}_2}{||\mathbf{v}_2||} = \frac{1}{\sqrt{(1/2)^2+(-1/2)^2+1^2}} \begin{pmatrix} 1/2 \\ -1/2 \\ 1 \end{pmatrix} = \frac{1}{\sqrt{3/2}} \begin{pmatrix} 1/2 \\ -1/2 \\ 1 \end{pmatrix} = \frac{\sqrt{2}}{\sqrt{3}} \begin{pmatrix} 1/2 \\ -1/2 \\ 1 \end{pmatrix} = \frac{1}{\sqrt{6}} \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}$.
Thus, $\{\mathbf{e}_1, \mathbf{e}_2\}$ is an orthonormal basis for $Col(A)$.
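As a quick numerical check of this worked example (a verification sketch only, not part of the derivation), the dot products below should print 1, 1, and 0:

```python
import numpy as np

e1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
e2 = np.array([1.0, -1.0, 2.0]) / np.sqrt(6)   # same vector as (sqrt(2)/sqrt(3)) * (1/2, -1/2, 1)

print(np.dot(e1, e1))   # 1.0 -> e1 has unit length
print(np.dot(e2, e2))   # 1.0 -> e2 has unit length
print(np.dot(e1, e2))   # 0.0 -> e1 and e2 are orthogonal
```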
2. Orthogonal Matrices: When Columns/Rows Are an Orthonormal Basis
A special type of square matrix called an orthogonal matrix directly embodies the concept of an orthonormal basis.
- A square matrix $Q$ is orthogonal if its columns form an orthonormal basis for $\mathbb{R}^n$.
- Equivalently, its rows also form an orthonormal basis for $\mathbb{R}^n$.
- A key property is $Q^T Q = I$ (where $I$ is the identity matrix), which implies $Q^T = Q^{-1}$.
Properties of Orthogonal Matrices
Property | Description |
---|---|
Preserves Length and Angle | Linear transformations represented by orthogonal matrices preserve the Euclidean length of vectors and the angle between them. This means they perform rotations or reflections. |
Determinant | The determinant of an orthogonal matrix is either +1 (for rotations) or -1 (for reflections). |
Inverse | The inverse of an orthogonal matrix is simply its transpose ($Q^{-1} = Q^T$), making inversions very easy to compute. |
Eigenvalues | The eigenvalues of an orthogonal matrix all have an absolute value of 1. |
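To illustrate the table numerically, the sketch below builds a Householder reflection (my own example matrix, not one from the text) and checks that $Q^T Q = I$, that the determinant is $-1$ (a reflection), and that every eigenvalue has absolute value 1:

```python
import numpy as np

# Householder reflection across the plane orthogonal to u.
u = np.array([1.0, 2.0, 2.0])
u = u / np.linalg.norm(u)
Q = np.eye(3) - 2.0 * np.outer(u, u)

print(np.allclose(Q.T @ Q, np.eye(3)))   # True: the columns of Q are orthonormal
print(np.linalg.det(Q))                  # -1.0: a reflection
print(np.abs(np.linalg.eigvals(Q)))      # all 1.0: eigenvalues lie on the unit circle
```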
Example of an Orthogonal Matrix
A 2D rotation matrix is a classic example:
$Q = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$
For instance, if $\theta = 0$, $Q = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$. Its columns $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$ are clearly an orthonormal basis for $\mathbb{R}^2$.
If $\theta = \pi/2$, $Q = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$. Its columns $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$ and $\begin{pmatrix} -1 \\ 0 \end{pmatrix}$ are also an orthonormal basis for $\mathbb{R}^2$.
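A minimal NumPy check of the $\theta = \pi/2$ case (illustrative only): the columns should come out orthonormal, and the transpose should act as the inverse.

```python
import numpy as np

theta = np.pi / 2
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.round(Q, 10))                      # [[0, -1], [1, 0]]
print(np.allclose(Q.T @ Q, np.eye(2)))      # True: Q^T Q = I, so the columns are orthonormal
print(np.allclose(Q.T, np.linalg.inv(Q)))   # True: Q^T = Q^{-1}
```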
3. Orthonormal Bases in Matrix Decompositions
Orthonormal bases play a crucial role in powerful matrix decompositions that reveal a matrix's underlying structure; a short code sketch of these factorizations follows the list below.
- QR Decomposition: Any matrix $A$ can be factored as $A = QR$, where $Q$ has orthonormal columns (a full orthogonal matrix in the square case) and $R$ is upper triangular. Conceptually, $Q$ is what the Gram-Schmidt process produces from the columns of $A$, although numerical libraries usually compute it with Householder reflections for stability.
- Singular Value Decomposition (SVD): For any matrix $A$ (not necessarily square), $A = U \Sigma V^T$. Here, $U$ and $V$ are orthogonal matrices. The leading left singular vectors (columns of $U$) form an orthonormal basis for the column space of $A$, and the leading right singular vectors (columns of $V$) form one for its row space. SVD is widely used in data compression, noise reduction, and recommendation systems.
- Spectral Decomposition (for Symmetric Matrices): If $A$ is a symmetric matrix, it can be diagonalized as $A = Q D Q^T$, where $Q$ is an orthogonal matrix whose columns are the orthonormal eigenvectors of $A$, and $D$ is a diagonal matrix containing the eigenvalues. This is fundamental in principal component analysis (PCA).
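The sketch below uses NumPy's built-in routines (an illustrative choice on my part; the small test matrix is arbitrary) to compute all three factorizations and confirm that the orthogonal factors have orthonormal columns:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# QR decomposition: the columns of Q form an orthonormal basis for Col(A).
Q, R = np.linalg.qr(A)
print(np.allclose(Q.T @ Q, np.eye(2)))      # True

# SVD: the columns of U and the rows of Vt are orthonormal singular vectors.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
print(np.allclose(U.T @ U, np.eye(2)))      # True
print(np.allclose(Vt @ Vt.T, np.eye(2)))    # True

# Spectral decomposition of the symmetric matrix A^T A = Q D Q^T.
eigvals, eigvecs = np.linalg.eigh(A.T @ A)
print(np.allclose(eigvecs @ np.diag(eigvals) @ eigvecs.T, A.T @ A))   # True
```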
In essence, while a matrix doesn't inherently have an orthonormal basis, it frequently interacts with, produces, or is analyzed through vector spaces that do. Understanding orthonormal bases is critical for advanced matrix operations and their applications in various scientific and engineering fields.