Today we learnt: every linear operator $T$ on an inner product space $V$ has a "mate" $T^*$ (the adjoint of $T$) such that $T$ and $T^*$ are related by
$$\langle T(x), y \rangle = \langle x, T^*(y) \rangle \quad \text{for all } x, y \in V.$$
How is $T^*$ defined? In today's lecture we studied the definition of $T^*$, which is based on the following important result: if $g: V \to F$ is a linear functional, then there exists a unique $z \in V$ such that $g(x) = \langle x, z \rangle$ for all $x \in V$.
- $[T^*]_\beta = ([T]_\beta)^*$, where $\beta$ is an orthonormal basis for $V$.
This result gives the relation between the matrix representations of $T$ and $T^*$ w.r.t. orthonormal bases.
- If $T = T^*$ (and $V$ is a real inner product space), then $[T]_\beta$ is symmetric.
Below we shall see that self-adjoint operators (i.e. those satisfying $T = T^*$) are particularly nice.
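As a quick numerical sanity check, the defining relation of the adjoint reduces to the transpose on a real inner product space. This is only a sketch: the matrix $A$ below is a hypothetical stand-in for $[T]_\beta$ on $\mathbb{R}^2$ with the standard inner product.

```python
import numpy as np

# Hypothetical matrix standing in for [T]_beta w.r.t. an orthonormal basis.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

x = np.array([1.0, -1.0])
y = np.array([2.0, 5.0])

# Defining property of the adjoint: <T(x), y> = <x, T*(y)>,
# where on a real inner product space [T*]_beta = ([T]_beta)^t.
lhs = np.dot(A @ x, y)
rhs = np.dot(x, A.T @ y)
assert np.isclose(lhs, rhs)  # both sides equal -7.0 here
```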
Recall that for linear operators on vector spaces, we study the concepts of similarity and diagonalization: let us review a few important points below. By definition, "$A$ is similar to $B$" means $B = Q^{-1} A Q$ for some invertible matrix $Q$.
- If $T$ is a linear operator on a vector space $V$ (not necessarily an inner product space), then $[T]_\beta$ is similar to $[T]_\gamma$ for any ordered bases $\beta$ and $\gamma$.
- If $A$ is similar to $B$, then $A$ and $B$ are matrix representations of the same linear operator.
Next suppose $A$ is similar to a diagonal matrix $D$ (i.e. $Q^{-1} A Q = D$). Write $Q = [\, v_1 \ v_2 \ \cdots \ v_n \,]$ and $D = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$; then from $AQ = QD$, we get $[\, A v_1 \ \cdots \ A v_n \,] = [\, \lambda_1 v_1 \ \cdots \ \lambda_n v_n \,]$, i.e.
$$A v_i = \lambda_i v_i \quad \text{for } i = 1, 2, \dots, n.$$
That means $\lambda_i$ is an eigenvalue of $A$ and $v_i$ is a corresponding eigenvector. Also, $Q$ is invertible if and only if $v_1, \dots, v_n$ form a basis for $F^n$. The converse is also true: if the eigenvectors of $A$ contain a basis for $F^n$, then $A$ is diagonalizable. In practice, when we diagonalize a matrix $A$ (assuming $A$ is diagonalizable), how do we find $Q$ and $D$? We calculate the eigenvalues to get $D$ and then calculate the eigenvectors to get $Q$.
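The recipe above (eigenvalues give $D$, eigenvectors give $Q$) can be sketched numerically. The matrix below is a hypothetical example chosen to be diagonalizable:

```python
import numpy as np

# Hypothetical diagonalizable matrix (eigenvalues 5 and 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues (entries of D) and the
# eigenvectors (columns of Q).
eigvals, Q = np.linalg.eig(A)
D = np.diag(eigvals)

# Q^{-1} A Q = D, equivalently A v_i = lambda_i v_i column by column.
assert np.allclose(np.linalg.inv(Q) @ A @ Q, D)
for i in range(2):
    assert np.allclose(A @ Q[:, i], eigvals[i] * Q[:, i])
```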
Furthermore, we can present the above result in the setting of linear transformations: suppose $Q^{-1} A Q = D$ where $Q = [\, v_1 \ \cdots \ v_n \,]$ and $D = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$. Then the linear operator $L_A: F^n \to F^n$, $L_A(x) = Ax$, has the standard matrix representation $[L_A]_\sigma = A$ (where $\sigma$ is the standard ordered basis). If we set $\beta = \{v_1, \dots, v_n\}$ (the ordered basis consisting of eigenvectors), then $[L_A]_\beta = D$ is diagonal. Using this viewpoint, we have another description (or criterion) for diagonalizable matrices:
- There exists a basis $\beta$ for $F^n$ such that $[L_A]_\beta$ is diagonal.
- There exists a basis $\{v_1, \dots, v_n\}$ for $F^n$ such that $L_A(v_i) = \lambda_i v_i$ for each $i$.
- We can find a basis consisting of eigenvectors of $A$ for $F^n$.
Now we turn back to inner product spaces. If $V$ is an inner product space, then we can consider a more special kind of basis: orthonormal bases (which are much more convenient, at least from the angle of computation). Hence we may consider the following question in the above diagonalization problem:
Can we find an orthonormal basis $\beta$ such that $[T]_\beta$ is diagonal?
In terms of matrices, this is equivalent to finding a set of orthonormal eigenvectors $v_1, \dots, v_n$ such that $Q^{-1} A Q = D$, where $Q = [\, v_1 \ \cdots \ v_n \,]$.
Here we make a nice observation: if $Q = [\, v_1 \ v_2 \ \cdots \ v_n \,]$ where $\langle v_i, v_j \rangle = 0$ for $i \neq j$ and $\| v_i \| = 1$ for each $i$, then
$$Q^t Q = I.$$
Such a matrix is called an orthogonal matrix (i.e. $Q$ is orthogonal if $Q^t Q = I$; note that this gives $Q^{-1} = Q^t$).
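A minimal sketch of this observation, using a hypothetical pair of orthonormal vectors in $\mathbb{R}^2$ (the columns of a rotation by 45 degrees):

```python
import numpy as np

# Two orthonormal vectors in R^2 (hypothetical example).
v1 = np.array([1.0, 1.0]) / np.sqrt(2)
v2 = np.array([-1.0, 1.0]) / np.sqrt(2)
Q = np.column_stack([v1, v2])

# Orthonormal columns give Q^t Q = I, hence Q^{-1} = Q^t.
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.allclose(np.linalg.inv(Q), Q.T)
```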
Hence for our problem the condition $Q^{-1} A Q = D$ can be rephrased as $Q^t A Q = D$. That's why we introduce the concept of orthogonally diagonalizable matrices.
Now we can state the following key result for self-adjoint linear operators (or, in the matrix setting, symmetric matrices):
Every (real) symmetric matrix $A$ has a set of orthonormal eigenvectors which form a basis for $\mathbb{R}^n$.
In the setting of linear operators: every self-adjoint operator on an inner product space $V$ has a set of orthonormal eigenvectors which form a basis for $V$.
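This key result can be checked numerically for a hypothetical symmetric matrix; NumPy's `np.linalg.eigh` is designed for symmetric matrices and returns an orthogonal $Q$ whose columns are orthonormal eigenvectors:

```python
import numpy as np

# Hypothetical real symmetric matrix (eigenvalues 1 and 3).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns real eigenvalues (ascending) and orthonormal eigenvectors.
eigvals, Q = np.linalg.eigh(A)

# The columns of Q are orthonormal ...
assert np.allclose(Q.T @ Q, np.eye(2))
# ... and Q^t A Q is the diagonal matrix of eigenvalues.
assert np.allclose(Q.T @ A @ Q, np.diag(eigvals))
```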
See also Lect28-29.pdf in the folder slides.