Linear Algebra II

November 20, 2014

Lecture 28-29

Filed under: 2014 Fall — hkuyklau @ 8:59 PM

Today we learnt: Every linear operator {T} on an inner product space has a “mate” {T^*} such that {T} and {T^*} are related by

\displaystyle  \langle T(v), w\rangle = \langle v, T^*(w)\rangle   {\forall} {v,w\in V}.

How is {T^*} defined? In today’s lecture we studied the definition of {T^*}, which is based on the following important result: if {f:V\rightarrow {\mathbb R}} is a linear transformation, then there exists a unique {z\in V} such that {f(v)=\langle v,z\rangle} for all {v\in V}.
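For concreteness, here is a minimal NumPy sketch of this representation result, assuming {V={\mathbb R}^3} with the dot product; the functional {f} below is a made-up example, and the representing vector {z} is obtained by evaluating {f} on the standard basis.

    import numpy as np

    # Work in V = R^3 with the dot product as the inner product.
    # A sample linear functional f : R^3 -> R (chosen just for illustration).
    def f(v):
        return 2.0 * v[0] - v[1] + 5.0 * v[2]

    # The representing vector z has entries z_i = f(e_i), so that f(v) = <v, z>.
    z = np.array([f(e) for e in np.eye(3)])

    v = np.array([1.0, -4.0, 2.0])
    print(np.isclose(f(v), np.dot(v, z)))   # True: f(v) = <v, z>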

  • {M_{\mathcal{B}}(T^*) = M_{\mathcal{B}}(T)^T} where {\mathcal{B}} is an orthonormal basis for {V}.

    This result gives the relation between the matrix representations of {T} and {T^*} w.r.t. an orthonormal basis (a numerical check appears after this list).

  • If {T=T^*}, then {M_{\mathcal{B}}(T)} is symmetric.

    Below we shall see that self-adjoint operators (i.e. those satisfying {T=T^*}) are particularly nice.
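As a quick numerical check of the first point above (using NumPy, with a random test matrix {A} and random vectors): the standard basis of {{\mathbb R}^3} is orthonormal for the dot product, so the matrix of {T_A^*} is simply {A^T}, and indeed {\langle A\underline{v}, \underline{w}\rangle = \langle \underline{v}, A^T\underline{w}\rangle}.

    import numpy as np

    # With the standard (orthonormal) basis of R^3 and the dot product,
    # M(T_A) = A and M(T_A^*) = A^T, so <T(v), w> = <v, T^*(w)> becomes
    # <A v, w> = <v, A^T w>.  A, v, w below are random test data.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    v, w = rng.standard_normal(3), rng.standard_normal(3)

    print(np.isclose(np.dot(A @ v, w), np.dot(v, A.T @ w)))   # True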

Recall that for linear operators on vector spaces, we studied the concepts of similarity and diagonalization; let us review a few important points below. By definition, “{A} is similar to {B}” means {P^{-1}AP=B} for some invertible matrix {P}.

  1. If {T:V\rightarrow V} is a linear operator on a vector space (not necessarily an inner product space), then {M_E(T)} is similar to {M_F(T)} for any ordered bases {E} and {F}; see the sketch after this list.
  2. If {A} is similar to {B}, then {A} and {B} are matrix representations of the same linear operator (with respect to suitable ordered bases).
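To make point 1 concrete, here is a small NumPy sketch; the matrix {A} and the second basis {F} below are made-up examples. With the standard basis {E} we have {M_E(T_A)=A}, and if the columns of {Q} are the vectors of {F}, then {M_F(T_A)=Q^{-1}AQ}, so the two representations are similar (and, for instance, share the same eigenvalues).

    import numpy as np

    # T_A(v) = A v on R^2.  With the standard basis E, M_E(T_A) = A.
    # With another ordered basis F (the columns of Q), M_F(T_A) = Q^{-1} A Q,
    # so the two matrix representations are similar via P = Q.
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    Q = np.array([[1.0, 1.0],     # columns of Q form the basis F (a made-up choice)
                  [1.0, 2.0]])
    M_F = np.linalg.inv(Q) @ A @ Q

    # Similar matrices share eigenvalues (a basis-independent quantity).
    print(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(M_F)))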

Next suppose {A} is similar to a diagonal matrix {D} (i.e. {P^{-1}A P= D}). Write {P=\begin{pmatrix} \underline{x}_1 & \underline{x}_2 & \cdots & \underline{x}_n\end{pmatrix}} and {D={\rm diag}(\lambda_1, \lambda_2,\cdots, \lambda_n)}, then from {P^{-1} AP=D}, we get {AP=PD}, i.e.

\displaystyle  \begin{array}{rcl}  && A \begin{pmatrix} \underline{x}_1 & \underline{x}_2 & \cdots & \underline{x}_n\end{pmatrix} = \begin{pmatrix} \underline{x}_1 & \underline{x}_2 & \cdots & \underline{x}_n\end{pmatrix}\begin{pmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n\end{pmatrix}\vspace{1mm}\\ \Rightarrow && \begin{pmatrix} A\underline{x}_1 & A\underline{x}_2 & \cdots & A\underline{x}_n\end{pmatrix} = \begin{pmatrix} \lambda_1\underline{x}_1 & \lambda_2\underline{x}_2 & \cdots & \lambda_n\underline{x}_n\end{pmatrix}\vspace{3mm}\\ \Rightarrow && A\underline{x}_i = \lambda_i \underline{x}_i, \quad {i=1,\cdots, n}. \end{array}

That means {\lambda_i} is an eigenvalue of {A} and {\underline{x}_i} is a corresponding eigenvector. Also, {P} is invertible if and only if {\underline{x}_1,\cdots, \underline{x}_n} form a basis for {{\mathbb R}^n}. The converse is also true: if {\underline{x}_1,\cdots, \underline{x}_n} are eigenvectors of {A} forming a basis for {{\mathbb R}^n}, with {A\underline{x}_i=\lambda_i\underline{x}_i}, then {P=\begin{pmatrix} \underline{x}_1 & \underline{x}_2 & \cdots & \underline{x}_n\end{pmatrix}} is invertible and {P^{-1}AP={\rm diag}(\lambda_1,\cdots,\lambda_n)}. In practice, when we diagonalize a matrix {A} (assuming {A} is diagonalizable), we calculate the eigenvalues to get {D} and then calculate the corresponding eigenvectors to get {P}, as in the sketch below.
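In NumPy this recipe looks as follows (the matrix {A} below is a made-up diagonalizable example): np.linalg.eig returns the eigenvalues, which give {D}, and the eigenvectors as columns, which give {P}.

    import numpy as np

    # Diagonalize A: eigenvalues give D, eigenvectors (as columns) give P.
    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors x_1, ..., x_n
    D = np.diag(eigvals)

    print(np.allclose(np.linalg.inv(P) @ A @ P, D))   # True: P^{-1} A P = D
    print(np.allclose(A @ P, P @ D))                  # equivalently, A x_i = lambda_i x_i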

Furthermore, we can present the above result in the setting of linear transformations: Suppose {P^{-1}A P=D} where {P=\begin{pmatrix} \underline{x}_1 & \underline{x}_2 & \cdots & \underline{x}_n\end{pmatrix}} and {D={\rm diag}(\lambda_1, \lambda_2,\cdots, \lambda_n)}. Then the linear operator {T_A:{\mathbb R}^n\rightarrow {\mathbb R}^n}, {T_A(\underline{v})= A\underline{v}}, has the standard matrix representation {M_{St}(T_A)= A}. If we set {\mathcal{E}=[\underline{x}_1,\cdots, \underline{x}_n]} (the ordered basis consisting of eigenvectors), then {M_{\mathcal{E}}(T_A)= D} is diagonal. Using this viewpoint, we have another description (or criterion) for diagonalizable matrices:

     The matrix {A} is similar to a diagonal matrix
{\Leftrightarrow} There exists a basis {\mathcal{E}} for {{\mathbb R}^n} such that {M_{\mathcal{E}}(T_A)} is diagonal
{\Leftrightarrow} There exists a basis {\mathcal{E}=[\underline{x}_1,\cdots, \underline{x}_n]} for {{\mathbb R}^n} such that {T_A(\underline{x}_i)=\lambda_i \underline{x}_i}, {i=1,\cdots, n}
{\Leftrightarrow} We can find a basis {\mathcal{E}} for {{\mathbb R}^n} consisting of eigenvectors of {A}

Now we turn back to inner product spaces. If {V} is an inner product space, then we can consider a more special kind of basis, namely an orthonormal basis (which is much more convenient, at least from the computational point of view). Hence we may consider the following question in the above diagonalization problem:

{(**)}   Can we find an orthonormal basis {\mathcal{B}} such that {M_{\mathcal{B}}(T_A)} is diagonal?

In terms of matrices, this is equivalent to finding a set of orthonormal eigenvectors {\underline{x}_1,\cdots, \underline{x}_n} such that the matrix {P=\begin{pmatrix} \underline{x}_1 & \underline{x}_2 & \cdots & \underline{x}_n\end{pmatrix}} satisfies {P^{-1}A P= D}.

Here we make a nice observation: if {P=\begin{pmatrix} \underline{x}_1 & \underline{x}_2 & \cdots & \underline{x}_n\end{pmatrix}} where {\langle \underline{x}_i, \underline{x}_j\rangle =1} for {i=j} and {0} for {i\neq j}, then

\displaystyle  P^TP=PP^T=I.

Such a matrix is called an orthogonal matrix (i.e. a square matrix {A} is orthogonal if {A^TA=I}; note that for square matrices {A^TA=I} {\Rightarrow} {AA^T=I}).
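A small NumPy illustration (the matrix {P} below comes from a QR factorisation of random data, just a convenient way to produce orthonormal columns): such a {P} indeed satisfies {P^TP=PP^T=I}, i.e. {P^{-1}=P^T}.

    import numpy as np

    # A square matrix whose columns are orthonormal is orthogonal: P^T P = P P^T = I.
    rng = np.random.default_rng(1)
    P, _ = np.linalg.qr(rng.standard_normal((3, 3)))

    print(np.allclose(P.T @ P, np.eye(3)))   # True
    print(np.allclose(P @ P.T, np.eye(3)))   # True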

Hence, for our problem {(**)}, the condition {P^{-1}AP=D} can be rephrased as {P^T AP=D}. That is why we introduce the concept of a matrix being orthogonally diagonalizable.

Now we can state the following key result for self-adjoint linear operators (or in matrix setting, symmetric matrices):

Every {n\times n} symmetric matrix {A} has a set of orthonormal eigenvectors which form a basis for {{\mathbb R}^n}.

In the setting of linear operators: every self-adjoint operator {T:V\rightarrow V} on a (finite-dimensional) inner product space has a set of orthonormal eigenvectors which form a basis for {V}.
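As a numerical illustration (with a made-up symmetric matrix {A}): NumPy’s np.linalg.eigh returns an orthonormal set of eigenvectors as the columns of {P}, so {P} is orthogonal and {P^TAP=D}.

    import numpy as np

    # For a symmetric A, eigh returns orthonormal eigenvectors (columns of P),
    # so P is orthogonal and P^T A P = D: A is orthogonally diagonalizable.
    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    eigvals, P = np.linalg.eigh(A)

    print(np.allclose(P.T @ P, np.eye(3)))              # orthonormal eigenvectors
    print(np.allclose(P.T @ A @ P, np.diag(eigvals)))   # P^T A P = D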

See also Lect28-29.pdf in the folder slides.