Linear Algebra II

September 30, 2014

A brief summary

Filed under: 2014 Fall — Y.K. Lau @ 1:08 PM

We have finished Sections 7.1-7.2. Please note that most of the examples are for your own reading, so we did not spend lecture time discussing them. This post gives a brief summary and some remarks.

Firstly, recall that a linear transformation is a function that satisfies the linearity conditions (T1) and (T2); see Definition 7.1. In words, the linearity conditions say that the map preserves addition and scalar multiplication. Thm 1 gives the basic properties of linear transformations. Thm 3 looks a bit complicated but is indeed important: it tells us how to construct linear transformations. Next, we introduced the concept of the kernel and the image of a linear transformation. They can be viewed as generalizations of the nullspace and the column space of a matrix. In fact, the kernel and image are subspaces and, more importantly, satisfy the dimension theorem; see Thm 4. The proof of Thm 4 is a little technical but interesting, and the method of proof can be applied to get another result, namely Thm 5. The details are given below, after a quick computational illustration of kernel and image.
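To see the dimension theorem in action, here is a small Python/SymPy sketch. (This is my own illustration, not from the textbook; the matrix {A} is an arbitrary choice.) It checks that {\dim \ker T + \dim T(V) = \dim V} for the matrix transformation {T(x)=Ax}.

    # A quick check of the dimension theorem (Thm 4) using SymPy.
    # The matrix A is an illustrative choice, not from the textbook.
    from sympy import Matrix

    # T(x) = A x defines a linear transformation T : R^4 -> R^3.
    A = Matrix([[1, 2, 0, 1],
                [0, 1, 1, 1],
                [1, 3, 1, 2]])

    kernel = A.nullspace()     # a basis of ker T (the nullspace of A)
    image = A.columnspace()    # a basis of T(V) (the column space of A)

    print(len(kernel), len(image))   # prints: 2 2
    # dim ker T + dim T(V) = 2 + 2 = 4 = dim R^4, as Thm 4 asserts.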

Theorem 5. Let {T:V\rightarrow W} be a linear transformation and let {\{e_1,\cdots, e_r, e_{r+1}, \cdots, e_n\}} be a basis for {V} such that {\{e_{r+1},\cdots, e_n\}} is a basis for {\ker T}. Then {\{T(e_1),\cdots, T(e_r)\}} is a basis for {T(V)}.

Proof:

  • {T(e_1),\cdots, T(e_r)} are linearly independent.

    Consider {c_1T(e_1)+\cdots+c_r T(e_r)=0}. By linearity, {T(c_1e_1+\cdots+c_r e_r)=0}.

    Thus {c_1e_1+\cdots+c_r e_r\in \ker T}.

    As {\{e_{r+1},\cdots, e_n\}} is a basis for {\ker T},

    \displaystyle  c_1e_1+\cdots+c_r e_r = d_{r+1}e_{r+1}+\cdots + d_ne_n

    for some {d_{r+1},\cdots, d_n\in {\mathbb R}}.

    Rearranging, {c_1e_1+\cdots+c_r e_r +(- d_{r+1})e_{r+1}+\cdots + (-d_n)e_n=0}.

    As {\{e_1,\cdots, e_n\}} is a basis (so is linearly independent), {c_1=\cdots=c_r=-d_{r+1}=\cdots = -d_n=0}.

    That is, {c_1=\cdots = c_r=0} are the only coefficients such that {c_1T(e_1)+\cdots+c_r T(e_r)=0}, so {T(e_1),\cdots, T(e_r)} are linearly independent.

  • {T(e_1),\cdots, T(e_r)} span {T(V)}.

    Let {w\in T(V)}. Then {w=T(v)} for some {v\in V}.

    As {\{e_1,\cdots, e_n\}} is a basis for {V}, {v= a_1e_1+\cdots +a_re_r+a_{r+1}e_{r+1}+\cdots + a_ne_n} for some {a_1,\cdots, a_n\in {\mathbb R}}.

    Thus {T(v)= a_1T(e_1)+\cdots +a_rT(e_r)+a_{r+1}T(e_{r+1})+\cdots + a_nT(e_n).}

    As {T(e_{r+1})=\cdots = T(e_n)=0}, {w=a_1T(e_1)+\cdots +a_rT(e_r)}.

    This holds for all {w\in T(V)}, thus {T(V)\subset {\rm Span}(T(e_1),\cdots, T(e_r))}. Conversely, each {T(e_i)} lies in {T(V)}, so {{\rm Span}(T(e_1),\cdots, T(e_r))\subset T(V)}. Hence {T(e_1),\cdots, T(e_r)} span {T(V)}, and the proof is complete.
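Theorem 5 can also be checked numerically. Continuing the sketch above (again with my own illustrative choices, including the vectors {e_1, e_2} extending the kernel basis), we extend a basis of {\ker T} to a basis of {{\mathbb R}^4} and verify that the images of the added vectors form a basis of {T(V)}.

    # A numerical check of Theorem 5 with the same matrix A as above.
    # The choice of e1, e2 extending the kernel basis is my own.
    from sympy import Matrix

    A = Matrix([[1, 2, 0, 1],
                [0, 1, 1, 1],
                [1, 3, 1, 2]])

    k1, k2 = A.nullspace()          # play the roles of e_3, e_4 in ker T
    e1 = Matrix([1, 0, 0, 0])
    e2 = Matrix([0, 1, 0, 0])

    # {e1, e2, k1, k2} is a basis of R^4: the matrix with these
    # columns is invertible.
    P = e1.row_join(e2).row_join(k1).row_join(k2)
    assert P.det() != 0

    # Theorem 5 predicts that {T(e1), T(e2)} is a basis of T(V).
    B = (A * e1).row_join(A * e2)
    assert B.rank() == 2 == A.rank()  # independent and spanning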

In addition, we proved the following result in today's lecture.

Let {T:V\rightarrow W} be a linear transformation, and let {w\in W}. Suppose {T(x_0)=w}. Then

\displaystyle  \{v\in V: \ T(v)=w\} = \{ x_0+u: \ u \in \ker T\}.

[Remark. We write {T^{-1}\{w\}= \{v\in V: \ T(v)=w\}}, the preimage of {w}.]

Proof: Suppose {T(v)=w} and {T(x_0)=w}. Then {T(v-x_0)=T(v)-T(x_0)=0}, i.e. {v-x_0\in \ker T} so {v=x_0+u} for some {u\in \ker T}.

Conversely, for any {u\in \ker T}, {T(x_0+u)= T(x_0) +T(u)=w+0=w}. This completes the proof.

Remark. This result is the theoretical basis of the method for solving certain differential equations, where the general solution is one particular solution plus the general solution of the homogeneous equation. It is a good example of why we need to learn abstract vector spaces.
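The same pattern can be seen concretely for linear systems: the solution set of {Ax=w} is one particular solution plus the nullspace. Below is a short sketch with the same illustrative {A} as before; the particular solution {x_0} is my own choice.

    # The solution set of A x = w equals x0 + ker A, mirroring
    # the set {x0 + u : u in ker T}.  A and x0 are illustrative choices.
    from sympy import Matrix

    A = Matrix([[1, 2, 0, 1],
                [0, 1, 1, 1],
                [1, 3, 1, 2]])
    x0 = Matrix([1, 1, 0, 0])
    w = A * x0                      # so x0 is one particular solution

    # A few instances of x0 + u with u in ker A; each one solves
    # A x = w, as the result above predicts.
    u1, u2 = A.nullspace()
    for c1, c2 in [(0, 0), (1, 0), (-2, 3)]:
        assert A * (x0 + c1 * u1 + c2 * u2) == w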
