This section is short but its proofs are hard. The central goal is to show that $V$ is a direct sum of finitely many cyclic subspaces. The section first introduces the notion of a $T$-admissible subspace, a condition stronger than invariance, which guarantees that polynomial operations admit corresponding components (projections) within the subspace. Theorem 3 is the Cyclic Decomposition Theorem; compared with the earlier Primary Decomposition Theorem, it shows that $V$ is a direct sum of finitely many cyclic subspaces (relative to a $T$-admissible subspace), where the $T$-annihilators of the generators form a divisibility chain, and the decomposition is unique in the appropriate sense. Earlier in the section the authors call this one of the deepest results in linear algebra, and the proof is indeed intricate. The theorem has a series of important corollaries, for example that every $T$-admissible subspace has a $T$-invariant complement. Theorem 4 is the generalized Cayley-Hamilton theorem: beyond the earlier result that the minimal polynomial divides the characteristic polynomial, the two have the same prime factors, and the characteristic polynomial can be recovered from the minimal polynomial together with the nullities of its prime-power factors. Theorem 5 states that every matrix is similar to one and only one matrix in rational form.

Exercises

1. Let $T$ be the linear operator on $F^2$ which is represented in the standard ordered basis by the matrix $\begin{bmatrix}0&0\\1&0\end{bmatrix}$. Let $\alpha_1=(0,1)$. Show that $F^2\neq Z(\alpha_1;T)$, and that there is no non-zero vector $\alpha_2$ in $F^2$ with $Z(\alpha_2;T)$ disjoint from $Z(\alpha_1;T)$.
Solution: We have
$$T\alpha_1=\begin{bmatrix}0&0\\1&0\end{bmatrix}\begin{bmatrix}0\\1\end{bmatrix}=0$$
thus $p_{\alpha_1}=x$, which means $\dim Z(\alpha_1;T)=1$, so $F^2\neq Z(\alpha_1;T)$.
Suppose there is some $\alpha_2=(a,b)\neq 0$ such that $Z(\alpha_2;T)$ is disjoint from $Z(\alpha_1;T)$. Then $\dim Z(\alpha_2;T)=1$, so $T\alpha_2=c\alpha_2$ for some scalar $c$; since $T^2=0$ we get $c^2\alpha_2=0$, hence $c=0$ and $T\alpha_2=(0,a)=0$. Thus $a=0$ and $\alpha_2=(0,b)\neq 0$, but this means $\alpha_2=b\alpha_1\in Z(\alpha_1;T)$, which contradicts the hypothesis that $Z(\alpha_2;T)$ is disjoint from $Z(\alpha_1;T)$.
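A quick numerical check of this computation, assuming a Python environment with numpy (not part of the original solution):

```python
# Verify Exercise 1 numerically: T annihilates alpha_1, and T^2 = 0,
# so 0 is the only possible characteristic value of T.
import numpy as np

T = np.array([[0, 0],
              [1, 0]])
alpha1 = np.array([0, 1])

print(T @ alpha1)  # [0 0], so the T-annihilator of alpha_1 is x
print(T @ T)       # the zero matrix, so T^2 = 0
```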

2. Let $T$ be a linear operator on the finite-dimensional space $V$, and let $R$ be the range of $T$.
(a) Prove that $R$ has a complementary $T$-invariant subspace if and only if $R$ is independent of the null space $N$ of $T$.
(b) If $R$ and $N$ are independent, prove that $N$ is the unique $T$-invariant subspace complementary to $R$.
Solution:
(a) If $R$ is independent of $N$, then from $\dim R+\dim N=\dim V$ we know that $R\oplus N=V$, and $N$ is obviously $T$-invariant. Conversely, if $R$ has a complementary $T$-invariant subspace $R'$, let $\beta\in R'$; then $T\beta\in R'$, but also $T\beta\in R$, thus $T\beta=0$ and $\beta\in N$, so $R'\subseteq N$. Since $\dim R'=\dim N=\dim V-\dim R$, we know $R'=N$, and so $R\cap N=\{0\}$.
(b) Let $R'$ be any $T$-invariant subspace complementary to $R$; given that $R$ and $N$ are independent, the proof of (a) shows that $R'=N$.

3. Let $T$ be the linear operator on $R^3$ which is represented in the standard ordered basis by the matrix
$$\begin{bmatrix}2&0&0\\1&2&0\\0&0&3\end{bmatrix}.$$
Let $W$ be the null space of $T-2I$. Prove that $W$ has no complementary $T$-invariant subspace.
Solution: Assume there exists a $T$-invariant subspace $W'$ of $R^3$ such that $R^3=W\oplus W'$. Let $\beta=\epsilon_1$; then $(T-2I)\beta=\epsilon_2$, and since $(T-2I)\epsilon_2=0$ we see that $(T-2I)\beta\in W$. On the other hand, since $\beta\in R^3$, we can find $\alpha\in W,\gamma\in W'$ such that $\beta=\alpha+\gamma$, so
$$(T-2I)\beta=(T-2I)\alpha+(T-2I)\gamma=(T-2I)\gamma$$
because $\alpha\in W$ means $(T-2I)\alpha=0$. Since $W'$ is $T$-invariant, $(T-2I)\gamma\in W'$, so $\epsilon_2=(T-2I)\beta\in W\cap W'=\{0\}$, a contradiction.
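The two facts used above are easy to confirm numerically; a minimal sketch assuming numpy:

```python
# Verify Exercise 3: (T - 2I)e_1 = e_2, and the null space W of T - 2I
# is one-dimensional (rank of T - 2I is 2), so W = span{e_2}.
import numpy as np

T = np.array([[2, 0, 0],
              [1, 2, 0],
              [0, 0, 3]])
N = T - 2 * np.eye(3)

print(N @ np.array([1, 0, 0]))   # e_2, so (T - 2I)e_1 lies in W
print(np.linalg.matrix_rank(N))  # 2, hence dim W = 3 - 2 = 1
```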

4. Let $T$ be the linear operator on $F^4$ which is represented in the standard ordered basis by the matrix
$$\begin{bmatrix}c&0&0&0\\1&c&0&0\\0&1&c&0\\0&0&1&c\end{bmatrix}.$$
Let $W$ be the null space of $T-cI$.
(a) Prove that $W$ is the subspace spanned by $\epsilon_4$.
(b) Find the monic generators of the ideals $S(\epsilon_4;W)$, $S(\epsilon_3;W)$, $S(\epsilon_2;W)$, $S(\epsilon_1;W)$.
Solution:
(a) A direct computation shows that the matrix of $T-cI$ in the standard ordered basis is
$$\begin{bmatrix}0&0&0&0\\1&0&0&0\\0&1&0&0\\0&0&1&0\end{bmatrix}$$
and we have $(T-cI)(\sum_{i=1}^4a_i\epsilon_i)=a_1\epsilon_2+a_2\epsilon_3+a_3\epsilon_4$, thus $W$ consists of all vectors of the form $a\epsilon_4$.
(b) As $\epsilon_4$ is already in $W$ and $W$ is $T$-invariant, we have $f(T)\epsilon_4\in W$ for all $f\in F[x]$, thus the monic generator of $S(\epsilon_4;W)$ is $1$.
Writing $f=a_0+a_1(x-c)+a_2(x-c)^2+\cdots$, we have $f(T)\epsilon_3=a_0\epsilon_3+a_1\epsilon_4\in W$ if and only if $a_0=0$, so the monic generator of $S(\epsilon_3;W)$ is $x-c$. By the same logic, the monic generator of $S(\epsilon_2;W)$ is $(x-c)^2$ and the monic generator of $S(\epsilon_1;W)$ is $(x-c)^3$.
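A symbolic check of these generators, assuming sympy is available (the variable names are mine):

```python
# Verify Exercise 4: N = T - cI acts as the shift e_1 -> e_2 -> e_3 -> e_4 -> 0,
# so the least k with N^k e_i in W = span{e_4} is k = 4 - i, and the monic
# generator of S(e_i; W) is (x - c)^(4 - i).
import sympy as sp

c = sp.symbols('c')
A = sp.Matrix([[c, 0, 0, 0],
               [1, c, 0, 0],
               [0, 1, c, 0],
               [0, 0, 1, c]])
N = A - c * sp.eye(4)

e1 = sp.Matrix([1, 0, 0, 0])
for k in range(1, 4):
    print(k, (N**k * e1).T)   # e_2, e_3, e_4: only N^3 sends e_1 into W
```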

5. Let $T$ be a linear operator on the vector space $V$ over the field $F$. If $f$ is a polynomial over $F$ and $\alpha\in V$, let $f\alpha=f(T)\alpha$. If $V_1,\dots,V_k$ are $T$-invariant subspaces and $V=V_1\oplus\cdots\oplus V_k$, show that $fV=fV_1\oplus\cdots\oplus fV_k$.
Solution: For $\alpha\in V$, we have $\alpha=\alpha_1+\cdots+\alpha_k$, in which $\alpha_i\in V_i$ for $i=1,\dots,k$, so
$$f\alpha=f(T)\alpha=f(T)(\alpha_1+\cdots+\alpha_k)=\sum_{i=1}^kf(T)\alpha_i=\sum_{i=1}^kf\alpha_i$$
which shows $fV=fV_1+\cdots+fV_k$. To see the sum is direct, note that each $V_i$ is $T$-invariant, so $fV_i\subseteq V_i$; if $\beta_1+\cdots+\beta_k=0$ with $\beta_i\in fV_i\subseteq V_i$, the independence of $V_1,\dots,V_k$ forces every $\beta_i=0$. Thus $fV_1,\dots,fV_k$ are independent and $fV=fV_1\oplus\cdots\oplus fV_k$.

6. Let $T,V,F$ be as in Exercise 5. Suppose $\alpha$ and $\beta$ are vectors in $V$ which have the same $T$-annihilator. Prove that, for any polynomial $f$, the vectors $f\alpha$ and $f\beta$ have the same $T$-annihilator.
Solution: Let $p$ be the $T$-annihilator of both $\alpha$ and $\beta$, and let $q$ be the $T$-annihilator of $f\alpha$. Then $q(T)f(T)\alpha=0$, so $qf$ is in the ideal generated by $p$, and we can find a polynomial $h$ such that $qf=ph$. This gives $q(T)f(T)\beta=h(T)p(T)\beta=0$, thus the $T$-annihilator of $f\beta$ divides $q$. The same argument with $\alpha$ and $\beta$ exchanged shows that $q$ divides the $T$-annihilator of $f\beta$; both being monic, they are equal.

7. Find the minimal polynomials and the rational forms of each of the following real matrices.
$$\begin{bmatrix}0&-1&-1\\1&0&0\\-1&0&0\end{bmatrix},\quad \begin{bmatrix}c&0&-1\\0&c&1\\-1&1&c\end{bmatrix},\quad\begin{bmatrix}\cos\theta&\sin\theta\\-\sin\theta&\cos\theta\end{bmatrix}$$
Solution: For the first matrix, we compute the characteristic polynomial
$$\begin{vmatrix}x&1&1\\-1&x&0\\1&0&x\end{vmatrix}=x^3+x-x=x^3$$
Since the matrix and its square are both non-zero, the minimal polynomial is also $x^3$. Thus the rational form of this matrix is
$$\begin{bmatrix}0&0&0\\1&0&0\\0&1&0\end{bmatrix}$$
For the second matrix we compute the characteristic polynomial
$$\begin{aligned}\begin{vmatrix}x-c&0&1\\0&x-c&-1\\1&-1&x-c\end{vmatrix}&=(x-c)[(x-c)^2-1]-(x-c)\\&=(x-c)[(x-c)^2-2]\\&=x^3-3cx^2+(3c^2-2)x-c^3+2c\end{aligned}$$
Its three roots $c$ and $c\pm\sqrt2$ are distinct, so the minimal polynomial is also $x^3-3cx^2+(3c^2-2)x-c^3+2c$. Thus the rational form of this matrix is
$$\begin{bmatrix}0&0&c^3-2c\\1&0&-3c^2+2\\0&1&3c\end{bmatrix}$$
For the third matrix we compute the characteristic polynomial
$$\begin{vmatrix}x-\cos\theta&-\sin\theta\\\sin\theta&x-\cos\theta\end{vmatrix}=x^2-2\cos\theta\,x+1$$
Provided $\sin\theta\neq 0$ (otherwise the matrix is $\pm I$, with minimal polynomial $x\mp 1$), the minimal polynomial is also $x^2-2\cos\theta\,x+1$. Thus the rational form of this matrix is $\begin{bmatrix}0&-1\\1&2\cos\theta\end{bmatrix}$.
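These hand computations can be double-checked symbolically; a sketch assuming sympy:

```python
# Verify Exercise 7 for the first matrix: the characteristic polynomial
# is x^3, and since A^2 != 0 while A^3 = 0, the minimal polynomial is x^3 too.
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[ 0, -1, -1],
               [ 1,  0,  0],
               [-1,  0,  0]])

print(A.charpoly(x).as_expr())   # x**3
print(A**2 == sp.zeros(3, 3))    # False: minimal polynomial is not x^2
print(A**3 == sp.zeros(3, 3))    # True: minimal polynomial is x^3
```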

8. Let $T$ be the linear operator on $R^3$ which is represented in the standard basis by
$$\begin{bmatrix}3&-4&-4\\-1&3&2\\2&-4&-3\end{bmatrix}.$$
Find non-zero vectors $\alpha_1,\dots,\alpha_r$ satisfying the conditions of Theorem 3.
Solution: We first compute the characteristic polynomial of $T$:
$$f=\begin{vmatrix}x-3&4&4\\1&x-3&-2\\-2&4&x+3\end{vmatrix}=\begin{vmatrix}x-3&4&4\\1&x-3&-2\\0&2x-2&x-1\end{vmatrix}=(x-1)^3$$
Now the matrix of $T-I$ is obviously not zero, and the matrix of $(T-I)^2$ is
$$\begin{bmatrix}2&-4&-4\\-1&2&2\\2&-4&-4\end{bmatrix}\begin{bmatrix}2&-4&-4\\-1&2&2\\2&-4&-4\end{bmatrix}=0$$
thus the minimal polynomial for $T$ is $p=(x-1)^2$. Since $T\epsilon_1=(3,-1,2)$ is not a scalar multiple of $\epsilon_1$, $Z(\epsilon_1;T)$ has dimension 2 and consists of all vectors
$$a\epsilon_1+bT\epsilon_1=a(1,0,0)+b(3,-1,2)=(a+3b,-b,2b)$$
So we can let $\alpha_1=\epsilon_1$. The vector $\alpha_2$ must be a characteristic vector of $T$ which is not in $Z(\epsilon_1;T)$. If $\alpha=(x_1,x_2,x_3)$, then $T\alpha=\alpha$ means $\alpha$ is of the form $(2a+2b,a,b)$; taking $a=b=1$ gives $\alpha_2=(4,1,1)$, which is not of the form $(a+3b,-b,2b)$. Then $R^3=Z(\alpha_1;T)\oplus Z(\alpha_2;T)$, with annihilators $p_1=(x-1)^2$ and $p_2=x-1$, so $p_2$ divides $p_1$.
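A numerical sanity check of these choices, assuming numpy:

```python
# Verify Exercise 8: (A - I)^2 = 0, alpha_2 = (4, 1, 1) is fixed by A,
# and {e_1, A e_1, alpha_2} is a basis, so Z(e_1; T) + Z(alpha_2; T) = R^3.
import numpy as np

A = np.array([[ 3, -4, -4],
              [-1,  3,  2],
              [ 2, -4, -3]])
N = A - np.eye(3)

print(np.allclose(N @ N, 0))       # True: minimal polynomial is (x - 1)^2
a2 = np.array([4, 1, 1])
print(np.allclose(A @ a2, a2))     # True: alpha_2 is a characteristic vector
B = np.column_stack([[1, 0, 0], A @ np.array([1, 0, 0]), a2])
print(np.linalg.matrix_rank(B))    # 3: the two cyclic subspaces fill R^3
```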

9. Let $A$ be the real matrix
$$A=\begin{bmatrix}1&3&3\\3&1&3\\-3&-3&-5\end{bmatrix}.$$
Find an invertible $3\times 3$ real matrix $P$ such that $P^{-1}AP$ is in rational form.
Solution: First compute the characteristic polynomial for $A$:
$$\begin{aligned}\det(xI-A)&=\begin{vmatrix}x-1&-3&-3\\-3&x-1&-3\\3&3&x+5\end{vmatrix}=\begin{vmatrix}x-1&-3&-3\\-3&x-1&-3\\0&x+2&x+2\end{vmatrix}\\&=(x+2)(x^2-2x+1-9+3x-3+9)\\&=(x+2)^2(x-1)\end{aligned}$$
and since
$$(A+2I)(A-I)=\begin{bmatrix}3&3&3\\3&3&3\\-3&-3&-3\end{bmatrix}\begin{bmatrix}0&3&3\\3&0&3\\-3&-3&-6\end{bmatrix}=0$$
the minimal polynomial for $A$ is $(x+2)(x-1)=x^2+x-2$.
Since $A\epsilon_1=(1,3,-3)$ is not a scalar multiple of $\epsilon_1$, one cyclic subspace is spanned by $\epsilon_1$ and $A\epsilon_1$, consisting of the vectors $(a+b,3b,-3b)$. Next choose a characteristic vector associated with the characteristic value $-2$, say $(1,1,-2)$, which lies outside that subspace, and let
$$P=\begin{bmatrix}1&1&1\\0&3&1\\0&-3&-2\end{bmatrix}$$
We have $\det P=-3\neq 0$, thus $P$ is invertible, and
$$AP=\begin{bmatrix}1&3&3\\3&1&3\\-3&-3&-5\end{bmatrix}\begin{bmatrix}1&1&1\\0&3&1\\0&-3&-2\end{bmatrix}=\begin{bmatrix}1&1&-2\\3&-3&-2\\-3&3&4\end{bmatrix}$$
The rational form of $A$ is $R=\begin{bmatrix}0&2&0\\1&-1&0\\0&0&-2\end{bmatrix}$ (the companion matrix of $x^2+x-2$ followed by the companion matrix of $x+2$), and we have
$$PR=\begin{bmatrix}1&1&1\\0&3&1\\0&-3&-2\end{bmatrix}\begin{bmatrix}0&2&0\\1&-1&0\\0&0&-2\end{bmatrix}=\begin{bmatrix}1&1&-2\\3&-3&-2\\-3&3&4\end{bmatrix}$$
so $AP=PR$, and $P$ is the matrix we need.
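A one-line numerical confirmation, assuming numpy:

```python
# Verify Exercise 9: P^{-1} A P equals the rational form R found above.
import numpy as np

A = np.array([[ 1,  3,  3],
              [ 3,  1,  3],
              [-3, -3, -5]])
P = np.array([[1,  1,  1],
              [0,  3,  1],
              [0, -3, -2]])
R = np.array([[0,  2,  0],
              [1, -1,  0],
              [0,  0, -2]])

print(np.allclose(np.linalg.inv(P) @ A @ P, R))   # True
```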

10. Let $F$ be a subfield of the complex numbers and let $T$ be the linear operator on $F^4$ which is represented in the standard ordered basis by the matrix
$$\begin{bmatrix}2&0&0&0\\1&2&0&0\\0&a&2&0\\0&0&b&2\end{bmatrix}.$$
Find the characteristic polynomial for $T$. Consider the cases $a=b=1$; $a=b=0$; $a=0,b=1$. In each of these cases, find the minimal polynomial for $T$ and non-zero vectors $\alpha_1,\dots,\alpha_r$ which satisfy the conditions of Theorem 3.
Solution: The characteristic polynomial for $T$ is $(x-2)^4$.
In the case $a=b=1$, we have $(T-2I)\epsilon_1=\epsilon_2$, $(T-2I)^2\epsilon_1=\epsilon_3$, $(T-2I)^3\epsilon_1=\epsilon_4\neq 0$, so the minimal polynomial for $T$ is $(x-2)^4$ and $\epsilon_1$ is a cyclic vector; we can take $r=1$ and $\alpha_1=\epsilon_1$.
In the cases $a=b=0$ and $a=0,b=1$, we have $T-2I\neq 0$ and $(T-2I)^2=0$, so the minimal polynomial for $T$ is $(x-2)^2$. In the case $a=b=0$, we can take $\alpha_1=\epsilon_1$, $\alpha_2=\epsilon_3$, $\alpha_3=\epsilon_4$, with annihilators $p_1=(x-2)^2$ and $p_2=p_3=x-2$, so that $Z(\alpha_1;T)=\operatorname{span}\{\epsilon_1,\epsilon_2\}$ and $F^4=Z(\alpha_1;T)\oplus Z(\alpha_2;T)\oplus Z(\alpha_3;T)$. In the case $a=0,b=1$, we can take $\alpha_1=\epsilon_1$, $\alpha_2=\epsilon_3$, with annihilators $p_1=p_2=(x-2)^2$, so that $F^4=Z(\alpha_1;T)\oplus Z(\alpha_2;T)$ with $Z(\alpha_1;T)=\operatorname{span}\{\epsilon_1,\epsilon_2\}$ and $Z(\alpha_2;T)=\operatorname{span}\{\epsilon_3,\epsilon_4\}$.
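The three cases can be distinguished mechanically; a sketch assuming numpy, using the fact that for a single characteristic value the number of cyclic summands equals the nullity of $T-2I$:

```python
# For N = A - 2I: the minimal polynomial is (x-2)^m with m the nilpotency
# index of N, and the number r of cyclic summands is dim null N = 4 - rank N.
import numpy as np

def describe(a, b):
    A = np.array([[2, 0, 0, 0],
                  [1, 2, 0, 0],
                  [0, a, 2, 0],
                  [0, 0, b, 2]], dtype=float)
    N = A - 2 * np.eye(4)
    m = next(k for k in range(1, 5)
             if np.allclose(np.linalg.matrix_power(N, k), 0))
    r = 4 - np.linalg.matrix_rank(N)
    print(f"a={a}, b={b}: minimal polynomial (x-2)^{m}, r = {r}")

for a, b in [(1, 1), (0, 0), (0, 1)]:
    describe(a, b)   # m = 4, r = 1; m = 2, r = 3; m = 2, r = 2
```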

11. Prove that if $A$ and $B$ are $3\times 3$ matrices over a field $F$, a necessary and sufficient condition that $A$ and $B$ be similar over $F$ is that they have the same characteristic polynomial and the same minimal polynomial. Give an example which shows that this is false for $4\times 4$ matrices.
Solution: If $A$ and $B$ are similar, say $B=P^{-1}AP$, then $\det(xI-B)=\det(xI-A)$, and since $f(B)=P^{-1}f(A)P$ for every polynomial $f$, the minimal polynomials coincide as well.
Conversely, suppose $A$ and $B$ have the same characteristic polynomial $f$ and the same minimal polynomial $p$. Let $R_A$ and $R_B$ be the unique matrices in rational form to which $A$ and $B$ are respectively similar.
If $\deg p=3$, then $R_A$ and $R_B$ each consist of the single companion matrix of $p$, so $R_A=R_B$.
If $\deg p=2$, then $R_A=\begin{bmatrix}A_1&0\\0&a\end{bmatrix}$ and $R_B=\begin{bmatrix}A_1&0\\0&b\end{bmatrix}$, where $A_1$ is the $2\times 2$ companion matrix of $p$, and $f/p=x-a=x-b$ means $a=b$, so $R_A=R_B$.
If $\deg p=1$, then $R_A$ and $R_B$ are scalar; since the characteristic polynomials of $A$ and $B$ are equal, $R_A=R_B$.
Since $A$ is similar to $R_A$ and $B$ is similar to $R_B=R_A$, $A$ is similar to $B$.
For a $4\times 4$ counterexample, let
$$A=\begin{bmatrix}0&-1&&\\1&2&&\\&&1&\\&&&1\end{bmatrix},\quad B=\begin{bmatrix}0&-1&&\\1&2&&\\&&0&-1\\&&1&2\end{bmatrix}$$
where $\begin{bmatrix}0&-1\\1&2\end{bmatrix}$ is the companion matrix of $(x-1)^2$. The characteristic polynomial of both $A$ and $B$ is $(x-1)^4$ and the minimal polynomial of both is $(x-1)^2$, but $A$ and $B$ are distinct matrices in rational form, hence not similar (alternatively, $\operatorname{rank}(A-I)=1\neq 2=\operatorname{rank}(B-I)$).
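A sympy check of the counterexample (sympy assumed; the rank test is one way to certify non-similarity):

```python
# A and B share characteristic polynomial (x-1)^4 and minimal polynomial
# (x-1)^2, yet rank(A - I) = 1 != 2 = rank(B - I), so they are not similar.
import sympy as sp

x = sp.symbols('x')
C = sp.Matrix([[0, -1],
               [1,  2]])                 # companion matrix of (x-1)^2
A = sp.diag(C, sp.eye(2))
B = sp.diag(C, C)
I4 = sp.eye(4)

print(A.charpoly(x).as_expr(), B.charpoly(x).as_expr())
print((A - I4)**2 == sp.zeros(4, 4), (B - I4)**2 == sp.zeros(4, 4))  # True True
print((A - I4).rank(), (B - I4).rank())  # 1 2
```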

12. Let $F$ be a subfield of the field of complex numbers, and let $A$ and $B$ be $n\times n$ matrices over $F$. Prove that if $A$ and $B$ are similar over the field of complex numbers, then they are similar over $F$.
Solution: The rational form of $A$ over $F$ is a matrix over $F$, thus also a matrix over $C$, and by the uniqueness in Theorem 5 it is the rational form of $A$ over $C$ as well; likewise for $B$. Thus if $A$ and $B$ are similar over the field of complex numbers, they have the same rational form over $C$, which means $A$ and $B$ are similar over $F$ to the same matrix in rational form, and the conclusion follows.

13. Let $A$ be an $n\times n$ matrix with complex entries. Prove that if every characteristic value of $A$ is real, then $A$ is similar to a matrix with real entries.
Solution: The characteristic polynomial for $A$ is a product of linear factors, and so is the minimal polynomial $p$ for $A$; since every characteristic value of $A$ is real, $p$ has real coefficients.
Let $T$ be the linear operator on $C^n$ which is represented by $A$ in the standard basis. Then there is an ordered basis $\mathfrak B$ for $C^n$ such that
$$[T]_{\mathfrak B}=\begin{bmatrix}A_1&&&\\&A_2&&\\&&\ddots&\\&&&A_r\end{bmatrix}$$
where each $A_i$ is the companion matrix of some polynomial $p_i$, with $p_1=p$ and $p_i\mid p$ for $i=2,\dots,r$. Since $p$ is a product of powers of real linear factors, every monic divisor of $p$ also has real coefficients, so all $p_i$ have real coefficients. This means all $A_i$ have real entries, hence so does $[T]_{\mathfrak B}$, and $A$ is similar to $[T]_{\mathfrak B}$.

14. Let $T$ be a linear operator on the finite-dimensional space $V$. Prove that there exists a vector $\alpha\in V$ with this property: if $f$ is a polynomial and $f(T)\alpha=0$, then $f(T)=0$. (Such a vector $\alpha$ is called a separating vector for the algebra of polynomials in $T$.) When $T$ has a cyclic vector, give a direct proof that any cyclic vector is a separating vector for the algebra of polynomials in $T$.
Solution: We first prove that if $\alpha$ is a cyclic vector of $T$, then $\alpha$ is a separating vector for the algebra of polynomials in $T$. Suppose $\dim V=n$; then $\alpha,\dots,T^{n-1}\alpha$ span $V$, so for any $\beta\in V$ we have $\beta=g(T)\alpha$ for some polynomial $g$. Now if $f$ is a polynomial and $f(T)\alpha=0$, we have
$$f(T)\beta=f(T)g(T)\alpha=g(T)f(T)\alpha=g(T)0=0$$
thus $f(T)=0$.
Now for an arbitrary linear operator $T$ on $V$, use the Cyclic Decomposition Theorem to write $V=Z(\alpha_1;T)\oplus\cdots\oplus Z(\alpha_r;T)$, and let $\alpha=\alpha_1+\cdots+\alpha_r$. Since each $Z(\alpha_i;T)$ is invariant under $T$, we have $f(T)\alpha_i\in Z(\alpha_i;T)$, so $f(T)\alpha=0$ together with the directness of the sum forces $f(T)\alpha_i=0$ for every $i$. As every vector in $Z(\alpha_i;T)$ has the form $g(T)\alpha_i$, it follows that $f(T)=0$ on $Z(\alpha_i;T)$ for $i=1,\dots,r$, which means $f(T)=0$ on $V$.

15. Let $F$ be a subfield of the field of complex numbers, and let $A$ be an $n\times n$ matrix over $F$. Let $p$ be the minimal polynomial for $A$. If we regard $A$ as a matrix over $C$, then $A$ has a minimal polynomial $f$ as an $n\times n$ matrix over $C$. Use a theorem on linear equations to prove $p=f$. Can you also see how this follows from the cyclic decomposition theorem?
Solution: Write $p=c_0+c_1x+\cdots+x^k$. Since $p(A)=0$ holds over $C$ as well, $f$ divides $p$, so $\deg f\leq k$. If $\deg f<k$, then the coefficients of $f$, padded with zeros, give a non-trivial solution over $C$ of the homogeneous system of $n^2$ linear equations
$$x_1I+x_2A+\cdots+x_kA^{k-1}=0$$
whose coefficients lie in $F$. By the final remark in Sec 1.4, the system then also has a non-trivial solution over $F$; that is, there is a non-zero polynomial $h$ over $F$ with $\deg h<k$ and $h(A)=0$, contradicting the minimality of $p$. Thus $\deg f=k$, and since $f$ divides $p$ and both are monic of degree $k$, we get $p=f$.
To get this result from the cyclic decomposition theorem, notice that by Exercise 12 $A$ has the same rational form over $F$ and over $C$, and the first block of the rational form is the companion matrix of the minimal polynomial, whether computed over $F$ or over $C$; hence $p=f$.

16. Let $A$ be an $n\times n$ matrix with real entries such that $A^2+I=0$. Prove that $n$ is even, and if $n=2k$, then $A$ is similar over the field of real numbers to a matrix of the block form $\begin{bmatrix}0&-I\\I&0\end{bmatrix}$ where $I$ is the $k\times k$ identity matrix.
Solution: Since $A^2+I=0$ and $x^2+1$ is irreducible over $R$, the minimal polynomial for $A$ is $x^2+1$. By the generalized Cayley-Hamilton theorem, the characteristic polynomial for $A$ must be of the form $f=(x^2+1)^k$, so $n=\deg f=2k$ is even.
If $n=2k$, we know $A$ is similar to one and only one matrix $B$ in rational form. Write
$$B=\begin{bmatrix}A_1&&\\&\ddots&\\&&A_r\end{bmatrix}$$
where each $A_i$ is the companion matrix of $p_i$, and $p_{i+1}$ divides $p_i$. From Theorem 3 we know $p_1=x^2+1$, and the only monic divisors of $x^2+1$ are $x^2+1$ and $1$. Since $1$ can only be the annihilator of the zero vector, we see that
$$B=\begin{bmatrix}A_1&&\\&\ddots&\\&&A_k\end{bmatrix},\quad A_i=\begin{bmatrix}0&-1\\1&0\end{bmatrix},\quad i=1,\dots,k$$
Let $\mathscr B=\{\epsilon_1,\dots,\epsilon_n\}$ be a basis for $R^n$ and let $T$ be the linear operator with $[T]_{\mathscr B}=B$; then
$$T\epsilon_{2i-1}=\epsilon_{2i},\quad T\epsilon_{2i}=-\epsilon_{2i-1},\quad i=1,\dots,k$$
If we let $\alpha_i=\epsilon_{2i-1}$ for $i=1,\dots,k$ and $\alpha_i=\epsilon_{2i-2k}$ for $i=k+1,\dots,n$, then $\mathscr B'=\{\alpha_1,\dots,\alpha_n\}$ is a basis for $R^n$, and since $T\alpha_i=\alpha_{k+i}$ and $T\alpha_{k+i}=-\alpha_i$ for $i=1,\dots,k$, we can verify $[T]_{\mathscr B'}=\begin{bmatrix}0&-I\\I&0\end{bmatrix}$. This means $B$ is similar to $\begin{bmatrix}0&-I\\I&0\end{bmatrix}$, and so is $A$.
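The change of basis at the end can be checked concretely; a numpy sketch for $k=2$ (the permutation trick is the same for any $k$):

```python
# B = diag(J, ..., J) with J the companion matrix of x^2 + 1. Reordering
# the basis as (e_1, e_3, ..., e_{2k-1}, e_2, e_4, ..., e_{2k}) turns B
# into the block form [[0, -I], [I, 0]].
import numpy as np

k = 2
J = np.array([[0, -1],
              [1,  0]])
B = np.kron(np.eye(k), J)                    # k diagonal copies of J

perm = list(range(0, 2 * k, 2)) + list(range(1, 2 * k, 2))
P = np.eye(2 * k)[:, perm]                   # permutation matrix
target = np.block([[np.zeros((k, k)), -np.eye(k)],
                   [np.eye(k),         np.zeros((k, k))]])

print(np.allclose(np.linalg.inv(P) @ B @ P, target))   # True
```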

17. Let $T$ be a linear operator on a finite-dimensional vector space $V$. Suppose that
(a) the minimal polynomial for $T$ is a power of an irreducible polynomial;
(b) the minimal polynomial is equal to the characteristic polynomial.
Show that no non-trivial $T$-invariant subspace has a complementary $T$-invariant subspace.
Solution: Let $W$ be a non-trivial $T$-invariant subspace of $V$, and assume there is a $T$-invariant subspace $W'$ such that $W\oplus W'=V$. The minimal polynomials $p$ of $T_W$ and $p'$ of $T_{W'}$ divide the minimal polynomial for $T$, which by (a) is of the form $q^n$ with $q$ irreducible; hence $p=q^r$ and $p'=q^s$. Moreover $r+s\leq n$: indeed $\deg q^r\leq\dim W$ and $\deg q^s\leq\dim W'$, while $\dim W+\dim W'=\dim V=\deg q^n$ by (b). As $W$ is non-trivial, we have $r\geq 1$.
Now if $s\geq 1$, we get a contradiction by the following procedure: from (b) we know that $T$ has a cyclic vector $\alpha$ whose $T$-annihilator is $q^n$, and there are $\alpha_1\in W,\alpha_2\in W'$ such that $\alpha=\alpha_1+\alpha_2$. Let $k=\max(r,s)$; then $1\leq k\leq r+s-1<n$ and $q^k(T)\alpha_1=q^k(T)\alpha_2=0$, which means $q^k(T)\alpha=0$, contradicting the fact that the $T$-annihilator of $\alpha$ is $q^n$.
Thus $s=0$, i.e., the minimal polynomial for $T_{W'}$ is $1$, which means $W'=\{0\}$ and $W=V$, contradicting the assumption that $W$ is non-trivial. Hence no non-trivial $T$-invariant subspace has a complementary $T$-invariant subspace.

18. If $T$ is a diagonalizable linear operator, then every $T$-invariant subspace has a complementary $T$-invariant subspace.
Solution: $T$ being diagonalizable means that if $c_1,\dots,c_k$ are the distinct characteristic values of $T$ and $V_i=\text{null}\,(T-c_iI)$, then
$$V=V_1\oplus\cdots\oplus V_k$$
Let $W$ be a $T$-invariant subspace of $V$; then by Exercise 10 of Section 6.8 we have
$$W=(W\cap V_1)\oplus\cdots\oplus(W\cap V_k)$$
Consider $W\cap V_i$: for any $\beta\in W\cap V_i$ we have $\beta\in V_i$, so $T\beta=c_i\beta$. Choose a basis $\{\alpha_1,\dots,\alpha_{r_i}\}$ for the subspace $W\cap V_i$ and extend it to a basis $\{\alpha_1,\dots,\alpha_{s_i}\}$ for $V_i$, all of whose vectors are characteristic vectors associated with $c_i$. Let $U_i$ be the space spanned by $\{\alpha_{r_i+1},\dots,\alpha_{s_i}\}$; then $(W\cap V_i)\oplus U_i=V_i$. Let $U=U_1\oplus\cdots\oplus U_k$; we see that $V=W\oplus U$, and as each $U_i$ is invariant under $T$ (being spanned by characteristic vectors), so is $U$.

19. Let $T$ be a linear operator on the finite-dimensional space $V$. Prove that $T$ has a cyclic vector if and only if the following is true: every linear operator $U$ which commutes with $T$ is a polynomial in $T$.
Solution: First suppose $\alpha$ is a cyclic vector of $T$. If $\dim V=n$, then $\{\alpha,T\alpha,\dots,T^{n-1}\alpha\}$ is a basis for $V$. Given an operator $U$ which commutes with $T$, we have
$$U\alpha=a_0\alpha+\cdots+a_{n-1}T^{n-1}\alpha=f(T)\alpha$$
where $f(x)=a_0+a_1x+\cdots+a_{n-1}x^{n-1}$. Notice that
$$UT^k\alpha=T^kU\alpha=T^kf(T)\alpha=f(T)T^k\alpha,\quad k=1,\dots,n-1$$
so $U=f(T)$ on a basis for $V$, thus on $V$.
Conversely, suppose every linear operator $U$ which commutes with $T$ is a polynomial in $T$. Let the cyclic decomposition of $V$ under $T$ be
$$V=Z(\alpha_1;T)\oplus\cdots\oplus Z(\alpha_r;T)$$
where $p_i$ is the $T$-annihilator of $\alpha_i$ and $p_{i+1}\mid p_i$. Define $U$ by $U\alpha=0$ if $\alpha\in Z(\alpha_1;T)$ and $U\alpha=\alpha$ if $\alpha\in Z(\alpha_i;T)$, $i=2,\dots,r$, extended by linearity. For any $\beta\in V$, we have $\beta=\beta_1+\cdots+\beta_r$ where each $\beta_i\in Z(\alpha_i;T)$, and each $T\beta_i$ stays in $Z(\alpha_i;T)$ by invariance, so
$$\begin{aligned}UT\beta&=U(T\beta_1+T\beta_2+\cdots+T\beta_r)=U(T\beta_2+\cdots+T\beta_r)\\&=T(\beta_2+\cdots+\beta_r)=T(U\beta_2+\cdots+U\beta_r)=TU\beta\end{aligned}$$
Then $U$ commutes with $T$, thus is a polynomial in $T$, say $U=q(T)$. Since $q(T)\alpha_1=U\alpha_1=0$, we know $p_1\mid q$, which means $p_i\mid q$ for $i\geq 2$, so $\alpha_i=U\alpha_i=q(T)\alpha_i=0$ for $i\geq 2$. This means $Z(\alpha_i;T)=\{0\}$ for $i\geq 2$, so $V=Z(\alpha_1;T)$ and $T$ has a cyclic vector.

20. Let $V$ be a finite-dimensional vector space over the field $F$, and let $T$ be a linear operator on $V$. We ask when it is true that every non-zero vector in $V$ is a cyclic vector for $T$. Prove that this is the case if and only if the characteristic polynomial for $T$ is irreducible over $F$.
Solution: Let $\dim V=n$. First suppose the characteristic polynomial $f$ for $T$ is irreducible over $F$. By the generalized Cayley-Hamilton theorem, the minimal polynomial $p$ for $T$ has the same prime factors as $f$, so $p=f$ is irreducible over $F$. For any non-zero vector $\alpha\in V$, if $\alpha,T\alpha,\dots,T^{n-1}\alpha$ were linearly dependent, there would be a non-zero $g\in F[x]$ with $\deg g<n$ such that $g(T)\alpha=0$. Let $p_{\alpha}$ be the $T$-annihilator of $\alpha$; then $p_{\alpha}\mid g$, so $\deg p_{\alpha}<n$, and from $\alpha\neq 0$ we know $\deg p_{\alpha}\geq 1$. But $p(T)\alpha=0$ gives $p_{\alpha}\mid p$, a contradiction to $p$ being irreducible of degree $n$. Hence $\alpha,T\alpha,\dots,T^{n-1}\alpha$ are linearly independent, i.e., $Z(\alpha;T)=V$.
Conversely, suppose every non-zero vector in $V$ is a cyclic vector for $T$, and assume the characteristic polynomial $f$ for $T$ is not irreducible over $F$. If the minimal polynomial $p$ for $T$ is not equal to $f$, then $\deg p<\deg f=n$; by Theorem 3 there is a vector $\alpha\in V$ whose $T$-annihilator is $p$, so $Z(\alpha;T)$ has dimension $\deg p<n$, which means $\alpha$ is not a cyclic vector for $T$, a contradiction.
If $p=f$ and $f=gh$ where $\deg g\geq1,\deg h\geq1$, then it is apparent that $\deg h<n$; write $h=h_0+h_1x+\cdots+x^k$. There is a vector $\alpha\in V$ whose $T$-annihilator is $p$, thus $g(T)h(T)\alpha=0$, and $\beta=g(T)\alpha\neq 0$ since $\deg g<\deg p$. Notice that
$$h_0\beta+h_1T\beta+\cdots+T^k\beta=h(T)\beta=h(T)g(T)\alpha=0$$
which shows $\beta,T\beta,\dots,T^k\beta$ are linearly dependent; thus by Theorem 1, $\dim Z(\beta;T)\leq k=\deg h<n$, so $\beta$ is not a cyclic vector for $T$, a contradiction.

21. Let $A$ be an $n\times n$ matrix with real entries. Let $T$ be the linear operator on $R^n$ which is represented by $A$ in the standard ordered basis, and let $U$ be the linear operator on $C^n$ which is represented by $A$ in the standard ordered basis. Use the result of Exercise 20 to prove the following: if the only subspaces invariant under $T$ are $R^n$ and the zero subspace, then $U$ is diagonalizable.
Solution: Since $A$ is real, the characteristic polynomials of $T$ and $U$ are equal, both being $f=\det(xI-A)$. Given any non-zero vector $\alpha\in R^n$, the cyclic space $Z(\alpha;T)$ must be $R^n$, since it is a non-zero $T$-invariant subspace; thus every non-zero vector is a cyclic vector for $T$, and by Exercise 20, $f$ is irreducible over $R$. Hence $f$ is either linear, or quadratic with no real root. In the first case $U$ is scalar, hence diagonal; in the second, over $C$ the polynomial $f$ factors into two distinct linear factors, so the minimal polynomial for $U$ is a product of distinct linear factors and $U$ is diagonalizable.
