Rank of a Matrix

Page Contents

Rank (Definition)

Linear Maps

The #{~{rank}} of a linear map _ &phi.#: V_{~n} &hdash.&rightarrow. V_{~m} , _ is defined as: _ _ rank &phi. _ = _ dim (im &phi.).

Suppose rank &phi. = dim (im &phi.) = &rho., _ then _ dim (ker &phi.) = ~n &minus. &rho. _ (since dim (im &phi.) + dim (ker &phi.) = ~n). We can rearrange a basis \{#{~u}_{~i}\} of V_{~n}, giving #{~u}_{1} ... #{~u}_{&rho.}, #{~u}_{&rho.+1} ... #{~u}_{~n}, such that #{~u}_{&rho.+1} ... #{~u}_{~n} is a basis for ker &phi.. Then &phi.#{~u}_{1} ... &phi.#{~u}_{&rho.} is a basis for im &phi..

Let _ #{~v}_{~i} = &phi.#{~u}_{~i} , _ ~i = 1 ... &rho.; _ then we can complete #{~v}_{1} ... #{~v}_{&rho.} to a basis for V_{~m} by choosing suitable _ #{~v}_{&rho.+1} ... #{~v}_{~m} .

Let &phi. have associated matrix C relative to the bases \{#{~u}_{~i}\} and \{#{~v}_{~j}\}. We have

&phi.#{~u}_{~i} _ _ = _ _ array{ #~v_~i, _ = _ &sum._~j &delta._{~i~j} #~v_~j, _ if ~i =< &rho. _ , (Kronecker delta)/ #0, _ = _ &sum._~j 0 #~v_~j, _ if ~i > &rho. _ }

I.e.

C _ = _ matrix{I_{&rho.},0/0,0} .
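As a quick numerical sketch of this canonical form (using numpy for illustration, rather than the MathymaMatrix module used elsewhere on this page, and with hypothetical sizes &rho. = 2, ~n = 4, ~m = 3):

```python
import numpy as np

# Canonical matrix C for hypothetical sizes rho = 2, n = 4, m = 3.
rho, n, m = 2, 4, 3
C = np.zeros((m, n))
C[:rho, :rho] = np.eye(rho)          # C = [[I_rho, 0], [0, 0]]

rank = np.linalg.matrix_rank(C)      # dim(im) = rho
nullity = n - rank                   # dim(ker) = n - rho
print(C)
print(rank, nullity)                 # 2 2
```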

Matrices

Let A be an ~m # ~n matrix, which induces a linear map _ &phi.#: V_{~n} &hdash.&rightarrow. V_{~m} , _ with respect to bases \{#{~u}_{~i}\} and \{#{~w}_{~j}\}.

V_{~n} \{#{~u'}_{~i}\}   zDgrmRight{i_{~n},N}   V_{~n} \{#{~u}_{~i}\}
zDgrmDown{&phi.,C}                              zDgrmDown{&phi.,A}
V_{~m} \{#{~v}_{~i}\}    zDgrmLeft{i_{~m},M}    V_{~m} \{#{~w}_{~i}\}
We can rearrange \{#{~u}_{~i}\} as \{#{~u'}_{~i}\} and define \{#{~v}_{~i}\} as above, so that &phi. has associated matrix C relative to the new bases:

C _ = _ matrix{I_{&rho.},0/0,0} .


Then A &tilde. C, i.e. &exist. regular M and N such that C = MAN _ (see diagram).
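For a concrete numerical illustration: one convenient way to exhibit such regular M and N is via the singular value decomposition, rather than the basis-rearrangement argument used above. This numpy sketch uses a made-up rank-2 matrix, not one taken from this page:

```python
import numpy as np

# Hypothetical 3x4 matrix of rank 2 (row 3 = row 1 + row 2).
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])
m, n = A.shape

U, s, Vh = np.linalg.svd(A)          # A = U S Vh, with U (m x m) and Vh (n x n) orthogonal
rho = int(np.sum(s > 1e-10))         # numerical rank

# Build regular M and N with M A N = C = [[I_rho, 0], [0, 0]].
D = np.eye(m)
D[:rho, :rho] = np.diag(1.0 / s[:rho])   # rescale the non-zero singular values to 1
M = D @ U.T                              # regular: product of regular matrices
N = Vh.T                                 # regular: orthogonal

C = M @ A @ N
print(np.round(C, 10))                   # I_rho in the top-left corner, zeros elsewhere
print(rho, np.linalg.matrix_rank(A))     # 2 2
```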

Suppose we had initially chosen different bases \{#{~x}_{~i}\}, \{#{~y}_{~j}\} say, for V_{~n} and V_{~m} respectively.
Then A would induce a different linear map _ &psi.#: V_{~n} &hdash.&rightarrow. V_{~m} . _ We will show that rank &psi. = rank &phi..

V_{~n} \{#{~u}_{~i}\}          V_{~n} \{#{~x}_{~i}\}   ====   V_{~n} \{#{~x}_{~i}\}
zDgrmDown{&phi.,A}             zDgrmDown{&chi.,A}              zDgrmDown{&psi.,A}
V_{~m} \{#{~w}_{~i}\}   ====   V_{~m} \{#{~w}_{~i}\}           V_{~m} \{#{~y}_{~i}\}
We will take this in two steps, as illustrated in the diagram above.
First consider the map &chi. induced by A relative to bases \{#{~x}_{~i}\}, \{#{~w}_{~i}\} .
#{~w} &in. im &phi. &imply. #{~w} = &phi.#{~u} _ where #{~u} = &sum._{~i} &lambda._{~i}#{~u}_{~i} say &imply. #{~w} = &sum._{~i}&sum._{~j} ~a_{~i~j}&lambda._{~j} #{~w}_{~i} &imply. #{~w} = &chi.#{~x}, where #{~x} = &sum._{~i} &lambda._{~i} #{~x}_{~i} &imply. #{~w} &in. im &chi.. _ And conversely. _ So _ im &phi. = im &chi. _ and _ rank &phi. = rank &chi..
Next consider the map &psi. induced by A relative to bases \{#{~x}_{~i}\}, \{#{~y}_{~i}\} .
#{~x} &in. ker &chi., _ #{~x} = &sum._{~i} &mu._{~i} #{~x}_{~i}, _ &chi.#{~x} = #0 &imply. &sum._{~i} &sum._{~j} ~a_{~i~j}&mu._{~j}#{~w}_{~i} = #0 &imply. &sum._{~j} ~a_{~i~j}&mu._{~j} = 0 for each ~i (the #{~w}_{~i} are linearly independent) &imply. &psi.#{~x} = &sum._{~i} &sum._{~j} ~a_{~i~j}&mu._{~j}#{~y}_{~i} = #0 . _ And conversely. _ So _ ker &chi. = ker &psi., _ and, since rank + nullity = ~n for both maps, _ rank &chi. = rank &psi..

So all the linear maps induced by a given matrix A have the same rank. This justifies the following definition:

The #{~{rank}} of a matrix A is dim (im &phi.), where &phi. is any linear map induced by A.
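Equivalently, rank A is the number of linearly independent columns of A, since these span im &phi.. A minimal numpy check (a made-up matrix, not one from this page) whose third column is the sum of the first two:

```python
import numpy as np

# Hypothetical example: column 3 = column 1 + column 2, so only 2 independent columns.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])
print(np.linalg.matrix_rank(A))   # 2
```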

Equivalence

If two matrices (of the same dimension) are equivalent, then they have the same rank.
Conversely if two matrices (of the same dimension) have the same rank then they are equivalent.

A &tilde. B and rank A = &rho. &imply. A &tilde. C (as defined above) &imply. B &tilde. C (&tilde. is an equivalence relation) &imply. rank B = rank C = &rho. (B then represents, relative to suitable bases, the same linear map as C, which has rank &rho.).
rank A = rank B = &rho. &imply. A &tilde. C and B &tilde. C &imply. B &tilde. A (by symmetry and transitivity of &tilde.).
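A quick numerical check of the first statement (numpy, hypothetical matrices): forming an equivalent matrix MAN with regular M and N leaves the rank unchanged.

```python
import numpy as np

# Hypothetical rank-2 matrix A and regular M, N (non-zero determinants).
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])
M = np.array([[2., 1., 0.],
              [0., 1., 3.],
              [1., 0., 1.]])                   # det = 5, so regular
N = np.eye(4) + np.diag([1., 1., 1.], k=1)     # upper triangular, det = 1, so regular

B = M @ A @ N                                  # B ~ A by construction
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))   # 2 2
```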



Transpose of a Matrix

A = (~a_{~i~j} )_{~i = 1 ... ~m, ~j = 1 ... ~n} is an ~m # ~n matrix. The ~n # ~m matrix A^T = (~a^T_{~j~i} )_{~j = 1 ... ~n, ~i = 1 ... ~m}, where ~a^T_{~j~i} = ~a_{~i~j}, _ is called the #{~{transpose}} of A.

Example:

A _ = _ zEval{wMatA.McPrint()} _ _ _ _ _ _ A^T _ = _ zEval{wMatA.Trans().McPrint()}

#{Lemma}: If matrices A and B are conformable for multiplication, i.e. AB exists say, then (AB)^T = B^TA^T.

Proof:
Suppose A = (~a_{~i~j} ) is ~m # ~n _ and _ B = (~b_{~j~l} ) is ~n # ~p, then
AB = (&sum._{~j} ~a_{~i~j} ~b_{~j~l} )_{~i, ~l}
(AB)^T = (&sum._{~j} ~a_{~i~j} ~b_{~j~l} )_{~l, ~i} = (&sum._{~j} ~b_{~j~l} ~a_{~i~j} )_{~l, ~i} = (&sum._{~j} ~b^T_{~l~j} ~a^T_{~j~i} )_{~l, ~i} = B^TA^T.
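A minimal numpy check of the lemma on two small made-up matrices (not the ones printed below):

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [3., 1., 4.]])          # 2 x 3
B = np.array([[1., 0.],
              [2., 1.],
              [0., 3.]])              # 3 x 2, so AB exists

print(np.array_equal((A @ B).T, B.T @ A.T))   # True: (AB)^T = B^T A^T
```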

Example:

A _ = _ zEval{wMatA.McPrint()} _ _ _ _ _ _ B _ = _ zEval{wMatB.McPrint()}

(AB)^T _ = _ script{rndb{zEval{wMatA.McPrint()} zEval{wMatB.McPrint()}},,,T,}

_ _ _ _ _ _ _ = _ script{zEval{wMatA.Mult(wMatB).McPrint()},,,T,} _ = _ zEval{wMatA.Mult(wMatB).Trans().McPrint()}

B^TA^T _ = _ zEval{wMatB.Trans().McPrint()} zEval{wMatA.Trans().McPrint()} _ = _ zEval{wMatB.Trans().Mult(wMatA.Trans()).McPrint()}

The examples on this page have been written using MathymaMatrix, a JavaScript module for doing matrix algebra on a Web page. If you have any matrix calculation problems, why not take a look?

Rank of Transpose

If M is a regular ~m # ~m matrix, then M^T(M^{&minus.1})^T = (M^{&minus.1}M)^T = (I_{~m} )^T = I_{~m} .
So M^T is regular and (M^{&minus.1})^T = (M^T)^{&minus.1}.

If rank A = &rho., i.e.

MAN _ = _ C _ = _ matrix{I_{&rho.},0/0,0} _ _ _ _ _ M, N regular,


then C^T is the ~n # ~m matrix of the same form (I_{&rho.} in the top left corner, zeros elsewhere), so rank C^T = &rho.. _ Now N^TA^TM^T = (MAN)^T = C^T, and N^T and M^T are regular (by the above), so _ rank A^T = rank C^T = &rho. = rank A.
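Again a small numpy check on hypothetical matrices: the transpose of a regular matrix is regular with (M^{&minus.1})^T = (M^T)^{&minus.1}, and a matrix and its transpose have the same rank.

```python
import numpy as np

M = np.array([[2., 1., 0.],
              [0., 1., 3.],
              [1., 0., 1.]])                  # regular (det = 5)
print(np.allclose(np.linalg.inv(M).T,
                  np.linalg.inv(M.T)))        # True: (M^-1)^T = (M^T)^-1

A = np.array([[1., 2., 0., 1.],               # hypothetical rank-2 matrix
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])
print(np.linalg.matrix_rank(A),
      np.linalg.matrix_rank(A.T))             # 2 2
```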