Tag Archives: determinant

SL(2,ℝ) is the commutator subgroup of GL(2,ℝ)

Here is a proof of the above fact.

Let N be the commutator subgroup of the general linear group GL(2,\mathbb R); i.e.,

N=\langle ABA^{-1}B^{-1}:A,B\in GL(2,\mathbb R)\rangle.

First, it is clear that N is contained in the special linear group SL(2,\mathbb R), since \det(ABA^{-1}B^{-1})=1 for any A,B\in GL(2,\mathbb R). Next, we claim that N contains all matrices

\begin{pmatrix} 1 & b\\ 0 & 1\end{pmatrix}.

This follows from noting that

\begin{pmatrix} 1 & b\\ 0 & 1\end{pmatrix}=\begin{pmatrix} 2 & 0\\ 0 & 1\end{pmatrix}\begin{pmatrix} 1 & b\\ 0 & 1\end{pmatrix}\begin{pmatrix} 2 & 0\\ 0 & 1\end{pmatrix}^{-1}\begin{pmatrix} 1 & b\\ 0 & 1\end{pmatrix}^{-1}.

By taking transposes, it also follows that N contains all matrices

\begin{pmatrix} 1 & 0\\ c & 1\end{pmatrix}.

Further, N contains all matrices

\begin{pmatrix} a & 0\\ 0 & 1/a\end{pmatrix}

since

\begin{pmatrix} a & 0\\ 0 & 1/a\end{pmatrix}=\begin{pmatrix} a & 0\\ 0 & 1\end{pmatrix}\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}\begin{pmatrix} a & 0\\ 0 & 1\end{pmatrix}^{-1}\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}^{-1}

for any a\neq 0.

Now let

\begin{pmatrix} a & b\\ c & d\end{pmatrix}\in SL(2,\mathbb R).

Then ad-bc=1. Using the above results,

\begin{pmatrix} a & b\\ c & d\end{pmatrix}=\begin{pmatrix} 1 & 0\\ c/a & 1\end{pmatrix}\begin{pmatrix} 1 & ab\\ 0 & 1\end{pmatrix}\begin{pmatrix} a & 0\\ 0 & 1/a\end{pmatrix}\in N

if a\neq 0, and

\begin{pmatrix} a & b\\ c & d\end{pmatrix}=\begin{pmatrix}0&1\\-1&0\end{pmatrix}\begin{pmatrix}1&-\frac{d}{b}\\ 0&1\end{pmatrix}\begin{pmatrix}1&0\\ ab&1\end{pmatrix}\begin{pmatrix}1/b&0\\ 0&b\end{pmatrix}\in N

if b\neq 0. (Since ad-bc=1, a and b cannot both be zero, so these two cases cover all of SL(2,\mathbb R).) The first factor in the second decomposition lies in N because its inverse is a commutator:

\begin{pmatrix} 0 & -1\\ 1 & 0\end{pmatrix}=\begin{pmatrix}x&y\\0&-x-y\end{pmatrix}\begin{pmatrix}-x-y&0\\ x&y\end{pmatrix}\begin{pmatrix}x&y\\0&-x-y\end{pmatrix}^{-1}\begin{pmatrix}-x-y&0\\ x&y\end{pmatrix}^{-1}\in N

for any x,y with x,y,x+y\neq 0 (so that both factors are invertible). Thus SL(2,\mathbb R)\subseteq N, i.e., N=SL(2,\mathbb R).
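Here is a quick numerical sanity check of the identities above (a Python/NumPy sketch; the helper comm and the sample values are my own, not part of the proof):

```python
import numpy as np

def comm(A, B):
    """Commutator A B A^{-1} B^{-1}."""
    return A @ B @ np.linalg.inv(A) @ np.linalg.inv(B)

# A shear [[1, b], [0, 1]] as a commutator in GL(2, R).
b = 3.7
assert np.allclose(comm(np.diag([2.0, 1.0]), np.array([[1.0, b], [0.0, 1.0]])),
                   [[1.0, b], [0.0, 1.0]])

# [[0, -1], [1, 0]] as a commutator (here x = 1, y = 2).
x, y = 1.0, 2.0
X = np.array([[x, y], [0.0, -x - y]])
Y = np.array([[-x - y, 0.0], [x, y]])
assert np.allclose(comm(X, Y), [[0.0, -1.0], [1.0, 0.0]])

# The b != 0 factorisation of an element of SL(2, R).
a, b, c = 2.0, 0.5, -1.0
d = (1 + b * c) / a                      # enforce ad - bc = 1
F = (np.array([[0.0, 1.0], [-1.0, 0.0]])
     @ np.array([[1.0, -d / b], [0.0, 1.0]])
     @ np.array([[1.0, 0.0], [a * b, 1.0]])
     @ np.array([[1.0 / b, 0.0], [0.0, b]]))
assert np.allclose(F, [[a, b], [c, d]])
```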


Minkowski’s criterion

Here is a related result that I find interesting.

In linear algebra, Minkowski's criterion states the following.

Theorem (Minkowski’s Criterion). Let A be an n\times n matrix with real entries such that its diagonal entries are all positive, its off-diagonal entries are all negative, and its row sums are all positive. Then \det(A)\neq 0.
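For instance (a quick check in Python; the sample matrix is mine, chosen to satisfy the hypotheses):

```python
import numpy as np

# Positive diagonal, negative off-diagonal entries, positive row sums.
A = np.array([[ 3.0, -1.0, -1.5],
              [-0.5,  2.0, -1.0],
              [-1.0, -2.0,  3.5]])
assert (np.diag(A) > 0).all() and (A.sum(axis=1) > 0).all()
print(np.linalg.det(A))   # 7.75, nonzero as the criterion predicts
```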

This is a nice criterion and not very difficult to prove, but its hypotheses are too restrictive to apply to a typical matrix. To decide whether a matrix is singular, one usually looks for a row or column consisting of zeros or adding up to zero. The following result gives sufficient conditions under which this heuristic is exact. Unfortunately, it does not generalise Minkowski's result.

Theorem. Let A be an n\times n matrix with real entries such that its row sums are all \ge 0, its entries below the main diagonal are all \ge 0, and its entries above the main diagonal are all \le 0. Then \det(A)=0 if and only if A has either a row consisting entirely of zeros or all row sums equal to zero.

Proof. Suppose that \det(A)=0, so that Ab=0 for some b=(b_1,\dots,b_n)^T\neq 0. Reordering the coordinates (permuting the entries of b and the columns of A accordingly), we may assume that b_1\ge\cdots\ge b_n; write a_{i,j}' for the entries of the column-permuted matrix. Then there exists 1\le m<n such that a_{1,1}',\dots,a_{1,m}'\ge 0 and a_{1,m+1}',\dots,a_{1,n}'\le 0. Hence

\begin{aligned}0=\sum_{j=1}^na_{1,j}'b_j&\ge b_m\sum_{j=1}^ma_{1,j}'+b_{m+1}\sum_{j=m+1}^na_{1,j}'\\&\ge (b_m-b_{m+1})\sum_{j=1}^ma_{1,j}'\ge 0.\end{aligned}

So we must have (i) b_1=\cdots=b_m, (ii) b_{m+1}=\cdots=b_n, (iii) a_{1,1}'+\cdots+a_{1,n}'=0 and (iv) either b_m=b_{m+1} or a_{1,1}'+\cdots+a_{1,m}'=0. These boil down to having either b_1=\cdots=b_n or a_{1,1}'=\cdots=a_{1,n}'=0. Apply this argument to each row of A: if b_1=\cdots=b_n, then (since b\neq 0) the equation Ab=0 forces all the row sums to vanish; otherwise every row of A consists entirely of zeros. Conversely, if A has a zero row then \det(A)=0, and if all the row sums are zero then A(1,\dots,1)^T=0, so \det(A)=0 as well. \square
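A small numerical illustration of the dichotomy (Python sketch; the matrices are my own examples satisfying the sign hypotheses):

```python
import numpy as np

# Entries below the diagonal >= 0, above the diagonal <= 0, row sums >= 0.
# All row sums are zero, so A(1, ..., 1)^T = 0 and det(A) = 0.
A = np.array([[2.0, -1.0, -1.0],
              [1.0,  1.0, -2.0],
              [3.0,  0.0, -3.0]])
assert np.isclose(np.linalg.det(A), 0.0)

# Make one row sum positive (no zero rows, not all row sums zero):
# the determinant is now nonzero, as the theorem predicts.
B = A + np.diag([1.0, 0.0, 0.0])
assert not np.isclose(np.linalg.det(B), 0.0)
```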


Some interesting linear algebra proofs

Below are some cute linear algebra results and proofs cherry-picked from various sources. All the standard hypotheses (on the base field, the size of the matrices, etc.) that make the claims valid are assumed. The list will likely be updated.

Fact 1. Let A,B,X be matrices. If AX=XB, then p(A)X=Xp(B) for any polynomial p.

Proof. We have A^2X=A(AX)=A(XB)=(AX)B=(XB)B=XB^2. By induction, A^nX=XB^n for any n\in\mathbb N, and the result follows by taking linear combinations. \square

Remark. Note that X need not be square, let alone invertible.
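A quick check of Fact 1 with a non-square X (Python sketch; the matrices and the polynomial are my own examples):

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 3.0]])   # 2 x 2
B = np.array([[3.0]])                    # 1 x 1
X = np.array([[1.0], [1.0]])             # 2 x 1: not square, not invertible
assert np.allclose(A @ X, X @ B)         # AX = XB

def p(M):
    # p(t) = t^2 - 4t + 1 evaluated at a square matrix M
    return M @ M - 4 * M + np.eye(M.shape[0])

assert np.allclose(p(A) @ X, X @ p(B))   # p(A)X = Xp(B)
```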

Fact 2. Let x_0,\dots,x_n be distinct. Then the Vandermonde matrix

\displaystyle V=\begin{pmatrix} 1 & x_0 & \cdots & x_0^n\\ 1 & x_1 & \cdots & x_1^n\\ \vdots & \vdots & \ddots & \vdots\\ 1 & x_n & \cdots & x_n^n\end{pmatrix}

is invertible.

Proof. It suffices to show that the kernel of the linear transformation f(x)=Vx is trivial. If a=(a_0,\dots,a_n) is in the kernel, then p(X)=a_0+a_1X+\cdots+a_nX^n vanishes at each X\in\{x_0,\dots,x_n\}. Since \deg(p)\le n while p has n+1 distinct roots, p must be identically zero. Thus a=0. \square
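As a quick sanity check (Python sketch; note that np.vander builds exactly this matrix):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])       # distinct nodes
V = np.vander(x, increasing=True)        # rows (1, x_i, ..., x_i^n)
print(np.linalg.det(V))                  # 12.0: nonzero since the x_i are distinct
```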

Fact 3. A matrix is diagonalisable iff its minimal polynomial decomposes into distinct linear factors.

Proof. A matrix is diagonalisable iff every Jordan block has size 1. Since the multiplicity of an eigenvalue in the minimal polynomial corresponds to the size of the largest Jordan block, the result follows. \square

Corollary. Idempotent matrices are diagonalisable. Moreover, the rank of an idempotent matrix is equal to the algebraic multiplicity of the eigenvalue 1.
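To illustrate the corollary (a SymPy sketch; the projection P is my own example, and for it the characteristic and minimal polynomials happen to coincide):

```python
from sympy import Matrix, symbols

t = symbols('t')
P = Matrix([[1, 1], [0, 0]])             # idempotent: P^2 = P
assert P * P == P

print(P.charpoly(t).as_expr())           # t**2 - t = t(t - 1): distinct linear factors
assert P.is_diagonalizable()
assert P.rank() == P.eigenvals()[1]      # rank = multiplicity of eigenvalue 1
```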

Fact 4. If A^nx=0 and A^{n-1}x\neq 0, then x,Ax,\dots,A^{n-1}x are linearly independent.

Proof. Note that \ker(A^{n-1})\subseteq\ker(A^n). Let x\in\ker(A^n)\setminus\ker(A^{n-1}) and suppose that a_0x+a_1Ax+\cdots+a_{n-1}A^{n-1}x=0. Multiplying both sides by A^{n-1} kills every term except a_0A^{n-1}x, so a_0=0; multiplying instead by A^{n-2} then gives a_1=0, and so on, showing that a_i=0 for all i, as desired. \square

Corollary 1. If \ker(A^n)\neq\ker(A^{n-1}), then \dim(\ker(A^j))\ge j for j=1,\dots,n.

Proof. If x\in\ker(A^n)\setminus\ker(A^{n-1}), then A^{n-1}x,\dots,A^{n-j}x\in\ker(A^j), and these j vectors are linearly independent by Fact 4. \square

Corollary 2. If A is n\times n, and \ker(A^n)\neq\ker(A^{n-1}), then \dim(\ker(A^j))=j for each 0\le j\le n. In particular, A is similar to the nilpotent Jordan block of size n.
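These facts are easy to test numerically on the nilpotent Jordan block itself (Python sketch; the example is mine):

```python
import numpy as np

n = 4
A = np.eye(n, k=1)                       # nilpotent Jordan block: A^n = 0, A^(n-1) != 0
x = np.zeros(n)
x[-1] = 1.0                              # x in ker(A^n) \ ker(A^(n-1))
assert not np.allclose(np.linalg.matrix_power(A, n - 1) @ x, 0.0)

# x, Ax, ..., A^(n-1) x are linearly independent: full-rank Krylov matrix.
K = np.column_stack([np.linalg.matrix_power(A, i) @ x for i in range(n)])
assert np.linalg.matrix_rank(K) == n
```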

Fact 5. If f is linear on V, then V\cong\ker(f)\oplus f(V).

Proof. The short exact sequence

0\to\ker(f)\hookrightarrow V\twoheadrightarrow f(V)\to 0

is split (every short exact sequence of vector spaces splits, since a basis of f(V) can be lifted to V). The result then follows from the splitting lemma. \square

Corollary. If W\subseteq V is a subspace, then V\cong W\oplus W^\perp.

Fact 6. If A is n\times n, then r(A)\ge n-k, where k is the algebraic multiplicity of the eigenvalue 0 of A.

Proof. Since the nullity of A is the geometric multiplicity of the eigenvalue 0, and the geometric multiplicity of an eigenvalue is at most its algebraic multiplicity, we get r(A)=n-n(A)\ge n-k. \square

Fact 7. The number of distinct eigenvalues of A is at most r(A)+1.

Proof. Eigenvectors corresponding to distinct eigenvalues are linearly independent, and any eigenvector v for a non-zero eigenvalue \lambda lies in the column space of A, since v=A(\lambda^{-1}v). Hence the number of distinct non-zero eigenvalues is at most r(A), and allowing additionally for the eigenvalue 0 gives the bound. \square
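Note that r(A) can strictly exceed the number of distinct non-zero eigenvalues, e.g., for a nilpotent matrix, so rank only witnesses the upper bound here (a quick Python check; example mine):

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # rank 1, but both eigenvalues are 0
assert np.linalg.matrix_rank(A) == 1
assert np.allclose(np.linalg.eigvals(A), 0.0)
# Number of distinct eigenvalues (just {0}) <= r(A) + 1 = 2, as Fact 7 asserts.
```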


Discriminants and lattices

Let K=\mathbb Q(\alpha) be a quadratic number field. For a,b\in K, recall that the discriminant \Delta(a,b) is defined as

\displaystyle \Delta(a,b):=\left|\begin{matrix} a^{(1)} & a^{(2)}\\ b^{(1)} & b^{(2)}\end{matrix}\right|^2

where a^{(1)}, a^{(2)} are the Galois conjugates of a and b^{(1)}, b^{(2)} are those of b. For any \beta\in K we define its discriminant to be \Delta(\beta):=\Delta(1,\beta).

Write a=a_1+a_2\alpha and b=b_1+b_2\alpha. Then

\left(\begin{matrix} a\\ b\end{matrix}\right)=\underbrace{\left(\begin{matrix} a_1 & a_2\\ b_1 & b_2\end{matrix}\right)}_{A}\left(\begin{matrix} 1\\\alpha\end{matrix}\right)

If \alpha,\bar\alpha are the Galois conjugates of \alpha, then

\Delta(a,b)=\left|\begin{matrix} a_1+a_2\alpha & a_1+a_2\bar\alpha\\ b_1+b_2\alpha & b_1+b_2\bar\alpha\end{matrix}\right|^2=\left|\begin{matrix} a_1 & a_2\\ b_1 & b_2\end{matrix}\right|^2\left|\begin{matrix} 1 & 1\\\alpha & \bar\alpha\end{matrix}\right|^2

\therefore\boxed{\Delta(a,b)=(\det A)^2\Delta(\alpha)}
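A quick symbolic check of the boxed identity in K=\mathbb Q(\sqrt 2) (SymPy sketch; the function disc and the sample a,b are mine):

```python
from sympy import sqrt, Matrix, expand

alpha, alpha_bar = sqrt(2), -sqrt(2)     # K = Q(sqrt(2)); conjugation negates alpha

def disc(a1, a2, b1, b2):
    # Delta(a, b) for a = a1 + a2*alpha, b = b1 + b2*alpha
    M = Matrix([[a1 + a2 * alpha, a1 + a2 * alpha_bar],
                [b1 + b2 * alpha, b1 + b2 * alpha_bar]])
    return expand(M.det() ** 2)

a1, a2, b1, b2 = 3, 1, 1, 2              # a = 3 + sqrt(2), b = 1 + 2*sqrt(2)
detA = a1 * b2 - a2 * b1                 # det A = 5
assert disc(a1, a2, b1, b2) == detA**2 * disc(1, 0, 0, 1)   # = (det A)^2 * Delta(alpha)
```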

Now suppose that \mathbb Z[\alpha]=a\mathbb Z+b\mathbb Z. Then a,b\in\mathbb Z[\alpha], so the matrix A above has integer entries; moreover 1 and \alpha are integer combinations of a and b, so there are integers p,q,r,s such that

\underbrace{\left(\begin{matrix} p & q\\ r & s\end{matrix}\right)}_{M}\left(\begin{matrix} a\\ b\end{matrix}\right)=\left(\begin{matrix} 1\\\alpha\end{matrix}\right)

So we have

MA\left(\begin{matrix} 1\\\alpha\end{matrix}\right)=\left(\begin{matrix} 1\\\alpha\end{matrix}\right).

Lemma. If P is a 2\times 2 matrix with integer coefficients and w=(1,\alpha)^T with \alpha\not\in\mathbb Q, then Pw=w if and only if P=I, the 2\times 2 identity matrix.

Proof. This follows from the \mathbb Q-linear independence of \{1,\alpha\} (i.e., from \alpha\notin\mathbb Q). More concretely,

\underbrace{\left(\begin{matrix} s & t\\ u & v\end{matrix}\right)}_{P}\left(\begin{matrix}1\\\alpha\end{matrix}\right)=\left(\begin{matrix} 1\\\alpha\end{matrix}\right)\Rightarrow\begin{cases}s+t\alpha=1\\ u+v\alpha=\alpha\end{cases}

\therefore s=1,\ t=0,\ u=0,\ v=1\Rightarrow P=I. \square

Thus MA=I, so that \det(M)\det(A)=1. But \det(M) and \det(A) are integers. Hence |\det(M)|=|\det(A)|=1, i.e. \Delta(a,b)=\Delta(\alpha). Conversely, if \Delta(a,b)=\Delta(\alpha), then (\det A)^2=1 since \Delta(\alpha)\neq 0 (as \alpha\neq\bar\alpha); hence A^{-1} has integer entries and 1,\alpha\in a\mathbb Z+b\mathbb Z, so \{a,b\} spans \mathbb Z[\alpha]. Thus

Fact. \{a,b\}\subset\mathbb Z[\alpha] spans \mathbb Z[\alpha] if and only if \Delta(a,b)=\Delta(\alpha).

Note that all of the above arguments generalise to arbitrary number fields.

A nice corollary:

Corollary. (a,b) and (c,d) generate \mathbb Z^2 (as a group) if and only if

\left|\begin{matrix} a & b\\ c & d\end{matrix}\right|=\pm 1.

In particular, two bases can generate the same lattice only if their fundamental parallelograms have equal areas.
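The corollary is easy to test directly: the rows of an integer matrix generate \mathbb Z^2 exactly when the inverse matrix is again integral (a Python sketch; the helper is mine):

```python
from fractions import Fraction

def generates_Z2(a, b, c, d):
    # Rows (a, b), (c, d) generate Z^2 iff [[a, b], [c, d]] has an integer
    # inverse, which happens iff the determinant is +1 or -1.
    det = a * d - b * c
    if det == 0:
        return False
    inv = [Fraction(d, det), Fraction(-b, det), Fraction(-c, det), Fraction(a, det)]
    return all(x.denominator == 1 for x in inv)

assert generates_Z2(2, 1, 1, 1)          # det = 1
assert not generates_Z2(2, 0, 0, 1)      # det = 2: the index-2 sublattice 2Z x Z
```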
