Proof that the determinant of a product of square matrices equals the product of their determinants, using the Leibniz formula

Let us consider the square, $n$-dimensional matrix $A$.

(1)   \begin{equation*} \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \end{equation*}

It is possible to re-express the matrix as a column vector whose entries are its rows.

(2)    \begin{equation*} \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} = \begin{pmatrix} a_{1}\\ a_{2}\\ \vdots \\ a_{n} \end{pmatrix} \end{equation*}

where $a_{1}= \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \end{pmatrix}$, and similarly for the other rows.

By the definition of matrix multiplication, the product of the matrix $A$ with another $n$-dimensional square matrix $B$ can then be expressed as follows.

(3)    \begin{equation*} A B = \begin{pmatrix} a_{1}B\\ a_{2}B\\ \vdots \\ a_{n}B \end{pmatrix} \end{equation*}
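As a concrete illustration (not part of the original derivation), for $n=2$ the first row of the product is just the first row of $A$ multiplied into $B$:

\begin{equation*} a_{1}B = \begin{pmatrix} a_{11} & a_{12} \end{pmatrix} \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} = \begin{pmatrix} a_{11}b_{11}+a_{12}b_{21} & a_{11}b_{12}+a_{12}b_{22} \end{pmatrix} \end{equation*}

which is exactly the first row of $AB$.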

Each row of the matrix $A$ can also be represented by the following sum, where $E_{i}$ denotes the $i$th row of the $n$-dimensional identity matrix (the standard basis vector written as a row). This works because each term $a_{\gamma i}E_{i}$ places the entry $a_{\gamma i}$ in column $i$ and zeros everywhere else, so the sum reproduces the row exactly.

(4)    \begin{equation*} a_{\gamma}= \sum_{i=1}^{n} a_{\gamma i}E_{i} \end{equation*}
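For example, taking $n=3$ and $\gamma = 1$ (an illustrative case, not from the original post):

\begin{equation*} a_{1} = a_{11}\begin{pmatrix} 1 & 0 & 0 \end{pmatrix} + a_{12}\begin{pmatrix} 0 & 1 & 0 \end{pmatrix} + a_{13}\begin{pmatrix} 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \end{pmatrix} \end{equation*}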

Therefore the product of the two square matrices can be expressed as follows.

(5)    \begin{equation*} AB = \begin{pmatrix} \sum_{i_{1}=1}^{n} a_{1i_{1}}E_{i_{1}}B\\ \sum_{i_{2}=1}^{n} a_{2i_{2}}E_{i_{2}}B\\ \vdots \\ \sum_{i_{n}=1}^{n} a_{ni_{n}}E_{i_{n}}B \end{pmatrix} \end{equation*}

So the following determinant must be evaluated.

(6)    \begin{equation*} \left | AB \right |= \begin{vmatrix} \sum_{i_{1}=1}^{n} a_{1i_{1}}E_{i_{1}}B\\ \sum_{i_{2}=1}^{n} a_{2i_{2}}E_{i_{2}}B\\ \vdots \\ \sum_{i_{n}=1}^{n} a_{ni_{n}}E_{i_{n}}B \end{vmatrix} \end{equation*}

This can be simplified using the fact that the determinant is linear in each row: scaling a row scales the determinant by the same factor, and a row that is a sum of vectors splits the determinant into a sum of determinants. Applying this to each row in turn, the coefficients and sums can be pulled out of the determinant until the following is reached.

(7)    \begin{equation*} \left | AB \right |= \sum_{i_{1}=1}^{n} \cdots \sum_{i_{n}=1}^{n} a_{1i_{1}}a_{2i_{2}}\cdots a_{ni_{n}} \begin{vmatrix} E_{i_{1}}B\\ E_{i_{2}}B\\ \vdots \\ E_{i_{n}}B \end{vmatrix} \end{equation*}
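To make the expansion concrete, an illustrative $n=2$ case (not in the original post) contains $2^{2}=4$ terms:

\begin{equation*} \left | AB \right | = \sum_{i_{1}=1}^{2} \sum_{i_{2}=1}^{2} a_{1i_{1}}a_{2i_{2}} \begin{vmatrix} E_{i_{1}}B\\ E_{i_{2}}B \end{vmatrix} = a_{11}a_{21}\begin{vmatrix} E_{1}B\\ E_{1}B \end{vmatrix} + a_{11}a_{22}\begin{vmatrix} E_{1}B\\ E_{2}B \end{vmatrix} + a_{12}a_{21}\begin{vmatrix} E_{2}B\\ E_{1}B \end{vmatrix} + a_{12}a_{22}\begin{vmatrix} E_{2}B\\ E_{2}B \end{vmatrix} \end{equation*}

In general the expansion contains $n^{n}$ terms before any simplification.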

As in the derivation of the Leibniz formula, consider the case $i_{j}=i_{k}$ for some $j \neq k$. In these cases the determinant in the sum above is zero, since it contains two identical rows. Therefore the only terms in equation $7$ that survive are those for which $(i_{1}, i_{2}, ..., i_{n})$ is a permutation of $(1, 2, ..., n)$. Let $S_{n}$ denote the set of all such permutations and let $\pi$ be an element of $S_{n}$, so that $\pi(\gamma)$ is the image of $\gamma$ under the permutation. The determinant can then be further simplified.

(8)    \begin{equation*} \left | AB \right |= \sum_{\pi \in S_{n}} a_{1\pi(1)}a_{2\pi(2)}\cdots a_{n\pi(n)} \begin{vmatrix} E_{\pi(1)}B\\ E_{\pi(2)}B\\ \vdots \\ E_{\pi(n)}B \end{vmatrix} \end{equation*}
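Continuing the illustrative $n=2$ case, $S_{2}$ contains only the identity and the swap $(1\,2)$, so just two of the four terms survive:

\begin{equation*} \left | AB \right | = a_{11}a_{22}\begin{vmatrix} E_{1}B\\ E_{2}B \end{vmatrix} + a_{12}a_{21}\begin{vmatrix} E_{2}B\\ E_{1}B \end{vmatrix} \end{equation*}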

The determinant on the right-hand side of equation $8$ is now the determinant of the matrix $B$ with its rows permuted. To restore the original row order of $B$, a sequence of transpositions (row swaps) must be made, and each transposition changes the determinant by a factor of $-1$. The function $\mathrm{sgn}(\pi)$ records whether a given permutation corresponds to an even or odd number of transpositions, assigning it $+1$ or $-1$ accordingly (in 3D, $+1$ corresponds to a right-handed set). Hence each permuted determinant equals $\mathrm{sgn}(\pi)$ times the determinant of $B$, and the sum becomes the following.

(9)    \begin{equation*} \left | AB \right |= \sum_{\pi \in S_{n}} \mathrm{sgn}(\pi)\, a_{1\pi(1)}a_{2\pi(2)}\cdots a_{n\pi(n)} \left | B \right | \end{equation*}
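In the illustrative $n=2$ case, the identity has $\mathrm{sgn}=+1$ and the swap has $\mathrm{sgn}=-1$, giving

\begin{equation*} \left | AB \right | = \left ( a_{11}a_{22} - a_{12}a_{21} \right ) \left | B \right | = \left | A \right | \left | B \right | \end{equation*}

which is the familiar $2 \times 2$ result.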

Returning to the general case, we recognise the remaining sum in equation $9$ as the Leibniz formula for the determinant of the matrix $A$, multiplied by the determinant of $B$.

(10)    \begin{equation*} \left | AB \right |= \sum_{\pi \in S_{n}} \mathrm{sgn}(\pi)\, a_{1\pi(1)}a_{2\pi(2)}\cdots a_{n\pi(n)} \left | B \right | = \left | A\right| \left | B \right| \end{equation*}
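For readers who want a quick numerical sanity check, here is a minimal Python sketch (not part of the original post) that evaluates the determinant directly from the Leibniz formula and confirms that $\left | AB \right | = \left | A \right | \left | B \right |$; it assumes NumPy is available, and the function name leibniz_det is just an illustrative choice.

import itertools
import numpy as np

def leibniz_det(M):
    # Determinant via the Leibniz formula:
    # |M| = sum over permutations pi of sgn(pi) * M[0, pi(0)] * ... * M[n-1, pi(n-1)].
    n = M.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(n)):
        # sgn(pi): +1 for an even number of inversions, -1 for an odd number.
        inversions = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        sign = -1.0 if inversions % 2 else 1.0
        prod = 1.0
        for i in range(n):
            prod *= M[i, perm[i]]
        total += sign * prod
    return total

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# |AB| should agree with |A||B| up to floating-point error.
print(np.isclose(leibniz_det(A @ B), leibniz_det(A) * leibniz_det(B)))
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))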
