Let us consider a square, $n$-dimensional matrix $A$ with entries $a_{ij}$.
It is possible to re-express the matrix in terms of a vector containing its rows,
$$A = \begin{pmatrix} \mathbf{a}_1 \\ \mathbf{a}_2 \\ \vdots \\ \mathbf{a}_n \end{pmatrix},$$
where $\mathbf{a}_i$ denotes the $i$th row of $A$.
It is then possible to express the product of the matrix $A$ with another $n$-dimensional square matrix $B$ as follows, by the definition of matrix multiplication:
$$AB = \begin{pmatrix} \mathbf{a}_1 B \\ \vdots \\ \mathbf{a}_n B \end{pmatrix}.$$
Each row of the matrix $A$ can also be represented by the following sum, where $\mathbf{e}_j$ represents the column with index $j$ of the $n$-dimensional identity matrix:
$$\mathbf{a}_i = \sum_{j=1}^{n} a_{ij}\,\mathbf{e}_j^{T}.$$
This holds because each term $a_{ij}\,\mathbf{e}_j^{T}$ is a row vector whose only non-zero entry is $a_{ij}$, placed in the same column $j$ that it occupies in the original matrix, so the sum over $j$ reproduces the row exactly.
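For instance, in the $2 \times 2$ case the first row decomposes as

```latex
\[
\mathbf{a}_1 = \begin{pmatrix} a_{11} & a_{12} \end{pmatrix}
             = a_{11} \begin{pmatrix} 1 & 0 \end{pmatrix}
             + a_{12} \begin{pmatrix} 0 & 1 \end{pmatrix}
             = a_{11}\,\mathbf{e}_1^{T} + a_{12}\,\mathbf{e}_2^{T}.
\]
```

Each term contributes its entry in exactly one column, so the sum rebuilds the row.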
Therefore, since $\mathbf{e}_j^{T} B = \mathbf{b}_j$ (the $j$th row of $B$), the product of the two square matrices can be expressed as follows:
$$AB = \begin{pmatrix} \sum_{j=1}^{n} a_{1j}\,\mathbf{b}_j \\ \vdots \\ \sum_{j=1}^{n} a_{nj}\,\mathbf{b}_j \end{pmatrix}.$$
So the following determinant must be evaluated:
$$\det(AB) = \det\begin{pmatrix} \sum_{k_1=1}^{n} a_{1k_1}\,\mathbf{b}_{k_1} \\ \vdots \\ \sum_{k_n=1}^{n} a_{nk_n}\,\mathbf{b}_{k_n} \end{pmatrix}.$$
This can be simplified by using the linearity of the determinant in each row: a sum in one row splits the determinant into a sum of determinants, and a scalar multiplying a row can be pulled out in front (multiplying a row by a number multiplies the determinant by the same number). Repeating this for each row gives
$$\det(AB) = \sum_{k_1=1}^{n} \cdots \sum_{k_n=1}^{n} a_{1k_1} \cdots a_{nk_n}\,\det\begin{pmatrix} \mathbf{b}_{k_1} \\ \vdots \\ \mathbf{b}_{k_n} \end{pmatrix}.$$
As in the derivation of the Leibniz formula, we consider the case when $k_i = k_j$ for some $i \neq j$. These cases correspond to the determinant in the sum above being zero, as there would be two identical rows. The only way a term of the sum above can be non-zero is if $(k_1, \dots, k_n)$ is a permutation of the numbers $1, \dots, n$. Let us write $\sigma$ for an element of the set $S_n$ of permutations of $n$ numbers. Thus the determinant can be further simplified:
$$\det(AB) = \sum_{\sigma \in S_n} a_{1\sigma(1)} \cdots a_{n\sigma(n)}\,\det\begin{pmatrix} \mathbf{b}_{\sigma(1)} \\ \vdots \\ \mathbf{b}_{\sigma(n)} \end{pmatrix}.$$
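To see concretely why repeated indices drop out, consider the $2 \times 2$ term with $k_1 = k_2 = 1$:

```latex
\[
a_{11}\, a_{21}\, \det\begin{pmatrix} \mathbf{b}_1 \\ \mathbf{b}_1 \end{pmatrix} = 0,
\]
```

since a determinant with two identical rows vanishes; only the terms with $(k_1, k_2)$ equal to $(1,2)$ or $(2,1)$ survive.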
The determinant on the right hand side of the equation is now the determinant of the matrix $B$ with its rows permuted. To get back to the matrix $B$, a number of transpositions must be made, where each transposition changes the determinant by a factor of $-1$. The function $\operatorname{sgn}(\sigma)$ records whether a given permutation corresponds to an even or odd number of transpositions, assigning it $+1$ or $-1$ accordingly (in 3D, $\operatorname{sgn}(\sigma) = +1$ corresponds to a right-handed set). Hence each permuted determinant equals $\operatorname{sgn}(\sigma)\det(B)$, and summing over all possible permutations leaves the determinant in the following form:
$$\det(AB) = \left( \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} \cdots a_{n\sigma(n)} \right) \det(B).$$
We recognise this as the determinant of the matrix $A$, written using Leibniz's formula, multiplied by the determinant of $B$:
$$\det(AB) = \det(A)\,\det(B).$$
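The identity can also be checked numerically. The sketch below (the matrices and helper names are illustrative, not taken from the text) computes determinants directly from Leibniz's formula, summing $\operatorname{sgn}(\sigma)\prod_i a_{i\sigma(i)}$ over all permutations, and confirms that the determinant of a product equals the product of the determinants:

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation given as a tuple of 0-based indices.
    Computed by cycle decomposition: a cycle of length L contributes (-1)^(L-1)."""
    sgn = 1
    seen = [False] * len(perm)
    for start in range(len(perm)):
        if seen[start]:
            continue
        length, j = 0, start
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        if length % 2 == 0:  # even-length cycle = odd number of transpositions
            sgn = -sgn
    return sgn

def leibniz_det(M):
    """Leibniz formula: sum over permutations sigma of sgn(sigma) * prod_i M[i][sigma(i)]."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        term = sign(perm)
        for i in range(n):
            term *= M[i][perm[i]]
        total += term
    return total

def matmul(A, B):
    """Plain matrix product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Example matrices (arbitrary integer choices for the check).
A = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]
B = [[1, 2, 1], [0, 1, 3], [2, 0, 1]]

print(leibniz_det(matmul(A, B)) == leibniz_det(A) * leibniz_det(B))  # True
```

Exhaustive enumeration of permutations costs $n!$ terms, so this is only a sanity check for small $n$, not a practical way to compute determinants.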