Practical Limitations of the Cofactor Method

The computational effort of evaluating determinants using the cofactor method grows enormously with the size of the matrix. To see this, let us analyze just what it takes to expand a matrix of size \((n,n)\) along the first column.

  1. First we get \(n\) summands, each a product in which one factor is the determinant of a matrix of size \((n-1,n-1)\).
  2. Next we need to expand each of these \(n\) determinants as a sum of its \(n-1\) cofactors. The result is \(n(n-1)\) summands, each of which is a product involving a determinant of size \((n-2,n-2)\).
  3. ... and so on, and so on.
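The recursive procedure described above can be sketched in code. The following is a minimal illustration, not a practical algorithm; the function names `minor` and `det_cofactor` are chosen here for exposition:

```python
def minor(matrix, row, col):
    # Delete the given row and column to form the (n-1, n-1) minor.
    n = len(matrix)
    return [[matrix[i][j] for j in range(n) if j != col]
            for i in range(n) if i != row]

def det_cofactor(matrix):
    # Determinant by cofactor expansion along the first column:
    # each of the n entries in that column multiplies the signed
    # determinant of an (n-1, n-1) minor, computed recursively.
    n = len(matrix)
    if n == 1:
        return matrix[0][0]
    return sum((-1) ** i * matrix[i][0] * det_cofactor(minor(matrix, i, 0))
               for i in range(n))
```

Each call spawns \(n\) recursive calls on matrices one size smaller, which is exactly the source of the factorial blow-up analyzed next.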

By the time we are done, we have added

\[n\cdot (n-1)\cdot (n-2) \cdots 2\cdot 1 = n!\]

numbers, each of which is a product of \(n\) numbers. In all, there are roughly \(n\cdot (n!)\) arithmetical operations to perform.
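To see how quickly this count grows, one can tabulate \(n\cdot(n!)\) for a few values of \(n\); the helper name `cofactor_operation_count` is ours, used only for illustration:

```python
from math import factorial

def cofactor_operation_count(n):
    # Approximate cost of cofactor expansion: n! summands,
    # each a product of n numbers, i.e. roughly n * n! operations.
    return n * factorial(n)

for n in (5, 10, 15, 20, 25):
    print(n, cofactor_operation_count(n))
```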

To get an idea of how large \(n\cdot (n!)\) is, let us work out the case \(n=25\): computing the determinant of a matrix of size \((25,25)\) by expansion along columns requires approximately

\[25\cdot (25!) \approx 387,780,251,100,000,000,000,000,000\]

arithmetical operations. A computer able to perform \(10^{19}\) arithmetical operations per second would need more than a full year to complete such a task. The fastest computers that exist today would take considerably longer than that.
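The year-long estimate is easy to verify directly; the rate of \(10^{19}\) operations per second below is the hypothetical figure from the text, not a measured benchmark:

```python
from math import factorial

ops = 25 * factorial(25)       # operations for the (25, 25) case
rate = 10 ** 19                # hypothetical machine: 1e19 operations/second
seconds = ops / rate
years = seconds / (365 * 24 * 3600)
print(f"about {seconds:.2e} seconds, i.e. roughly {years:.2f} years")
```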

In comparison, computing the determinant of an upper triangular matrix of size \((n,n)\) requires only \(n-1\) multiplications, since the determinant is just the product of the \(n\) diagonal entries. This is a dramatic improvement over the \(n\cdot (n!)\) arithmetical operations of the cofactor method.
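The triangular case can be sketched in a few lines; `det_triangular` is an illustrative name of our choosing:

```python
def det_triangular(matrix):
    # For a triangular matrix the determinant is the product of the
    # diagonal entries: n - 1 multiplications for an (n, n) matrix.
    result = matrix[0][0]
    for i in range(1, len(matrix)):
        result *= matrix[i][i]
    return result

# Example: an upper triangular (3, 3) matrix.
a = [[2, 1, 5],
     [0, 3, 4],
     [0, 0, 7]]
print(det_triangular(a))  # product of the diagonal: 2 * 3 * 7
```

This is why practical algorithms first reduce a matrix to triangular form rather than expanding cofactors.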