The computational effort of evaluating determinants by the cofactor method grows enormously with the size of the matrix. To see this, let us analyze just what it takes to expand the determinant of a matrix of size \((n,n)\) along the first column.
Expanding along the first column gives \(n\) cofactors of size \((n-1,n-1)\); each of those expands into \(n-1\) cofactors of size \((n-2,n-2)\), and so on. By the time we are done, we have added
\[n\cdot (n-1)\cdot (n-2) \cdots 2\cdot 1 = n!\]
numbers, each of which is a product of \(n\) numbers. In all, there are roughly \(n\cdot (n!)\) arithmetical operations to perform.
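To make the growth concrete, here is a minimal Python sketch (not part of the original argument) of expansion along the first column, assuming the matrix is stored as a list of rows; the counter records one multiplication per cofactor term, and that count alone already grows on the order of \(n!\).

```python
def cofactor_det(A, mults=None):
    """Determinant of a square list-of-lists matrix by cofactor
    expansion along the first column; mults[0] counts multiplications."""
    if mults is None:
        mults = [0]
    n = len(A)
    if n == 1:
        return A[0][0]
    det = 0
    for i in range(n):
        # Minor obtained by deleting row i and the first column.
        minor = [row[1:] for k, row in enumerate(A) if k != i]
        mults[0] += 1  # one multiplication: entry times its cofactor
        det += (-1) ** i * A[i][0] * cofactor_det(minor, mults)
    return det

counter = [0]
print(cofactor_det([[2, 1, 0], [0, 3, 4], [1, 0, 5]], counter))  # 34
print(counter[0])  # 9 multiplications already for n = 3
```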
To get an idea of how large \(n\cdot (n!)\) is, let us work out the case \(n=25\): computing the determinant of a matrix of size \((25,25)\) by expansion along columns requires approximately
\[25\cdot (25!) \approx 387,780,251,100,000,000,000,000,000\]
arithmetical operations. A computer able to perform \(10^{19}\) arithmetical operations per second would need more than a full year to complete such a task, and the fastest computers in existence today are slower than that, so they would take even longer.
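Where the one-year figure comes from: dividing the operation count by the assumed speed gives
\[\frac{25\cdot (25!)}{10^{19}\ \text{operations per second}} \approx 3.9\times 10^{7}\ \text{seconds} \approx 449\ \text{days},\]
a little more than \(1.2\) years.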
In comparison, the determinant of an upper triangular matrix of size \((n,n)\) is simply the product of its \(n\) diagonal entries, so it can be computed with only \(n-1\) multiplications. This is a significant improvement over the \(n\cdot (n!)\) arithmetical operations of cofactor expansion.
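A sketch in the same style as before (again assuming a list-of-lists representation): the determinant of an upper triangular matrix is just the running product of its diagonal entries.

```python
def triangular_det(U):
    """Determinant of an upper triangular list-of-lists matrix:
    the product of its n diagonal entries, using n - 1 multiplications."""
    det = U[0][0]
    for i in range(1, len(U)):
        det *= U[i][i]
    return det

print(triangular_det([[2, 1, 0], [0, 3, 4], [0, 0, 5]]))  # 2 * 3 * 5 = 30
```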