On Sun, 9 Aug 2015, Oleg Goldshmidt wrote:

Shachar Shemesh <shac...@shemesh.biz> writes:

Hi all,

I'm looking for a tool/code to invert a matrix. So far, this sounds
trivial. I have one special requirement. I did not think it was too
special, except I could not find anywhere that supplied it.

I want the matrix to be over a different field (i.e. - not the real
numbers). In my particular case, I want it to be over Z2 (zero and
one).

If you don't find ready-made code, writing your own should be pretty
simple for Z2 - fields don't get any simpler. ;-)

In general, if you have a matrix A, then its inverse is given by

A^(-1) = adj(A)/det(A)               [1]

The determinant of any matrix over Z2 is either 0 or 1. If it is 0 the
matrix is not invertible. If it is 1, then the inverse matrix is the
same as the adjugate of the original matrix.
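
For a small example over Z2, take A = [[1,1],[0,1]]: det(A) = 1*1 - 1*0 = 1,
and adj(A) = [[1,-1],[0,1]] = [[1,1],[0,1]] (mod 2), so A is its own
inverse; indeed A*A = I over Z2.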

Now, the adjugate is the transpose of the cofactor matrix, which is

C_ij = (-1)^(i+j)*M_ij                    [2]

where M_ij is A's (ij)-minor, i.e., the determinant of the matrix
derived by deleting the i-th row and the j-th column of A. Thus

{adj(A)}_ij = C_ji = (-1)^(i+j)*M_ji      [3]

Hence, from equations [1] and [3], and using det(A) = 1 for an invertible
matrix over Z2, we have

{A^(-1)}_ij = (-1)^(i+j)*M_ji             [4]

There are probably library routines to compute minors and cofactors. If
you don't find any, then, given that all you have is zeroes and ones, it
should be easy and safe to write your own. Note that you need to compute
the determinant to determine whether or not your matrix is invertible,
and to compute the determinant you need the cofactors, since

det(A) = Sum_{j=1}^{n} A_ij*C_ij     (expansion along any fixed row i)   [5]

So, compute det(A), keep the cofactors, transpose (Eq. [3]) - done. Use

The last line is wrong: the equations above are mathematically correct, but the suggested computation is not.

When we compute det(A) using [5], we only calculate the cofactors of a single row, so that is all we know; we still need to calculate them for all the other rows.

The calculation as suggested runs in O(n!) steps, which obviously stops working somewhere around 15x15 matrices.
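
To make this concrete, here is a rough Python sketch of that direct
approach (recursive cofactor expansion over Z2, along the lines of
Eqs. [2]-[5]; the function names are mine), mainly to show where the
factorial comes from:

def det2(a):
    """Determinant mod 2 by cofactor expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0] & 1
    total = 0
    for j in range(n):
        if a[0][j] & 1:  # the signs (-1)^(i+j) vanish mod 2
            minor = [row[:j] + row[j+1:] for row in a[1:]]
            total ^= det2(minor)  # addition over Z2 is XOR
    return total

def adjugate_inverse2(a):
    """Inverse over Z2 via Eq. [4]; returns None when det(A) = 0."""
    n = len(a)
    if det2(a) == 0:
        return None
    inv = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # {A^(-1)}_ij = M_ji (mod 2): delete row j and column i of A
            minor = [row[:i] + row[i+1:] for k, row in enumerate(a) if k != j]
            inv[i][j] = det2(minor)
    return inv

Each call to det2 on an n x n matrix makes up to n recursive calls on
(n-1) x (n-1) matrices, which is exactly the O(n!) growth above.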

The standard textbook algorithm, taught to every engineering or science student in their first linear algebra course, is Gaussian elimination (also known as LU decomposition in this context). It runs in O(n^3) steps.

Note that this algorithm works very similarly for both inverting a matrix and calculating the determinant. So even if we replace step [5] in your suggestion with Gaussian elimination, computing all n^2 cofactors that way still costs O(n^5), when the code already has all the components it needs to run in O(n^3) instead.
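
Here is a minimal sketch of what I mean, in Python (Gauss-Jordan over Z2;
the function name is mine). Addition over Z2 is XOR and there is nothing
to scale, so there are no numerical-stability worries either:

def gauss_jordan_inverse2(a):
    """Invert an n x n 0/1 matrix over Z2 in O(n^3); returns None if singular."""
    n = len(a)
    # Work on the augmented matrix [A | I], all arithmetic mod 2.
    aug = [[a[i][j] & 1 for j in range(n)] +
           [1 if j == i else 0 for j in range(n)]
           for i in range(n)]
    for col in range(n):
        # Find a row at or below 'col' with a 1 in this column.
        pivot = next((r for r in range(col, n) if aug[r][col]), None)
        if pivot is None:
            return None  # no pivot => det(A) = 0 over Z2, not invertible
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Clear this column in every other row (row addition = XOR over Z2).
        for r in range(n):
            if r != col and aug[r][col]:
                aug[r] = [x ^ y for x, y in zip(aug[r], aug[col])]
    # The right half of the augmented matrix now holds A^(-1).
    return [row[n:] for row in aug]

The same elimination gives you the determinant for free: over Z2 it is 1
exactly when a pivot is found in every column, i.e. when the function does
not return None. If you care about speed, packing each row into a single
integer and doing the row operations with one big XOR is an easy
constant-factor win.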

This is why, when teaching the classical adjoint (adjugate) formula for the inverse ([1]), or equivalently Cramer's rule for solving linear equations, it is very important to impress upon the students that while it appears "simpler" than Gaussian elimination, it is in fact far less efficient.

As a side note, the best currently known algorithm runs in O(n^2.3727) (at least according to Wikipedia), so if you care a lot about performance, you can still save some computation time by doing more research.


--
Matan.


_______________________________________________
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il
