Saturday, July 31, 2010

Interesting property of the inverses of Magic Squares

I’m a nerd. I freely admit it. Sometimes I see a result in mathematics that I think is fun, even if it is not immediately useful. I intended for this blog to be about applications of linear algebra, and by that I mean useful, if a little esoteric, applications; applications that are juicy and interesting and have good visuals. Although the abstract applications of linear algebra to theoretical areas of mathematics are useful to someone, they do not have the hands-on feeling that I want. However, the nerd in me finds some abstract ideas sexy enough to include in this blog. The following is one of them. Although there is a connection between this idea and magic squares, which have constant row sums, I don’t see an application right off. If you know of one, please let me know.

Theorem (already this post looks different than usual ‘cause it has a theorem):
If an invertible matrix A has constant row sums of k, then the inverse of A has constant row sums of 1/k.

Proof (Oh, no.  A proof.  Just when this blog looked like it was just fun stuff):
Let A be an n-by-n invertible matrix with constant row sums of k. Let B be the inverse of A. Now, AB = BA = I and the diagonal elements of I are all 1. Note that I has a constant row sum of 1. Hmmm. Let

$$\delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases} \qquad (1)$$

be the elements of I. Then

$$\sum_{j=1}^{n} \delta_{ij} = 1 \qquad (2)$$

is the sum of the elements of row i of I, and

$$\sum_{j=1}^{n} a_{rj} = k \qquad (3)$$

is the sum of the elements of row r of A. Now, write the elements of a row of I = BA as the sum of products of elements of B and A:

$$\delta_{ij} = \sum_{r=1}^{n} b_{ir} a_{rj} \qquad (4)$$

Now, we're ready to put this all together. Thus,

$$1 = \sum_{j=1}^{n} \delta_{ij} = \sum_{j=1}^{n} \sum_{r=1}^{n} b_{ir} a_{rj} = \sum_{r=1}^{n} b_{ir} \sum_{j=1}^{n} a_{rj} = k \sum_{r=1}^{n} b_{ir},$$

and dividing both sides by k gives

$$\frac{1}{k} = \sum_{r=1}^{n} b_{ir}, \qquad (5)$$

but the right-hand side of (5) is the sum of the elements of row i of B. Since i was an arbitrary row, every row of B sums to 1/k, and the theorem is proved. ∎

It seems like a trick, and in a way it is, but the trick is legit.  What we have done is started with the sum of row i of I, written it in terms of the elements of A and B, and then seen that we could factor out the elements of B, leaving row sums of A.  To help you understand this, carefully write the elements of a general 3-by-3 BA in terms of the elements of B and A.  Now, sum one of the rows of BA and rearrange so you can factor out the various elements of B.  The sums of the elements of A that remain will each be a row sum.  This wouldn't be a proof in general, but it should help you understand the sum-switching step of the proof: each element of a matrix product is itself a sum, so summing a row of BA gives a sum of sums, and the terms inside this double sum can be conveniently rearranged.
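If you'd rather see the theorem in action numerically first, here is a minimal sketch in Python (assuming numpy is available), using the classic 3-by-3 Lo Shu magic square, whose rows all sum to k = 15:

```python
import numpy as np

# The Lo Shu magic square: every row (and column and diagonal) sums to k = 15.
A = np.array([[2, 7, 6],
              [9, 5, 1],
              [4, 3, 8]], dtype=float)

B = np.linalg.inv(A)   # B = A^(-1)

print("row sums of A:", A.sum(axis=1))   # [15. 15. 15.]
print("row sums of B:", B.sum(axis=1))   # each entry approximately 1/15
print("1/k:", 1 / 15)                    # 0.0666...
```

Up to floating-point rounding, each row of B sums to 1/15, exactly as the theorem predicts.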

Questions

  1. We started with an invertible A with constant row sum.  What if A had constant column sum of k instead of row sum?  Will the inverse of A have constant column sum of 1/k as well?  How would the proof go for that?  This would be a great exercise in working with sums and indices.

  2. This proof will not work if the constant row sum is k = 0.  Can a matrix have an inverse if k = 0?   If k is not zero, are we guaranteed that A is invertible?  Can we determine if a magic square is invertible by the row sum alone?

  3. Is there a relationship between the diagonal sums of a magic square and its inverse?

  4. Is the inverse of a magic square a magic square?  Is the inverse of a semi-magic square a semi-magic square?

  5. Is the square of a magic square a magic square?  Is the square of a semi-magic square a semi-magic square?  Cubes?  Fourth powers?  (The sketch after this list is one place to start experimenting.)
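Several of these questions invite numerical experimentation before you attempt a proof. Here is a minimal exploratory sketch in Python (numpy assumed, with the Lo Shu square again serving as a hypothetical test case) that tabulates the row, column, and diagonal sums of a magic square, its inverse, and a couple of its powers:

```python
import numpy as np

def all_sums(M):
    """Row sums, column sums, and the two diagonal sums of M."""
    return (M.sum(axis=1), M.sum(axis=0),
            np.trace(M), np.trace(np.fliplr(M)))

A = np.array([[2, 7, 6],
              [9, 5, 1],
              [4, 3, 8]], dtype=float)   # Lo Shu square: every sum is 15

candidates = {
    "A":    A,
    "A^-1": np.linalg.inv(A),
    "A^2":  A @ A,
    "A^3":  A @ A @ A,
}

for name, M in candidates.items():
    rows, cols, diag, anti = all_sums(M)
    print(f"{name:5s} rows={rows} cols={cols} diag={diag:.4f} anti={anti:.4f}")
```

Keep in mind that a single example can only refute a general claim or suggest a pattern; whatever the output hints at still needs a proof.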
Reference: Wilansky, Albert, "The row-sums of the inverse matrix," The American Mathematical Monthly, Vol. 58, No. 9 (Nov. 1951), pp. 614–615.

1 comment:

  1. Note that A(1,1,...,1)^T = (k,k,...,k)^T = k(1,1,...,1)^T. Applying A^(-1) to both sides and dividing by k gives A^(-1)(1,1,...,1)^T = (1/k,1/k,...,1/k)^T, and the entries of that vector are exactly the row sums of A^(-1).
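This slick argument is also easy to check numerically; a minimal sketch (same assumed numpy setup and Lo Shu square as above): since A(1,1,...,1)^T = k(1,1,...,1)^T, solving Ax = (1,1,...,1)^T should return the constant vector with entries 1/k.

```python
import numpy as np

# The commenter's argument in code: A @ ones = k * ones,
# so the solution of A x = ones is x = ones / k,
# and its entries are the row sums of A^(-1).
A = np.array([[2, 7, 6],
              [9, 5, 1],
              [4, 3, 8]], dtype=float)   # row sums k = 15

x = np.linalg.solve(A, np.ones(3))
print(x)   # approximately [0.0667, 0.0667, 0.0667], i.e. 1/15 in each entry
```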
