Are Men Innately Better than Women at Mathematics?
Labels: mathematics
The heap before the piece falls:

   X X
XXXX XXX X

The falling piece:

  Y
YYY
Y

After the piece lands:

   Y
 YYY
 Y X X
XXXX XXX X
It turns out that you can get one from the other with a single matrix 'multiplication' - the catch is that we have to work over the semiring (R union {-infinity}, max, +) instead of the more usual semiring (well, actually field) (R, +, *).
Let hn be the initial height of the nth column and hn' be the height after dropping the new piece. Think about how low the new piece made of Ys can fall. If the leftmost column of the Ys ends up in contact with the X's then its lowest point ends up at height h2+1. If the middle column is in contact then the lowest point ends up at height h3. And if the right column is in contact then the lowest point ends up at height h4. The piece comes to rest on whichever column obstructs it first, i.e. the highest of these, so the lowest point ends up at height max(h2+1,h3,h4). So we can now write down expressions for the new heights: h2' = max(h2+1,h3,h4)+1, h3' = max(h2+1,h3,h4)+1, h4' = max(h2+1,h3,h4)+2. We can rewrite this as:
h2' = max(h2+2,h3+1,h4+1)
h3' = max(h2+2,h3+1,h4+1)
h4' = max(h2+3,h3+2,h4+2)
In this semiring, writing (+) for max and (*) for +, these equations are linear:

h2' = 2(*)h2 (+) 1(*)h3 (+) 1(*)h4
h3' = 2(*)h2 (+) 1(*)h3 (+) 1(*)h4
h4' = 3(*)h2 (+) 2(*)h3 (+) 2(*)h4
So dropping the piece acts on the vector of column heights by 'multiplication' by the matrix below, where -OO denotes minus infinity (the 'zero' of this semiring):

( 0   -OO -OO -OO ... )
( -OO  2   1   1  ... )
( -OO  2   1   1  ... )
( -OO  3   2   2  ... )
( ...             ... )
( ...           0     )
Turns out this has all sorts of applications - for example resource allocation in a queueing system. I'll leave you to figure out how falling Tetris pieces might represent jobs submitted to a queueing system and how the columns might represent types of resource.
Here's an easy exercise: if M is the matrix of a piece, what piece is the transpose M^T the matrix of?
I learnt this from this paper by Gaubert and Max Plus. (I think Max Plus is a relative of Bourbaki.)
Many algorithms are 'linear' in this semiring. Another example that seems quite different is the dynamic programming algorithm for approximate DNA string matching. I tried to work out what the transpose matrix did (i.e. what the adjoint algorithm was) and then used the code reversal method I mentioned earlier to see what the new algorithm was. It was another approximate string matcher, but this time it worked through the strings in the opposite direction. (Note how each operation in this description is essentially of the form a = max(some terms added to constants).)
Labels: mathematics
The theta function of a lattice is simply the sum of x^(u.u) over all u in the lattice. Clearly it is a formal power series with integer coefficients if the lattice is such that u.u is always an integer. These are often pretty amazing functions, for example for certain types of lattice they are modular forms. It turns out that the theta function of the Leech lattice is the 24th power of another integer power series. More amazingly: this is a new result!
It's clear that the theta function is the generating function for the number of lattice elements of a given squared length. So if it's a 24th power you'd expect this to have a combinatorial explanation too. Unfortunately the proof isn't combinatorial. So even though the 24th root appears to be counting something, nobody knows what it is that is being counted!
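In symbols (using q for the formal variable, and writing the Leech lattice as Lambda), the definition and the result described above read:

```latex
% Theta function of a lattice L:
\theta_L(q) = \sum_{u \in L} q^{u \cdot u}
% The result: for the Leech lattice \Lambda there is a power series f
% with integer coefficients such that
\theta_{\Lambda}(q) = f(q)^{24}, \qquad f \in \mathbb{Z}[[q]],
% yet no combinatorial interpretation of the coefficients of f is known.
```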
Anyway, Sloane and friends had fun trawling through their sequence database trying to find other sequences that were powers. Another interesting one they found was the exponential generating function for the number of partitions of {1,...,n} into an odd number of ordered subsets. It turns out this is a perfect square. This is begging for a combinatorial explanation but it hasn't yet been found. I'll have to see if I can reproduce the square root of this series with my (unfinished) Haskell code for formal power series.
And of course this paper is interesting to me because of the prominent role of the number 24 :-)
Labels: mathematics
OK, I will explain. Consider the line of code, in C++, a += b. We can write this in matrix form as

(a) = (1 1) (a)
(b)   (0 1) (b)

The adjoint is what you get by transposing the matrix:

(a) = (1 0) (a)
(b)   (1 1) (b)

which corresponds to the line of code b += a.
More generally, any block of code that performs linear updates to variables can be converted to its adjoint by writing the steps backwards and transforming each line of code like this. A more complex example (exercise) is a = 2*a+3*b-4*c; b = 2*a, which maps to a += 2*b; b = 0; b += 3*a; c -= 4*a; a *= 2. If your code has loops, e.g. iterates over arrays of variables, you just run the loops backwards. The application for me is that there is a published fast algorithm for some linear operator but I need the adjoint. Surprisingly, the adjoint of the fast algorithm can actually turn out to be a whole new fast algorithm.
Incidentally, I asked the journal editor for the expected turnaround time for the decision on whether to publish. He responded with the mean, standard deviation, extrema and quartiles. If only people like plumbers, doctors and cable installers could respond in this way. "If I attempt to fix this leak it has a 50% chance of taking 1 hour to fix, a 25% chance of taking 2 and there's a 5% chance I won't get it done today" rather than "I'm charging by the hour and it'll take as long as it takes unless there's a complication in which case it'll take longer".
Labels: mathematics
One of the earliest ways to construct the real numbers from the rationals was as equivalence classes of Cauchy sequences of rational numbers, two sequences being equivalent if their difference tends to zero. In Non-Standard Analysis, developed by Robinson, this definition is tweaked slightly so that two sequences are equivalent if they agree on a 'large' subset of indices. The trick is to define 'large' appropriately, which leads to the concept of an ultrafilter. This results in a set *R which is an extension of the usual set of reals R but which also contains infinitely small and infinitely large numbers. There's also a nice 'metatheorem' that goes with this construction showing that 'reasonable' theorems about *R are also theorems about R. This basically gives us license to use infinitesimals in our proofs and still be sure that our results make sense as propositions about real numbers. So many of Leibniz's old 'proofs' using infinitesimals, say, can now be reinterpreted as perfectly valid proofs.
Anyway, you don't need all this machinery to do automatic differentiation, just an algebraic structure with elements whose square is zero.
Incidentally, Gödel used ultrafilters to 'prove' the existence of God.
Labels: mathematics