Friday, March 31, 2006

Quantum Probability

I took part in a brief discussion over at antimeta which reminded me that I ought to get back to a document I started writing on quantum mechanics for dummies. One of my pet peeves is that I believe there to be a little bit of a conspiracy to make quantum mechanics seem less accessible than it really is. Not a deliberate conspiracy - but people maintain an aura of mystery around the subject that puts newcomers off. All of the fuzzy talk about quantum mechanics in the popular science press does nothing to help the situation. In particular, there is a core of quantum mechanics that I believe requires few prerequisites beyond elementary probability theory, vector spaces and complex numbers.

Anyway, I did some more digging on the web and found this course by Greg Kuperberg. The opening paragraphs almost take the words I wanted to say out of my mouth. In particular, despite the mystical mumbo-jumbo that is often written on the subject, the rules of quantum mechanics are "rigorous and clear" and "The precepts of quantum mechanics are neither a set of physical forces nor a geometrical model for physical objects. Rather, they are a variant, and ultimately a generalization, of classical probability theory." Most of all "...more mathematicians could and should learn quantum mechanics...". You don't even have to understand F=ma to get started with quantum mechanics and get to the point where you can really and truly get to grips directly with the so-called paradoxes of quantum mechanics such as the Bell Paradox. The strange thing is that you won't find words like this in most of the quantum mechanics textbooks. They throw you into physical situations that require finding tricky solutions to the Schrödinger equation while completely failing to give any insight into the real subject matter of quantum mechanics. Most QM books I know are really introductions to solving partial differential equations. (Remark to physicists: I bet you didn't know you could get the simultaneous eigenvalues for the energy and angular momentum operators for the hydrogen atom by a beautifully simple method that doesn't require even looking at a differential equation...) The best thing about the newly appearing field of quantum computing is that it's slowly forcing people to think about quantum mechanics separately from the mechanics.
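To back up that claim about prerequisites, here is a minimal sketch in Haskell (my own illustration, nothing from Kuperberg's course) of the single rule that makes quantum mechanics a variant of probability theory: a state assigns a complex amplitude to each outcome, and the probability of an outcome is the squared modulus of its amplitude.

import Data.Complex

-- A state is a vector of complex amplitudes, one per outcome.
type State = [Complex Double]

-- The Born rule: probabilities are squared moduli, normalised to sum to 1.
probabilities :: State -> [Double]
probabilities psi = [ magnitude a ^ 2 / norm | a <- psi ]
  where norm = sum [ magnitude a ^ 2 | a <- psi ]

For example, probabilities [1 :+ 0, 0 :+ 1] is [0.5,0.5] - an ordinary fair coin flip, except that the amplitudes behind it can interfere if we transform the state before measuring, and that's where all the quantum weirdness lives.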

So even though I haven't read that course myself yet, I'm recommending that everyone read it :-) And some time I might get back to the even more elementary introduction I hope to put together.


Thursday, March 30, 2006

A Neat Proof Technique

Last night I read this paper. There wasn't really anything in the paper I wanted; I was more interested in flexing my newly developing computer science muscles. Anyway, as with all the computer science papers I've managed to finish, I felt like I understood the paper but had no idea what the punch line was. Still, it was worthwhile because partway through there was a neat mathematical proof technique used that I think is worthy of a mention.

The paper is about the generalisations of 'fold' and 'unfold' that I played with recently. The usual fold function acts on a list like
[4,8,15,16,23,42].

Imagine it written as
cons(4,cons(8,cons(15,cons(16,cons(23,cons(42,[]))))))

where the cons function is a 'constructor' that constructs a list from a list element and another list. [] is the empty list. The fold function takes three arguments, a binary function f, a constant g, and a list. It then replaces cons by f and [] by g. For example
fold (+) 0 [4,8,15,16,23,42]
becomes
4+(8+(15+(16+(23+(42+0)))))

and so is 108, the sum of the numbers. An obvious theorem is that
fold cons [] x = x.
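(For Haskell readers: this fold is just the standard foldr. A minimal definition, to fix notation for what follows:)

fold :: (a -> b -> b) -> b -> [a] -> b
fold f g []     = g                  -- [] is replaced by g
fold f g (x:xs) = f x (fold f g xs)  -- cons is replaced by f

-- fold (+) 0 [4,8,15,16,23,42] == 108
-- fold (:) [] xs == xs, which is the 'obvious theorem' above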


fold also generalises to structures such as trees. In this case there is another constructor that builds a tree from a node and its children. I'll stick with lists as the proof technique is much the same. The authors prove a bunch of theorems about the fold function. But now they want to prove something is true of all of the elements in a list (actually, a tree, but I use a list). Suppose the list is as above and you want to prove the property P holds for all elements in that list. Then you want to prove

P(4) && P(8) && P(15) && P(16) && P(23) && P(42) = True.

(I'm using Haskell's && to mean 'and'.) In other words you want to prove that
fold (&&) True [P(4),P(8),P(15),P(16),P(23),P(42)] = True.
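In Haskell, with a hypothetical predicate p standing in for P, the whole proposition really is one application of the fold defined above:

allP :: (a -> Bool) -> [a] -> Bool
allP p xs = fold (&&) True (map p xs)

-- eg. allP even [4,8,16,42] == True

(And map itself is a fold, so the proposition genuinely has the same 'shape' as the list it talks about.)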

The authors then proceed to show their theorem is true using the theorems about fold they just proved. Neat eh? But maybe I haven't explained it very well so I'll try to spell it out differently.

The authors want to prove theorems about all elements of a certain datastructure. So what they do is take the logical proposition that expresses what they want and rewrite it so that the proposition itself is in the same 'shape' as the datastructure. The statement of the truth of the proposition is an application of fold to the datastructure. So they can now use the theory of datastructures they develop to prove something about the structure of the logical proposition itself - namely that it's true. You need to read the actual paper (top left, page 5) to see the details of the proof technique.

In the particular case of lists the proof technique turns out to be standard induction. If you can prove P of the first element of the list, and you can prove "if P is true of an element it must be true of the next element", then P holds for every element of the list. Every time you generalise fold to a different datastructure there is a new induction principle that goes with it.

I suppose what they have actually done is prove a metatheorem about certain types of logical proposition and then used that to prove specific propositions.

Well, I hope that makes some kind of sense. I think it's quite a neat trick and that many of the paper's readers might miss the cool bit of self-reference that's going on here.


Monday, March 27, 2006

The General Theory of Self-Reproducing Programs

I'm guessing that anyone reading this is already familiar with the idea of programs that output themselves. If you're not, there's a great list of such programs here. But what I was surprised to discover at the weekend was that there is in fact a bit of general theory about this. In particular, what do we need in a computer language to guarantee we can write a self-replicator?

Consider the following in Haskell:

let p x = x ++ show x in putStrLn $ p"let p x = x ++ show x in putStrLn $ p"

Evaluate this expression in an interactive Haskell session and it prints itself out. But there's a nice little cheat that made this easy: the Haskell 'show' function conveniently wraps a string in quotation marks. So we simply have two copies of one piece of code: one without quotes followed by one in quotes. In C, on the other hand, there is a bit of a gotcha. You need to explicitly write code to print those extra quotation marks. And of course, just like in Haskell, this code needs to appear twice, once out of quotes and once in. But the version in quotes needs the quotation marks to be 'escaped' using backslash so it's not actually the same as the first version. And that means we can't use exactly the same method as with Haskell. The standard workaround is not to represent the quotation marks directly in the strings, but instead to use the ASCII code for this character and use C's convenient %c mechanism to print it. For example:


main(a){printf(a="main(a){printf(a=%c%s%c,34,a,34);}",34,a,34);}


Again we were lucky, C provides this great %c mechanism. What do you need in a language to be sure you can write a self-replicator?

It turns out there is a very general approach to writing self-replicators that's described in Vicious Circles. What follows is essentially from there except that I've simplified the proofs by reducing generality.

We'll use capital letters to represent programs. Typically these are 'inert' strings of characters. I'll use square brackets to indicate the function that the program evaluates. So if P is a program to compute the mathematical function p, we write [P](x) = p(x). P is a program and [P] is a function. We'll consider both programs that take arguments like the P I just mentioned, and also programs, R, that take no arguments, so [R] is simply the output or return value of the program R.

Now we come to an important operation. We've defined [P](x) to be the result of running P with input x. Now we define P(x) to be the program P modified so that it no longer takes an argument or input but instead substitutes the 'hard-coded' value of x. In other words [P(x)] = [P](x). P(x) is, of course, another program. There are also many ways of implementing P(x). We could simply evaluate [P](x) and write a program to simply print this out or return it. On the other hand, we could do the absolute minimum and write a new piece of code that simply calls P and supplies it with a hard-coded argument. Whatever we choose is irrelevant to the following discussion. So here's the demand that we make of our programming language: that it's powerful enough for us to write a program that can compute P(x) from inputs P and x. This might not be a trivial program to write, but it's not conceptually hard either. It doesn't have gotchas like the quotation mark issue above. Typically we can compute P(x) by some kind of textual substitution on P.
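To make that concrete, here's a minimal Haskell sketch of the 'absolute minimum' implementation, representing programs as strings of source code. (The representation and the name specialise are my choices, not the book's.)

-- specialise p x: the text of a program that applies the one-argument
-- program p to the hard-coded argument x, so that [specialise p x] = [p](x).
specialise :: String -> String -> String
specialise p x = "(" ++ p ++ ") (" ++ x ++ ")"

For expressions this is just textual application. A real implementation would also have to quote x if it happened to be a string - which is exactly the kind of gotcha the theorem below lets us stop worrying about.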

With that assumption in mind, here's a theorem: any program P that takes one argument or input has a fixed point, X, in the sense that running P with input X gives the same result as just running X. Given an input X, P acts just like an interpreter for the programming language as it outputs the same thing as an interpreter would given input X.

So here's a proof:
Define the function f(Q) = [P](Q(Q)). We've assumed that we can write a program that computes P(x) from P and x so we know we can write a program to compute Q(Q) for any Q. We can then feed this as an input to [P]. So f is obviously computable by some program which we call Q0. So [Q0](Q) = [P](Q(Q)).

Now the fun starts:

[P](Q0(Q0)) = [Q0](Q0) (by definition of Q0)
= [Q0(Q0)] (by definition of P(x))


In other words Q0(Q0) is our fixed point.

So now take P to compute the identity function. Then [Q0(Q0)] = [P](Q0(Q0)) = Q0(Q0). So Q0(Q0) outputs itself when run! What's more, this also tells us how to do other fun stuff like write a program to print itself out backwards. And it tells us how to do this in any reasonably powerful programming language. We don't need to worry about having to work around problems like 'escaping' quotation marks - we can always find a way to replicate the escape mechanism too.

So does it work in practice? Well it does for Haskell - I derived the Haskell fragment above by applying this theorem directly, and then simplifying a bit. For C++, however, it might give you a piece of code that is longer than you want. In fact, you can go one step further and write a program that automatically generates a self-replicator. Check out Samuel Moelius's kpp. It is a preprocessor that converts an ordinary C++ program into one that can access its own source code by including the code to generate its own source within it.

Another example of an application of these methods is Futamura's theorem, which states that there exists a program that can take as input an interpreter for a language and output a compiler. I personally think this is a little bogus.

Stanislaw Lem has Passed Away

He has an obituary in The Times. If you haven't read any of his science fiction you really should read some. For lovers of mathematical poetry out there, here is a translation of a passage from his Cyberiad:


Come, let us hasten to a higher plane,
Where dyads tread the fairy fields of Venn,
Their indices bedecked from one to n,
Commingled in an endless Markov chain!

Come, every frustum longs to be a cone,
And every vector dreams of matrices.
Hark to the gentle gradient of the breeze
It whispers of a more ergodic zone.

In Riemann, Hilbert or in Banach space
Let superscripts and subscripts go their ways.
Our asymptotes no longer out of phase,
We shall encounter, counting, face to face.

I'll grant thee random access to my heart,
Thou'lt tell me all the constants of thy love;
And so we two shall all love's lemmas prove,
And in our bound partition never part.

For what did Cauchy know, or Christoffel,
Or Fourier, or any Boole or Euler,
Wielding their compasses, their pens and rulers,
Of thy supernal sinusoidal spell?

Cancel me not--for what then shall remain?
Abscissas, some mantissas, modules, modes,
A root or two, a torus and a node:
The inverse of my verse, a null domain.

Ellipse of bliss, converge, O lips divine!
The product of our scalars is defined!
Cyberiad draws nigh, and the skew mind
Cuts capers like a happy haversine.

I see the eigenvalue in thine eye,
I hear the tender tensor in thy sigh.
Bernoulli would have been content to die,
Had he known such a²cos(2φ)!

The Most Amazing and Mysterious Thing in All of Mathematics

I think that a good candidate is the table of the Homotopy Groups of Spheres.


For those not familiar with algebraic topology, π_m(S^n) is the set of equivalence classes of continuous functions from the m-dimensional sphere to the n-dimensional sphere, where two functions are considered equivalent if they are homotopic. An easy way to visualise this is that two functions are homotopic if you can interpolate a continuous animation between them. (Can you guess what industry I work in?) This set also has a group structure which is straightforward to define but which I won't go into here (unless someone requests it). That's all there is to it. How can such simplicity generate such complexity?


Monstrous Moonshine is pretty mysterious too - but it takes a lot of work to state it for a non-expert. So this wins by default.


Like John Baez I also wonder about the curious appearance of 24 on row 3.


And an aside I discovered on Anarchaia: foldr.com related to an earlier post.


Friday, March 24, 2006

6

Talking of category theory...I can't remember if I previously mentioned the bizarre functorial property of the number six I came across in Designs, Codes and their Links.

This is the actual theorem:

Consider the category whose objects are the n-element sets and whose arrows are the bijections between them. This category has a non-trivial functor to itself (one that isn't naturally isomorphic to the identity) only for n=6.

By smart use of Google Print you should be able to view the proof. It's the first five pages of Chapter 6. (Don't make the obvious mistake with Google Print and end up with only three pages of that chapter.)

Anyway, it's not too hard to give a bit of insight into what this means. Consider the set with n elements. You can build all kinds of combinatorial objects which have some underlying set. Permuting the original n-element set induces a permutation of the combinatorial object and hence its underlying set. If the underlying set also has n elements you usually just end up with the original permutation. For example consider n=3 and the combinatorial object that is the even permutations on the 3 element set. This also has three elements. But the induced permutations on this new set are equivalent to the original permutations (via a bijection from the 3 element set to the set of its even permutations). On the other hand, if n=6 then you can construct another 6 element combinatorial object where the induced action of S6 is quite different to the original one. In fact, it gives an outer automorphism of S6, another bizarre thing that only exists for n=6. To see the actual details of the construction look here.

I should also mention that Todd wrote a paper on this subject: The Odd Number 6, JA Todd, Math. Proc. Camb. Phil. Soc. 41 (1945) 66--68.

It's also mentioned in the Wikipedia but that's only because yours truly wrote that bit.

Anyway, I'm vaguely interested in how this connects to other exceptional objects in mathematics such as S(5,6,12), the Mathieu groups, the Golay codes, the Leech lattice, Modular Forms, String Theory, as well as Life, the Universe and Everything.


Thursday, March 23, 2006

Sets, Classes and Voodoo

After much anticipation I started reading Barwise and Moss's "Vicious Circles". Unfortunately by chapter 2 I've reached the point where the statements don't just seem incorrect, they don't even seem to be propositions. I raised my question on USENET but I may as well mention it here too.

Here's how I understand the concept of a class in ZF Set Theory: talk about classes is really just talk about predicates. We enrich the language of Set Theory with a bunch of new terms ('class', 'subclass') and overload other terms ('is an element of', 'is a subset of') to give a new language that reifies classes, but instead of adding new axioms to deal with classes we provide a translation back to ZF without these extra terms. For example if P and Q are classes then "x is in P" means "P(x)" and "P is contained in Q" means "for all x, P(x) implies Q(x)" or even "a subclass of a set is a set" which translates to the axiom of separation. (We could alternatively add new axioms, instead of the translation, and then we'd get NBG Set Theory.)

Am I right so far? (By the way, nobody ever seems to say what I've just said explicitly. In particular, it seems to me that once you add the term 'class' you need to start proving metatheorems about classes to show what kind of deductions about them are valid, but nobody ever seems to do this.)

I understand that this is a sensible thing to do because of the overloading - in the enriched language sets and classes look similar and that allows us to do category theory, for example, in a much wider context, without having to define everything twice. (And also, maybe, because talk about sets is really a substitute for talk about classes...but that's a philosophical point for another day...)

So what does "If a class is a member of a class then it is a set" mean in the context of ZF? A class is really just a predicate, so it doesn't make sense to me that there could be a predicate about predicates. So at this point the book is looking like Voodoo.

Can anyone out there clarify this for me?

(Hmmm...I wonder if the translation to ZF is an adjoint functor...ow...MUST STOP OBSESSING ABOUT ADJUNCTIONS...)


Wednesday, March 22, 2006

The Representation of Integers by Quadratic Forms

There's a nice article at Science News on the work of Bhargava on the representation of integers by quadratic forms. Rather than just restate what's written there (and in a few other blogs) let me quote the main theorem which is really quite amazing:

If Q is a positive-definite quadratic form Q: Z^n -> Z and the image of Q contains {1, 2, 3, 5, 6, 7, 10, 13, 14, 15, 17, 19, 21, 22, 23, 26, 29, 30, 31, 34, 35, 37, 42, 58, 93, 110, 145, 203, 290}, then it contains every positive integer.


Closely related is Conway's 15-theorem, which originally inspired Bhargava and which (I think) I first read about in the excellent book The Sensual (Quadratic) Form. Check out the section on topographs where Conway works his usual magic and makes some parts of the theory seem so clear and obvious that even your cat could start proving theorems about quadratic forms.
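The 15-theorem is easy to play with. Here's a minimal brute-force sketch in Haskell, restricted to diagonal positive-definite forms (the general theorem covers all positive-definite forms with integer matrix):

-- represents coeffs n: is n = c1*x1^2 + ... + ck*xk^2 soluble in integers?
represents :: [Integer] -> Integer -> Bool
represents []     n = n == 0
represents (c:cs) n = any try [0 .. isqrt (n `div` c)]
  where
    try x = represents cs (n - c * x * x)
    isqrt :: Integer -> Integer
    isqrt = floor . sqrt . (fromIntegral :: Integer -> Double)

-- all (represents [1,1,1,1]) [1..15] == True, so the 15-theorem tells us
-- x^2+y^2+z^2+w^2 represents every positive integer - Lagrange's theorem.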


Tuesday, March 21, 2006

Category Theory Screws You Up!

Well it does. Since I started a burst of intense category theory reading a couple of weeks ago (not that intense as I have a full time job) I've been showing unpleasant symptoms. These include insomnia, lack of concentration and grumpiness. We're not just talking correlation here, I have causal mechanisms too: how can I sleep when an example of an adjunction might pop into my mind at any moment, how can I concentrate when my brain is already fully occupied in finding those examples, and of course I'm grumpy with all this effort to understand difficult theorems that always turn out to be trivial and content-free. Fortunately I find that drugs help with the insomnia, but there's no cure for the other symptoms.

At least I haven't reached the stage where I sit down to dinner wondering whether or not my eating it is an operation with a left or right adjoint. (But I thought it, didn't I? So I must be pretty far gone.) And I'm not dreaming commutative diagrams yet.

So here's my advice: if someone comes up to you in a shady bar or alleyway and offers you a monad, or an adjunction, or even an innocent little natural transformation, just say "no!".


Friday, March 17, 2006

What can I do with adjoints? And Lemma 28.

By 'do' I mean 'compute'.

In order to teach myself about F-(co)algebras I wrote some literate Haskell that actually made use of them. The neat thing was that it gave me something for nothing. I wrote a generic unfold function (mostly not by thinking about it but by juggling all the functions that were available to me until I found one of the correct type) and I was amazed to find that it did something that I found useful, or at least interesting.

Can I do the same for adjunctions? Unfortunately, all of the examples I tried were basically uninteresting. They were quintessential category theory, you spend ages unpacking the definitions only to find that what you ended up with was trivial. Unlike the fold/unfold case I didn't find myself ever defining a new function that I could use for something.

I did just find this link with some intuition that I'd never read before: 'To get some understanding of adjunctions, it's best to think of adjunction as a relationship between a "path" to an element and a "data structure" or "space" of elements.' Hmmm...none of the category theory books talks about that. I think I can see it: a forgetful functor rubs out the path and just leaves you with the destination. (Don't take my word for it, I'm just guessing at this point.)
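The one adjunction I can actually compute with is the one between pairing and function types that Haskell programmers use all the time. A minimal sketch (standard material, though the names here are my own):

-- For a fixed type a, the functor (,a) - 'pair with an a' - is left
-- adjoint to the functor (a ->) - 'functions out of a'. The natural
-- bijection between hom-sets is essentially curry/uncurry:
leftAdjunct :: ((b, a) -> c) -> (b -> (a -> c))
leftAdjunct f = \b x -> f (b, x)

rightAdjunct :: (b -> (a -> c)) -> ((b, a) -> c)
rightAdjunct g = \(b, x) -> g b x

-- The counit is evaluation: follow the 'path' to its destination.
counit :: (a -> c, a) -> c
counit (f, x) = f x

Unpacking the definitions here gives exactly the kind of trivial content I complained about above - but at least it's runnable.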

Maybe I'm now on target to be able to understand Lambek & Scott which I bought 20 years ago (I'm older than you thought), not really knowing what it was actually about. BTW Check out the review on Amazon: "I was looking for a book for my girlfriend this Christmas and stumbled upon this one..."

And I thought I'd mention Newton's Lemma 28. But after I asked a question about it on Usenet, some other people said far more interesting things than I could ever say.


Thursday, March 16, 2006

Answers and questions

The answer to that cellular automaton puzzle was that it generates the derivatives of the tan function at zero. It uses the fact that if f=tan x, then f'=1+f^2. We can now repeatedly use this to compute higher derivatives, eg.

(f^n)' = nf^(n-1)f'=nf^(n-1)(1+f^2) = nf^(n-1)+nf^(n+1).

You should now see why the CA rule arises.

(Why n,n instead of (n-1),(n+1) as in the rule I gave? Because this is the 'push' version rather than the 'pull' version of the rule that I gave. If you imagine differentiation acting as a linear operation on the vectors with respect to the basis (1,f,f^2,f^3,...) then the 'push' and 'pull' versions are described by matrices that are transposes of each other.)
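For concreteness, here's a minimal Haskell sketch of the automaton, using the rule exactly as stated in the puzzle below, with the state as a finite list that grows by one cell per step (the trailing zeroes are implicit):

step :: [Integer] -> [Integer]
step as = zipWith3 rule [0 ..] (0 : as) (tail as ++ [0, 0])
  where
    rule 0 _ r = r                          -- a0 <- a1
    rule i l r = (i - 1) * l + (i + 1) * r  -- ai <- (i-1)*left + (i+1)*right

-- Start from 1 + f^2, ie. [1,0,1], and read off the leftmost cell:
tanDerivs :: [Integer]
tanDerivs = map head (iterate step [1, 0, 1])
-- take 7 tanDerivs == [1,0,2,0,16,0,272], the derivatives of tan at 0.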

I was pretty excited by this and tried applying it to all sorts of other functions. But actually it was hard to make it work. Maybe the fact that there is a nice formula for the tangent function is just an accident. Anyway, Jan Rutten, whose paper I got this from, has lots of nice papers on coalgebras and infinite streams. I think I've now learned enough about coalgebras to move onto the next section of my category theory book: Adjoints.

Wednesday, March 15, 2006

Homotopies between proofs and between programs

Many years ago, while out drinking after an algebraic topology seminar, someone mentioned the idea of using algebraic topology to analyse computer programs. I never did get the details but I'm guessing it must have been something like the material presented at this conference.

Anyway, the comment got me thinking. I spend much of my time refactoring code. This means making small modifications to software intended to restructure it into a better form without changing its behaviour (eg. I've just been dumped with a single C++ source file 0.25MB long, most of which is one function, and which is in considerable need of some tidying up!). If you think of a program as a path to get you from an input to an output then a change to that program, one that keeps the behaviour the same, is a kind of homotopy between paths. But I couldn't really get anywhere with that idea.

In Baez's This Week's Finds this week he talks about homotopies between proofs. These are ways to convert one proof of a proposition into another proof of the same proposition. Cut elimination is such an operation but there are surely others.

According to the Curry-Howard isomorphism, a computer program that maps type A to type B is essentially the same thing as a proof of B assuming A. So a homotopy between programs is the same thing as a homotopy between proofs. For example, Baez's example of converting between two proofs of P -> R corresponds exactly to converting the piece of Haskell code f . (g . h) to (f . g) . h using the associativity of . (. is Haskell for function composition.)
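Here are the two sides of that rewrite as concrete Haskell, just to make the correspondence visible; the refactoring that turns one into the other is the program-level shadow of the homotopy between the two proofs:

assocL, assocR :: (c -> d) -> (b -> c) -> (a -> b) -> (a -> d)
assocL f g h = f . (g . h)
assocR f g h = (f . g) . h

-- assocL and assocR agree on all inputs, but they are built differently;
-- the homotopy is the rewrite between the two builds, not the equality.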

So, is there some kind of interesting topology to be extracted from this? Does the space of programs that map input x to output f(x) have interesting fundamental groups or homology groups? I guess a good place to start would be with two simple programs to perform the same operation that can't be transformed into each other by simple steps. And do I really need omega-categories to make sense of this?

Anyway, besides Baez's stuff there's also theo's comments with a real world example of what looks like a homotopy between proofs.

Tuesday, March 14, 2006

Cellular automaton puzzle

Here are the cells of a 1D cellular automaton:


+--+--+--+--+
|a0|a1|a2|a3| ...
+--+--+--+--+


Instead of a finite number of states the ai are integers.

Here's the update rule:

a0 <- a1
ai <- (i-1)*left+(i+1)*right, i>0

where left and right refer to the cells to the left or right of ai.

We start it in this state:

+--+--+--+--+--+--+--+
| 1| 0| 1| 0| 0| 0| 0| ... all zeroes
+--+--+--+--+--+--+--+


What is the significance of the numbers that appear in the leftmost cell?

If you can't figure it out the answer is here. (Though it might be easier to work it out for yourself than read that paper!)

I think this may actually be useful. I've spent quite a bit of time recently writing code to do the sorts of things that these automata do - but these automata may do it better.

Monday, March 13, 2006

Coalgebras and Automata

fold is a really useful operator in a number of programming languages, including Haskell. It also has a dual partner, unfold. But more importantly, both can be generalised, using a little category theory, to F-algebras and F-coalgebras. Amazingly, unfold, in the right category (as I discovered after a weekend of frenzied category theory), gives a really nice way to convert descriptions of automata into runnable automata. Anyway, I wrote the whole thing up as a piece of literate Haskell.
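(If you haven't met unfold: it builds a structure from a seed instead of consuming one. A minimal example using the standard Data.List.unfoldr, just to give the flavour - the generalised version in the write-up subsumes this:)

import Data.List (unfoldr)

-- The seed is a counter; Nothing signals that the 'machine' has halted.
countdown :: Int -> [Int]
countdown = unfoldr (\n -> if n <= 0 then Nothing else Just (n, n - 1))

-- countdown 5 == [5,4,3,2,1]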

UPDATE: That came out formatted ugly so I've moved it to my web site where I'll eventually format it better. Here's the link.

Thursday, March 09, 2006

Hyperfun!

Smullyan's favourite game is called the Hypergame, invented by Zwicker. I hadn't heard of it until today when I read about it in Barwise and Moss's book Vicious Circles.

Consider games where the players take turns and the game is guaranteed to terminate in a finite number of moves. These are called well-founded games.

The Hypergame is quite simple. The first player chooses a well-founded game. The game then turns into the chosen game, with the second player making the opening move and play continuing from there. Eg. if the first player says "Chess (with whatever house rules are required to make it terminate)" then they start playing Chess with the second player moving first. Because of the stipulation of well-foundedness the Hypergame lasts precisely one move more than some well-founded game. Therefore the Hypergame is itself well-founded and always terminates.

Anyway, Alice and Bob decide to play:

Alice: Let's play the Hypergame!
Bob: Cool! I'll go first. My move is to select the Hypergame.
Alice: OK, now we're playing the Hypergame. So my first move is to select the Hypergame.
Bob: The Hypergame is cool, I pick that.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
etc.


Wednesday, March 08, 2006

It's a square, square world!

I was browsing for books on elliptic functions on eBay when I came across this link to a book by Oscar S Adams called "Elliptic Functions Applied To World Maps". This is pretty obscure, I thought to myself, so I googled Oscar. Sure enough, there really was an Oscar S Adams who worked on cartographic projections and his speciality was conformal (ie. angle preserving) projections, including this one:

[image: one of Adams' conformal projections]
Anyway, there are lots of interesting conformal projections out there. Some of them are quite pretty so I thought I'd share. I like the use of the Schwarz-Christoffel transformation, better known (to me) from 2D fluid dynamics, to map the world into a variety of polygons. I was also surprised to see that Charles Sanders Peirce worked in this area.


Just for the hell of it I might place a bid on the book.


UPDATE: Come on! 'fess up if you're the person bidding against me!

Tuesday, March 07, 2006

Blind Games

I've always been fascinated by games that involve bluffing and I've always been more fascinated by 'blind' games. I define a 'blind' game to be one where there is some state in the game that a player can choose not to reveal. Instead the player claims what that state is. The other player can choose to accept or reject this state of affairs, but if it is rejected then the actual state is revealed and one or other player is 'punished' as appropriate. (Eg. blind chess isn't 'blind'.)

Consider this trivial game: you deal a pack of cards to N players. The players take turns and the winner is the first one to get rid of all their cards. A turn consists of discarding a set of cards that all have the same number or rank. This game isn't very interesting.

But now apply the 'blind transform' to this game with the 'punishment' of picking up all of the discards. The players play cards face down and make some claim about what cards they are laying down. If someone challenges this claim you have a look and mete out the punishment. A trivial game has now been turned into a mildly entertaining game. It can be turned into an excellent game by means of the 'ethyl transform' - playing while drunk. But moving on...

One of my favourite games years ago, when I could talk people into playing it, was "Blind Le Truc". It's Le Truc but with all of the cards played face down but players claiming what the cards are. In this case the punishment is losing a trick. Even played 'sighted' this game is full of bluff. Playing it blind brings it to a whole new level. Players can play against each other for extended periods of time playing what is practically an imaginary game going through all the motions of Le Truc without ever seeing a card. It's a lot of fun. Well, I think so anyway.

Even the most trivial of games can become interesting. Eg. each player is given an entire suit of a deck of cards. Each round each player places one card in the middle. Whoever places the highest card in the centre wins that round. Play continues until nobody has any cards left and the winner is whoever won the most rounds. After applying the blind transform, with loss of the round as the punishment, this game is turned into something fun, at least for a few minutes.

Poker is also a blind version of a trivial game.

How amenable are these games to analysis? That last game is pretty simple. Suppose two players each have only 3 cards. What is the optimal way to play?

You can also apply the blind transform multiple times. It's more interesting if the nth application has a tougher punishment than the (n-1)th. In that case it corresponds to raising the stakes for being caught, or accusing someone of, 'cheating'.

Friday, March 03, 2006

When is one thing equal to some other thing?

That's the title of a rough draft of a paper by Barry Mazur (courtesy of Lambda the Ultimate). It's basically an introduction to the ideas of category theory but it has some nice comments along the way on what we mean by equivalence - often a tricky subject in category theory.


And here's a coincidence. I'd just done a web search to find out what properties the number 691 has. (It pops up in the theory of modular forms.) And then I see a link to this paper (completely independently) which uses 691 as an example on the first page. Weird!


Thursday, March 02, 2006

An Actual Application of Fractal Dimension

According to this story, and others, a trove of 32 paintings, purported to be Jackson Pollocks, was discovered last year. But many people had doubts about whether or not they were really by the master of paint pouring. So a physicist, Richard Taylor, used some software to compute the fractal dimension of known Pollock paintings and compared it to these new paintings. It turned out that the fractal dimension didn't fit in with the trend of Pollock's work and hence they look like fakes.

Of course, me mentioning this fact isn't intended to be an endorsement of the reliability of these methods. I put as much trust in them as I'd put in using catastrophe theory to navigate my way through a prison riot.


Wednesday, March 01, 2006

A Cautionary Tale for Would-Be Generalisers

You may or may not know this story already as it's been floating around for a while. But I'm going to retell it anyway:

Define sinc(x) = sin(x)/x and sinc(0) = 1

Let 'I' mean integration from 0 to infinity.

I sinc(x) = pi/2
I sinc(x)sinc(x/3) = pi/2
I sinc(x)sinc(x/3)sinc(x/5) = pi/2
I sinc(x)sinc(x/3)sinc(x/5)sinc(x/7) = pi/2
I sinc(x)sinc(x/3)sinc(x/5)sinc(x/7)sinc(x/9) = pi/2

You see the pattern, right? So the story is that some guy was evaluating these integrals for some reason or other, getting pi/2 all the time. They happily chugged on getting...

I sinc(x)sinc(x/3)sinc(x/5)sinc(x/7)sinc(x/9)sinc(x/11) = pi/2
I sinc(x)sinc(x/3)sinc(x/5)sinc(x/7)sinc(x/9)sinc(x/11)sinc(x/13) = pi/2

So when the computer algebra package this person was using said:

I sinc(x)sinc(x/3)sinc(x/5)sinc(x/7)sinc(x/9)sinc(x/11)sinc(x/13)sinc(x/15) = 467807924713440738696537864469/935615849440640907310521750000*pi

they knew they'd hit a bug. They complained to the vendor who agreed that, yes, definitely, there was something screwy in their integration routine.

Except there wasn't. Weird eh?

I've known this story for a while but I've only just stumbled on Borwein's paper that explains the phenomenon. They're called Borwein Integrals now.
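Roughly speaking, the pattern survives while 1/3 + 1/5 + ... + 1/(2n+1) stays below 1 (see the paper for the precise statement). A two-line Haskell check of where that first fails:

sums :: [Double]
sums = scanl1 (+) [1 / k | k <- [3, 5 ..]]

-- takeWhile (< 1) sums has 6 entries (up to and including 1/13); adding
-- 1/15 pushes the total past 1, exactly where the pi/2 pattern breaks.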

Update: Corrected a small error. It's pi/2 for the first 7 terms, not just the first 6. Even more amazing!
