### The Game of Thermonuclear Pennies

The game of Thermonuclear Pennies is a lot like the game of Nuclear Pennies. It's played on a semi-infinite strip of cells, extending to infinity on the right, with a bunch of pennies in each cell (and a finite number in total). Instead of a single penny fissioning into two pennies, it now splits into three adjacent pennies. And conversely, three neighbouring pennies may be fused into one.
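Concretely, we can model boards and moves with a small sketch (my own encoding, not from the post): a position is a list of per-cell penny counts, leftmost cell first, and `fission`/`fusion` are partial functions that fail on illegal moves.

```haskell
-- A position is a list of penny counts, leftmost cell first.
type Pos = [Int]

-- Pad with empty cells on the right so that cell n+1 exists.
pad :: Int -> Pos -> Pos
pad n p = take (max (n + 2) (length p)) (p ++ repeat 0)

-- Add d pennies to cell i.
adjust :: Int -> Int -> Pos -> Pos
adjust i d p = [ if j == i then c + d else c | (j, c) <- zip [0 ..] p ]

-- Fission: a penny in cell n (n >= 1) splits into three pennies in cells
-- n-1, n and n+1; the net board change is +1 in cells n-1 and n+1.
fission :: Int -> Pos -> Maybe Pos
fission n p
  | n >= 1 && p' !! n >= 1 = Just (adjust (n - 1) 1 (adjust (n + 1) 1 p'))
  | otherwise              = Nothing
  where p' = pad n p

-- Fusion: the reverse move, needing a penny in each of cells n-1, n, n+1.
fusion :: Int -> Pos -> Maybe Pos
fusion n p
  | n >= 1 && all (\j -> p' !! j >= 1) [n - 1, n, n + 1] =
      Just (adjust (n - 1) (-1) (adjust (n + 1) (-1) p'))
  | otherwise = Nothing
  where p' = pad n p
```

For example, fissioning the single penny in cell 1 of `[0,1]` yields `[1,1,1]`, and fusing it back recovers `[0,1,0]`.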

Here are examples of legal moves:

[figure: a penny fissioning into three pennies in adjacent cells]

and

[figure: three pennies in adjacent cells fusing into one]

Again the puzzles consist of trying to get from a start position to a target position. Here's a nice example:

[figure: a sample puzzle]

Just as with Nuclear Pennies we can assign numerical values to positions in this game in such a way that if there is a legal sequence of moves from A to B then the value of A equals the value of B. In this case we assign the values very slightly differently. Each cell is assigned a value as follows:

[figure: cells valued 1, i, i², i³, i⁴, … from left to right]

Where i is the usual square root of -1. The value of a position is simply the sum of the values of the pennies, where each penny takes on the value of the cell it sits in. So

[figure: an example position and its value]

As before, we say that moving from A to B is paralegal if A and B have the same value. Because i satisfies i = 1 + i + i², it should be clear that legal implies paralegal again. But here's a surprising fact: if both A and B have pennies not on the leftmost cell, then a move from A to B is legal if it is paralegal. In other words, *we can tell if a sequence of moves is possible just by looking at the numerical values of the start and end points*. What's more, the corresponding result also holds for Nuclear Pennies and a wide variety of related games besides. Before proving that I want to talk about the algebraic structure of these types of games. (And I just figured out how to procedurally generate diagrams with OmniGraffle using AppleScript so it's an excuse to draw lots of diagrams.)

(BTW If we're allowed to have a negative number of pennies in a cell then you can simply treat a position in these games as polynomials with integer coefficients. You can then use standard theorems about polynomials to prove the result in a straightforward way. But those theorems rely on subtraction, and without negative numbers those methods fail.)
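The value computation is easy to mechanise. Here's a sketch (the helper names `ipow` and `value` are my own): Gaussian integers as pairs of `Int`s, with a position's value obtained by evaluating it at i.

```haskell
-- Gaussian integers as (real, imaginary) pairs.
type Gauss = (Int, Int)

gadd :: Gauss -> Gauss -> Gauss
gadd (a, b) (c, d) = (a + c, b + d)

gscale :: Int -> Gauss -> Gauss
gscale k (a, b) = (k * a, k * b)

-- Powers of i cycle with period 4: 1, i, -1, -i.
ipow :: Int -> Gauss
ipow n = [(1, 0), (0, 1), (-1, 0), (0, -1)] !! (n `mod` 4)

-- The value of a position: sum over cells of count * i^cell.
value :: [Int] -> Gauss
value p = foldr gadd (0, 0) [ gscale c (ipow n) | (n, c) <- zip [0 ..] p ]
```

Note that `value [0,1]` and `value [1,1,1]` agree, as they must: fission preserves value.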

Firstly, we can add positions in Thermonuclear Pennies (which I'll now call TNP). Simply add the numbers of pennies in each cell:

[figure: cell-by-cell addition of two positions]

We can also multiply positions. We do this by making a 'multiplication table' from the original positions and then summing along the lower-left to upper-right diagonals. I hope this example makes it unambiguous:

[figure: multiplication of two positions via a multiplication table]

Exercises. Convince yourself that A+B=B+A, A*(B+C) = A*B+A*C, (A+B)*C = A*C+B*C, A*(B*C) = (A*B)*C.

If you did the exercises, you've now shown that TNP positions form a commutative semiring, or rig, with the empty board serving as 0.
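In polynomial terms the two operations are just coefficient-wise addition and convolution. A sketch (my own `addP`/`mulP`, assuming the list-of-counts encoding of positions):

```haskell
-- Add positions cell by cell.
addP :: [Int] -> [Int] -> [Int]
addP xs []           = xs
addP [] ys           = ys
addP (x:xs) (y:ys)   = (x + y) : addP xs ys

-- Multiply positions: summing the 'multiplication table' along diagonals
-- is exactly polynomial multiplication, i.e. a convolution.
mulP :: [Int] -> [Int] -> [Int]
mulP [] _      = []
mulP (x:xs) ys = addP (map (x *) ys) (0 : mulP xs ys)
```

For instance `mulP [1,2] [3,4]` is `[3,10,8]`, matching (1+2x)(3+4x) = 3+10x+8x², and you can spot-check the semiring laws on examples.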

Now we're ready to use a proof from *Objects of Categories as Complex Numbers* by Fiore and Leinster. If we define

[figure: x defined to be the position with a single penny in the second cell from the left]

then every position is a polynomial, with non-negative integer coefficients, in x. We can also interpret the equation x = 1 + x + x² as saying that fission and fusion are legal moves. More generally, we consider two positions equivalent if there is a sequence of legal moves going from one to the other where each move maps f(x) + x to f(x) + 1 + x + x², or the converse. If we define p₁(x) = x and p₂(x) = 1 + x + x², then equation (3) in that paper defines exactly what we mean by a sequence of legal moves. (BTW For those wondering about the order in which I wrote this, I read that definition after inventing the game :-) So now we can apply Theorem 5.1 to find

### Corollary

Let q₁(x) and q₂(x) represent TNP positions with at least one penny somewhere other than the far left. If x² + 1 = 0 ⇒ q₁(x) = q₂(x) ring-theoretically, then there is a legal sequence of moves from q₁(x) to q₂(x).

"x² + 1 = 0 ⇒ q₁(x) = q₂(x)" is just another way of saying q₁(i) = q₂(i). So we have a simple way to tell whether there is a legal way of getting from one position to another. The puzzle example I gave above is soluble because i⁵ = i.
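The corollary gives a decision procedure, which we can sketch as code (my own helper names, with positions again as lists of counts): two positions, each with a penny off the far left, are mutually reachable exactly when their values at x = i agree.

```haskell
-- Powers of i cycle with period 4: 1, i, -1, -i.
ipow :: Int -> (Int, Int)
ipow n = [(1, 0), (0, 1), (-1, 0), (0, -1)] !! (n `mod` 4)

-- Evaluate a position at x = i, as a Gaussian integer (re, im).
value :: [Int] -> (Int, Int)
value p = ( sum [ c * fst (ipow n) | (n, c) <- zip [0 ..] p ]
          , sum [ c * snd (ipow n) | (n, c) <- zip [0 ..] p ] )

-- The corollary only applies when a penny sits off the leftmost cell.
offLeft :: [Int] -> Bool
offLeft p = any (> 0) (drop 1 p)

-- Decide reachability for positions satisfying the side condition.
soluble :: [Int] -> [Int] -> Bool
soluble p q = offLeft p && offLeft q && value p == value q
```

For example `soluble [0,1] [0,0,0,0,0,1]` is `True`: that is the i⁵ = i puzzle above.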

Actually, the corollary isn't too hard to prove without the theorem. Here's a hint for how to do it. If we allow negative numbers of pennies the puzzle is fairly easy to solve. But we don't need negative pennies because if there is at least one penny, we can saturate as many positions as we like with as many pennies as we like simply by madly fissioning pennies all over the place in a big chain reaction. So we start by doing the chain reaction to borrow lots of pennies, then carrying out the solution using negative numbers (which won't actually ever go negative if our chain reaction was big enough) and then reversing the chain reaction to pay back what we borrowed. (It's a bit like real life. In a financial market without negative numbers there are many transactions that can't be performed. But as soon as we allow borrowing we open up many more possibilities.)

### Embedding Complex Numbers as Types

So back to types. People have frequently found the need to embed the natural numbers as types. A popular scheme is something (in Haskell) like

> data Zero

> data S a = S a

> type One = S Zero

> type Two = S One

> type Three = S Two

and so on. Then we can go on to define addition and multiplication. But types already have a natural addition and multiplication: the type constructors `Either` and `(,)`. The problem is that, for example, `Either One Three` isn't the same type as `(Two,Two)`. We could relax things a bit and allow isomorphism instead of equality. But even then, these types aren't isomorphic. Instead we could define:

> data Zero

> data Unit = Unit

> type S a = Either Unit a

> type One = S Zero

> type Two = S One

> type Three = S Two

Now we can use `Either` and `(,)` as addition and multiplication.
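One way to see that `Either One Three` and `(Two,Two)` really are isomorphic under this encoding is to count inhabitants: finite types with equal cardinality are isomorphic. A sketch (the `Proxy`/`Count` machinery is my own, not from the post):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

data Zero                      -- the empty type: no inhabitants
data Unit = Unit               -- a one-element type
type S a   = Either Unit a
type One   = S Zero
type Two   = S One
type Three = S Two

data Proxy a = Proxy

-- Count the inhabitants of a type built from Zero, Unit, Either and (,).
class Count a where
  count :: Proxy a -> Int

instance Count Zero where count _ = 0
instance Count Unit where count _ = 1
instance (Count a, Count b) => Count (Either a b) where
  count _ = count (Proxy :: Proxy a) + count (Proxy :: Proxy b)
instance (Count a, Count b) => Count (a, b) where
  count _ = count (Proxy :: Proxy a) * count (Proxy :: Proxy b)
```

Both `Either One Three` and `(Two, Two)` count to 4, so addition really is `Either` and multiplication really is `(,)` up to isomorphism.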

But if you're content to live with isomorphism then maybe we could embed other types. Consider the type

> data Tree = Leaf | Trunk Tree | Fork Tree Tree

It's easy to write an isomorphism `Tree -> Either One (Either Tree (Tree,Tree))`. In other words, up to isomorphism we have `Tree = 1 + Tree + Tree²`. If you remember my earlier post it should be clear that legal sequences of moves in TNP give rise to isomorphisms of types constructed from `Tree`. In other words, theorems about TNP apply to `Tree`. Therefore given two polynomials, p₁ and p₂, p₁(`Tree`) = p₂(`Tree`) if and only if p₁(i) = p₂(i), as long as the pᵢ contain non-constant terms. Looked at another way, given any Gaussian integer, a+bi, we can embed this as a type in such a way that the embedding respects `Either` and `(,)`. In fact the type

a·Tree + b·Tree² + c·Tree³ + d·Tree⁴

embeds (d-b) + (a-c)i. For example, abusing Haskell notation,

> type Zero = Tree + Tree³

really does act like zero in that (`Zero`, p(`Tree`)) has an isomorphism with `Zero` for any non-constant polynomial p.
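As a concrete check, the isomorphism `Tree -> Either One (Either Tree (Tree,Tree))` mentioned above can be written out explicitly (a sketch, using `()` in place of `One`):

```haskell
data Tree = Leaf | Trunk Tree | Fork Tree Tree
  deriving (Eq, Show)

-- The O(1) isomorphism witnessing Tree = 1 + Tree + Tree²,
-- one constructor per summand.
toSum :: Tree -> Either () (Either Tree (Tree, Tree))
toSum Leaf       = Left ()
toSum (Trunk t)  = Right (Left t)
toSum (Fork l r) = Right (Right (l, r))

fromSum :: Either () (Either Tree (Tree, Tree)) -> Tree
fromSum (Left ())              = Leaf
fromSum (Right (Left t))       = Trunk t
fromSum (Right (Right (l, r))) = Fork l r
```

Each direction just pattern-matches once, so both maps (and hence the round trip) take constant time, which is exactly the "particularly nice" sense of isomorphism the NB below asks for.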

As far as I can see, this fact is completely and utterly useless...

NB When I say isomorphism above I mean "particularly nice isomorphism", which in this case means an isomorphism that takes time O(1). Otherwise all countable tree structures would trivially be isomorphic.