What's the use of a transfinite ordinal?
This post is going to talk about one-player games (which I'll call 1PGs, or just games). They seem pretty stupid at first, but it turns out that they have a lot of rich and applicable theory. The idea is that at any position in a 1PG there is a set of new positions that you can move to. You lose when you have no legal moves, and the goal is to avoid losing for as long as you can. For convenience, let's identify your position in a 1PG with the set of positions you could move to. A position in a 1PG is then just another 1PG. So we can give the following recursive formal definition of a 1PG:
A one-player game is either the losing position with no legal moves, denoted by ∅, or a set of one-player games.
A sequence of legal moves in a 1PG, say x, is then a sequence of the form x ∋ x₀ ∋ x₁ ∋ x₂ ∋ ....
Did you notice that I sneaked in a really big restriction on 1PGs in that definition? My definition means that 1PGs are the same thing as sets. And sets are well-founded, meaning that any descending membership sequence x ∋ x₀ ∋ x₁ ∋ x₂ ∋ ... eventually terminates. So my definition implicitly contains the restriction that you always eventually lose. That's fine for my purposes.
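If it helps to see the definition as code, here's a minimal sketch in Haskell (my own illustration, not part of the definition: lists stand in for sets, and the names Game and playFirst are invented):

    -- A 1PG is a collection of 1PGs: the positions you can move to.
    data Game = Game [Game]

    -- The losing position, with no legal moves.
    zero :: Game
    zero = Game []

    -- One way to play: always take the first available move, counting
    -- how long we survive.  Well-foundedness of sets is exactly what
    -- guarantees this recursion bottoms out.
    playFirst :: Game -> Int
    playFirst (Game [])      = 0              -- no legal moves: we lose
    playFirst (Game (g : _)) = 1 + playFirst g

    main :: IO ()
    main = print (playFirst (Game [Game [zero], zero]))  -- prints 2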
Now suppose that x ∋ y ∋ z is a legal sequence of plays in some 1PG. You might as well assume that z is in x. The reason is that if z were a legal move from x, you wouldn't take it, because in order to delay the inevitable you're going to prefer to go via y. So we'll assume that any game x is "transitively closed", i.e. if x ∋ y and y ∋ z then x ∋ z.
Now I want to restrict things even further to the most boring kind of 1PG of all, games where the positions are all totally ordered by membership. Intuitively what I mean is that you can think of every move in the game as simply moving you along a 'line'. Given any two positions in the game, x and y, either x ∈ y, or y ∈ x, or x = y. In these games there are no branches; the only thing you can do in these games is delay the inevitable. A bit like life really. We'll also use the notation a < b to mean a ∈ b.
I'll call these games ordinal 1PGs. You may have noticed that this is precisely the same as the ordinary definition of an ordinal.
Let's call the game ∅ by the name 0. 0 is sudden death. Recursively define the game n = {0, 1, 2, ..., n-1}. In the game of n you have up to n steps to play before you die (though you could hurry things along if you wanted to).
But what about this game: ω = {0, 1, 2, ...}? There's nothing mysterious about it. In your first move you choose a natural number n, and then you get to delay the end for up to n more moves. This is our first transfinite ordinal, but it just describes a 1PG that anyone can play and which is guaranteed to terminate after a finite number of moves. We can't say in advance how many moves it will take, and we can't even put an upper bound on the game length, but we do know it's finite.
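Laziness makes this easy to express. A sketch in the same style as before (nGame, omega and survive are names I've made up): ω's list of moves is infinite, but any actual play commits to some n on the first move and is finite from then on.

    data Game = Game [Game]

    -- The game n = {0, 1, ..., n-1}.
    nGame :: Int -> Game
    nGame n = Game [nGame k | k <- [0 .. n - 1]]

    -- ω = {0, 1, 2, ...}: infinitely many moves, held lazily.
    omega :: Game
    omega = Game [nGame n | n <- [0 ..]]

    -- Survive as long as possible in a finite ordinal game by always
    -- taking the largest available move.
    survive :: Game -> Int
    survive (Game []) = 0
    survive (Game ms) = 1 + survive (last ms)

    main :: IO ()
    main = do
      let Game ms = omega
      print (1 + survive (ms !! 5))  -- choose n = 5: 6 moves in total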
Given two games, a and b, we can define the sum a+b. This is simply the game where you play through b first, and then play through a. Because of the transitivity condition above you could simply abandon b at any stage and then jump into a, but that's not a good thing to do if you're trying to stay alive for as long as possible. The first example of such a 1PG is the game ω+1. You get to make one move, then you choose an n, and then you get to play n.
What about 1+ω? Well you get to choose n, play n moves, and then play an extra move. But that's exactly like playing ω and making the move n+1. So 1+ω and ω are the same game. Clearly addition of 1PGs isn't commutative.
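On the set-of-games representation, addition has a tidy recursion: the positions of a+b are the positions of a together with a+b' for each move b' of b. That's the standard set-theoretic recursion for ordinal addition; the Haskell spelling below is my own sketch, and it makes the asymmetry between ω+1 and 1+ω easy to poke at.

    data Game = Game [Game]

    nGame :: Int -> Game
    nGame n = Game [nGame k | k <- [0 .. n - 1]]

    omega :: Game
    omega = Game [nGame n | n <- [0 ..]]

    moves :: Game -> [Game]
    moves (Game ms) = ms

    -- a + b: all positions of a, plus a+b' for every move b' of b.
    -- Playing a+b really is "play through b, then play through a".
    plus :: Game -> Game -> Game
    plus a@(Game as) (Game bs) = Game (as ++ [plus a b' | b' <- bs])

    survive :: Game -> Int
    survive (Game []) = 0
    survive (Game ms) = 1 + survive (last ms)

    main :: IO ()
    main = do
      -- A play of ω+1: one move takes you to ω, then choose n = 4.
      print (1 + 1 + survive (moves omega !! 4))               -- 6
      -- The moves of 1+ω are 0, 1+0, 1+1, 1+2, ...: the same
      -- ladder as ω itself.  Choosing move 5 lands on a length-5 game.
      print (1 + survive (moves (plus (nGame 1) omega) !! 5))  -- 6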
We can multiply two games as well. Given games a and b, the product a·b is defined like this: we have game a and game b in front of us at the same time, a on the left and b on the right. At any turn we have a choice: make a move in the game on the left, or make a move in the game on the right, resetting the left game back to a again. It's like playing through b, but playing through a fresh copy of a at each step of b. Note how for natural numbers a and b the game product a·b is just the game for the ordinary arithmetical product ab.
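The same style of recursion works for the product: the positions of a·b are a·c + d, for each move c of b and each position d of a (moving in the right game puts you at a·c; d records where you are in the fresh copy of a). Again this is the standard ordinal recursion, and the code is my own sketch; it repeats the earlier definitions so it stands alone.

    data Game = Game [Game]

    nGame :: Int -> Game
    nGame n = Game [nGame k | k <- [0 .. n - 1]]

    plus :: Game -> Game -> Game
    plus a@(Game as) (Game bs) = Game (as ++ [plus a b' | b' <- bs])

    -- a·b: positions are a·c + d for each move c of b, position d of a.
    times :: Game -> Game -> Game
    times a@(Game as) (Game bs) = Game [plus (times a c) d | c <- bs, d <- as]

    survive :: Game -> Int
    survive (Game []) = 0
    survive (Game ms) = 1 + survive (last ms)

    main :: IO ()
    main = print (survive (times (nGame 2) (nGame 3)))  -- 2·3: 6 moves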
We can also define exponentiation. We can define a^n, for a natural number n, in the obvious way as a·a·...·a, n times. Before getting onto more general exponentiation I need to classify ordinal games. There are three kinds. There's
(1) the zero game 0 = ∅.
(2) the games where there is one 'largest' next move, the one you should take. These are the games of the form a+1, and the next position in the game, if you're playing to survive, will be a. For example, the game 7 is 6+1.
(3) the games where there is no single best choice of next move. These are games where there is no largest next move, but instead an ever-increasing sequence of possible next moves, and you have to pick one. The first example is ω, where you're forced to pick an n. These are called limit ordinals.
We can use this to recursively define a^b using the classification of b.
(1) If b = 0 then by definition a^b = 1 and that's that.
(2) If b = c+1, then a^b = a^c·a.
(3) If b is a limit ordinal then a move in a^b works like this: you pick a move in b, say c, and now you get to play a^c (or, by transitive closure, any smaller game).
For example, you play ω^ω like this: in your first move you pick an n, and then you play ω^n. Note how the state of play never becomes infinite: you're always playing with finite rules on a finite 'board'. This is an important point. Even though ω^(ω^3+2)·2+ω+4 seems like it must be a very big kind of thing, it describes a finite process that always terminates.
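The classification also has a direct rendering as a Haskell datatype, the folklore 'Brouwer tree ordinals': an ordinal is zero, a successor, or the limit of a sequence of smaller ordinals. Here's a sketch (the datatype is standard; oplus, otimes, opow and play are my own names) in which exponentiation follows rules (1)-(3) line for line:

    -- Zero, successor, or limit of a sequence: the three kinds above.
    data Ordinal = Zero | Succ Ordinal | Limit (Int -> Ordinal)

    nat :: Int -> Ordinal
    nat 0 = Zero
    nat n = Succ (nat (n - 1))

    omega :: Ordinal
    omega = Limit nat

    oplus, otimes, opow :: Ordinal -> Ordinal -> Ordinal
    oplus a Zero      = a
    oplus a (Succ b)  = Succ (oplus a b)
    oplus a (Limit f) = Limit (oplus a . f)

    otimes _ Zero      = Zero
    otimes a (Succ b)  = otimes a b `oplus` a
    otimes a (Limit f) = Limit (otimes a . f)

    opow _ Zero      = Succ Zero              -- (1) a^0 = 1
    opow a (Succ c)  = opow a c `otimes` a    -- (2) a^(c+1) = a^c·a
    opow a (Limit f) = Limit (opow a . f)     -- (3) at a limit, pick a c

    -- One complete play: at a limit, consume a choice (that's a move);
    -- at a successor, take the largest move.  Returns the play's length.
    play :: [Int] -> Ordinal -> Int
    play _        Zero      = 0
    play cs       (Succ b)  = 1 + play cs b
    play (c : cs) (Limit f) = 1 + play cs (f c)
    play []       (Limit f) = 1 + play [] (f 0)  -- default if choices run out

    main :: IO ()
    main = print (play (repeat 2) (opow omega omega))  -- choose 2 at
                                                       -- every limit: 8 moves

Every play of ω^ω is finite; the choice list only decides how long you last.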
So what use are these things?
Suppose you have an algorithm that runs one step at a time, with each step taking a finite time, and where the internal state of the program at step t is sₜ. Suppose that you can find a map f from program states to the ordinals such that f(sₜ₊₁) < f(sₜ), i.e. the program always decreases the ordinal associated to its state. Then the program must eventually terminate.
For example, suppose the program iterates 10 times. We can simply assign f(sₜ) = 10 - t. Suppose instead your algorithm computes some number n at the first step, though you can't figure out in advance what that number is (or maybe it accepts n as an input from a user). Then we can use f(s₀) = ω and, once n is known, f(sₜ) = n - t + 1 thereafter. We don't know in advance what n is, but that doesn't matter: whatever n is entered, the program will terminate.
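Here's a toy elaboration of that idea in Haskell (invented for illustration): each step either decrements n, or decrements m and resets n to an arbitrary 'input'. No integer bound on the run length can be given in advance, but the measure f(m, n) = ω·m + n strictly decreases at every step, so the loop must terminate for any inputs.

    -- State (m, n).  The ordinal measure ω·m + n decreases at every
    -- step, which is the termination argument.
    steps :: [Int] -> (Int, Int) -> Int
    steps ins (m, n)
      | n > 0     = 1 + steps ins (m, n - 1)   -- ω·m + n  >  ω·m + (n-1)
      | m > 0     = case ins of                -- ω·m  >  ω·(m-1) + k, any k
          k : ks -> 1 + steps ks (m - 1, k)
          []     -> 1 + steps [] (m - 1, 0)
      | otherwise = 0                          -- measure 0: finished

    main :: IO ()
    main = print (steps [5, 3] (2, 1))  -- 11 steps for these inputs;
                                        -- termination holds for any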
This technique is particularly useful for proving termination of rewrite rules. For example, with the rewrite rules generated by Buchberger's algorithm we can map the first step to ω^n, for some n. And if we don't know what n is, we can just start with ω^ω. If you take a peek at the papers of Nachum Dershowitz you'll see how he applies this technique to many other types of rewrite rule.
Transfinite ordinals are not as esoteric as they may first appear to be. Using them to prove termination goes back to Turing and Floyd, but I learnt about the method from reading Dershowitz. (I think Floyd suggested using the method to prove termination of the standard rewrite rules for computing derivatives of expressions. It's tricky because differentiating a large product, say, can result in many new terms, so there's a race between the derivative 'knocking down' powers, and products causing growth.)
Probably the best example of using ordinals to prove the termination of a process is the Hydra game of Kirby and Paris. What's amazing about this game is that you need quite powerful axioms of arithmetic to prove termination: the game always ends, but that fact can't be proved in Peano arithmetic.
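For a taste of the game itself, here's a sketch of one simplified variant in Haskell (presentations differ on the exact regrowth rule; replacing the chopped head's parent with n+1 copies at round n is my own simplification, not necessarily the canonical Kirby-Paris rule):

    -- A hydra is a tree; the leaves are its heads.
    data Tree = Node [Tree]

    -- Round n: chop the leftmost head.  A head attached to the root
    -- just falls off; otherwise the head's parent is replaced, at the
    -- grandparent, by n+1 copies of itself minus the chopped head.
    step :: Int -> Tree -> Tree
    step n (Node ts) = Node (go ts)
      where
        go (Node [] : rest) = rest            -- head at the root
        go (t : rest)       = chop t ++ rest
        go []               = []
        chop (Node (Node [] : us)) = replicate (n + 1) (Node us)
        chop (Node (t : rest))     = [Node (chop t ++ rest)]
        chop (Node [])             = []       -- unreachable via go

    -- Fight until the hydra is dead; count the rounds.
    rounds :: Tree -> Int
    rounds = go 1
      where
        go _ (Node []) = 0
        go n h         = 1 + go (n + 1) (step n h)

    main :: IO ()
    main = print (rounds (Node [Node [Node []]]))  -- a tiny hydra: 3 rounds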
One more thing. All of this theory generalises to two-player games.
Update: I think mjd may have been about to write on the same thing. On the other hand, though he mentions program termination, he says of it "But none of that is what I was planning to discuss". So maybe what I've written can be seen as complementary.
3 Comments:
The use that I was introduced to many years ago is that mathematical induction works on transfinite ordinals. (You need to prove two induction steps: one is the successor ordinal case, and the other is the limit ordinal case.)
The reason why this is useful is that there are some "find the fixpoint" methods that don't terminate even after ω steps, but they do terminate some time after that. So if, say, you need to find the greatest fixpoint of a functional, sometimes you need to go further than ω.
Pseudonym,
Yes, I read that recently in Davey and Priestley's "Lattices and Order".
Gentzen proved Peano arithmetic consistent using ordinals (induction up to ε₀). This was a few years after Gödel showed it couldn't be done from within Peano arithmetic. It drives me nuts that people still claim nobody has proven arithmetic is consistent.
Incidentally, does anyone know of the relationship between the use of ordinals for strong normalization proofs and the use of logical relations for the same?