A number of branches of mathematics have some sort of implicit function theorem. These guarantee that we can define functions implicitly, rather than explicitly. For example, we can define the function f from the positive reals to the positive reals by the relation f(x)^2 = x. In this case we can write an explicit formula, f(x) = x^(1/2), but the implicit function theorem gives quite general conditions under which an implicit equation defines a function uniquely, even when it is very hard to write an explicit formula. We have something similar in the theory of datatypes. We define the list type implicitly in terms of lists: L(X) = 1 + X×L(X). There is a unique smallest type (and a unique largest type) satisfying this equation, and we use such equations in Haskell programs to uniquely specify our types. (Haskell uses the largest type.)
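To make the datatype equation concrete, here is a minimal standalone sketch of a hand-rolled list type as a solution of L(X) = 1 + X×L(X). The names `L`, `Nil`, `Cons` and `toList` are just illustrative choices, not anything from this series:

```haskell
-- The '1' in the equation becomes the nullary constructor Nil, and
-- the 'X × L(X)' becomes Cons, which pairs an element with another L.
data L x = Nil | Cons x (L x)

-- Fold back to an ordinary Haskell list to check it behaves as expected.
toList :: L x -> [x]
toList Nil         = []
toList (Cons x xs) = x : toList xs
```

Because Haskell interprets `data` declarations with the largest solution, `L` also contains infinite values such as `let ones = Cons 1 ones`.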
Provability logic also has a kind of implicit function theorem. Consider the following function:
> f0 p = Neg (Box p)

Suppose the equation p <-> f0 p had a solution. Then we would have a proposition that asserts its own unprovability. In fact, we know such a proposition: the Godel sentence from Godel's first incompleteness theorem. However, the Godel sentence uses a clever Quining technique to make a proposition that makes assertions about its own Godel number. We can't express such a thing in the language of provability logic. However, consider Godel's second incompleteness theorem. This tells us that if PA can prove its own consistency then it is in fact inconsistent. Or, to turn it around: if PA is consistent, then PA can't prove its consistency. Asserting that PA is consistent is the same as asserting ◊⊤ (because asserting that ⊤ is consistent with the rest of PA is the same as saying the rest of PA is consistent). So Godel's second incompleteness theorem can be written in provability logic as ◊⊤→¬◻(◊⊤). Using the methods of part one we can show that ¬◻(◊⊤)→◊⊤ is also valid. So Dia T is the solution to p <-> f0 p:
> test0 = valid $ let p = Dia T in p <-> f0 p

Considered as a data structure, notice how f0 places its argument inside a Box. Consider just the functions Prop -> Prop that (1) simply copy p into various places in their return value (i.e. that don't analyse p in any way) and that (2) put every copy of the argument somewhere inside a Box (possibly deeply nested). We'll call these 'modalizing' functions. Then there is an amazing result due to Solovay:
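Condition (2) can be checked syntactically. Here is a self-contained sketch using my own cut-down `Prop` type, which may differ in detail from the one defined in part one of this series: substitute a marker variable for the argument and verify that no occurrence of the marker escapes a Box. (This assumes condition (1) already holds; the trick only tracks copies, it can't detect a function that inspects its argument.)

```haskell
-- A cut-down proposition type for illustration only; the real Prop
-- from earlier in this series has more connectives.
data Prop = T | F | Var String
          | Neg Prop
          | Prop :\/: Prop
          | Box Prop
  deriving Show

-- Does the named variable occur anywhere outside a Box?
-- Everything underneath a Box is safe, however deeply nested.
exposed :: String -> Prop -> Bool
exposed v (Var w)    = v == w
exposed v (Neg p)    = exposed v p
exposed v (p :\/: q) = exposed v p || exposed v q
exposed _ (Box _)    = False
exposed _ _          = False

-- A copy-only function is modalizing when the marker never escapes a Box.
modalizing :: (Prop -> Prop) -> Bool
modalizing f = not (exposed "#" (f (Var "#")))
```

For example, `modalizing (\p -> Neg (Box p))` is True, matching f0 above, while `modalizing (\p -> p :\/: Box p)` is False because of the bare p in the first disjunct.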
The Fixed Point Theorem
If f is modalizing then it has a unique fixed point in provability logic. (Unique in the sense that if p and q are both fixed points of f, then p <-> q is valid.)
Our goal in the next post in this series will be to implement an algorithm to find these fixed points.
Think about what this means. We can make up any old function that has this property and find a corresponding Godel-like theorem. For example, pick
> f1 p = Box p --> Box (Neg p)

This has a fixed point p = Box (Box F) --> Box F, which we can test with
> test1 = valid $ let p = Box (Box F) --> Box F in p <-> f1 p

Or in English (quoting verbatim from Boolos): "a sentence is equivalent to the assertion that it is disprovable-if-provable if and only if it is equivalent to the assertion that arithmetic is inconsistent if the inconsistency of arithmetic is provable". (That's a good argument for using provability logic instead of English!) These are the generalisations of Godel's theorem in my title.
But there are many other important consequences. The solution to this fixed point equation doesn't involve any self-reference (because we can't do self-reference in provability logic as we have no way to talk about Godel numbers). Self-reference is less essential than you might think - we can solve these equations without using it.
At first the fixed point theorem seems like a positive thing: we can crank out as many of these theorems as we like. But there's also a flip side: it says that once we've proved Lob's theorem, there aren't any more techniques we need in order to construct fixed points. So there aren't any funky new variations on what Godel did, waiting to be discovered, that will give us a bunch of new fixed points. We have them all already.
In order to find these fixed points I'm going to use the runTableau function we wrote last time. But that's for the next article in the series. In this post I want to put it to a simpler use:
Automatically Drawing Tableau Diagrams
First we'll need a very simple library of functions for drawing ASCII art diagrams. We'll represent a drawing simply as a list of ASCII strings, one for each row. Think of diagrams as forming a box. The width of a box is the length of its longest row:
> width box = foldr max 0 (map length box)

Sometimes we'll want to pad the rows so that they all have the same length:
> pad len b a = a ++ replicate (len - length a) b

The aside function allows us to 'typeset' one box next to another:
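One detail worth noting, sketched here with the definition repeated so the snippet stands alone: pad only ever extends, so rows that are already long enough come back unchanged rather than truncated.

```haskell
-- The article's definition, repeated so this compiles on its own.
pad :: Int -> a -> [a] -> [a]
pad len b a = a ++ replicate (len - length a) b

-- pad 5 '.' "ab"  evaluates to "ab..."
-- pad 2 '.' "abc" evaluates to "abc": replicate of a negative
-- count is empty, so over-long rows are left alone.
```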
> aside a b = let w   = width a
>                 h   = max (length a) (length b)
>                 a'  = pad h [] a
>                 b'  = pad h [] b
>                 a'' = map (pad w ' ') a'
>             in zipWith (++) a'' b'

We can draw nice frames around our boxes:
> frame a = let w     = width a
>               h     = length a
>               strut = replicate h "|"
>               rule  = "+" ++ replicate w '-' ++ "+"
>           in [rule] ++ strut `aside` a `aside` strut ++ [rule]

And the rule for disjunctions requires us to draw side-by-side boxes with a vertical line between them:
> alt a b = let h     = max (length a) (length b)
>               strut = replicate h " | "
>           in a `aside` strut `aside` b

Generating diagrams is now a matter of generating a diagram for each of the 'hooks' in our algorithm:
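Before wiring these into the tableau machinery, here's a quick standalone check that the combinators fit together. The definitions mirror the article's code (repeated so this snippet compiles on its own); the example boxes are arbitrary:

```haskell
-- The drawing combinators from the article, repeated for a standalone demo.
width box = foldr max 0 (map length box)
pad len b a = a ++ replicate (len - length a) b

aside a b = let w   = width a
                h   = max (length a) (length b)
                a'  = pad h [] a
                b'  = pad h [] b
                a'' = map (pad w ' ') a'
            in zipWith (++) a'' b'

frame a = let w     = width a
              h     = length a
              strut = replicate h "|"
              rule  = "+" ++ replicate w '-' ++ "+"
          in [rule] ++ strut `aside` a `aside` strut ++ [rule]

alt a b = let h     = max (length a) (length b)
              strut = replicate h " | "
          in a `aside` strut `aside` b

-- Frame two alternative branches side by side:
main :: IO ()
main = mapM_ putStrLn (frame (["p", "Box p"] `alt` ["Neg p"]))
-- prints:
-- +-------------+
-- |p     | Neg p|
-- |Box p |      |
-- +-------------+
```

Note how `aside` first pads both boxes to a common height with empty rows, then pads each row of the left box to a common width before gluing rows together; everything else is built from that.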
> diagramRules = TableauRules {

We'll modify the rules so that instead of simply returning a Bool to indicate closure we return a pair. The first element is a list of strings representing the rows of the ASCII art, while the second element plays the same role as before:
>     closes = snd,

These are the functions that handle discovery of a contradiction. They draw an "X" in a circle to indicate this:
>     foundF = \_ -> (["(X)"], True),
>     foundContradiction = \_ -> (["(X)"], True),
>     open = \_ -> ([], False),

We deal with conjunctions simply by listing the two subpropositions that went into them:
>     conjRule = \a b -> first ([show a, show b] ++),

For disjunctions we draw the diagrams:
>     disjRule = \p a b (tl, lb) (tr, rb) ->
>         (([show a] ++ tl) `alt` ([show b] ++ tr), lb && rb),
>     doubleNegation = \q -> first ([show q] ++),
>     combineWorlds = \(t0, b0) (t1, b1) -> (t0 ++ t1, b0 || b1),
>     processWorld = \_ t -> t,

When we create a box for a new world we 'import' some propositions from the parent world. We mark these with an asterisk:
>     tableau = \br (t, b) -> (frame (map (("* " ++) . show) br ++ map (" " ++) t), b)
>     }

> diagram p = do
>     let (t, b) = runTableau diagramRules [Neg p]
>     mapM_ putStrLn t
>     print b

Now we can prove that we really do have a fixed point of f1:
> diagram1 = diagram $ let p = Box (Box F) --> Box F in p <-> f1 p

You may want to pick a really small font and stretch your terminal wide before you run that.
See how every box ends in an (X), just as we want.
Note
To run the code above, just append this article to the previous one to make a single literate Haskell program.
Comment: Is there a topological reason for the fixed points to exist?

Reply: Andrej, the fixed point theorem seems analogous to topological fixed point theorems. And it follows from the Beth definability theorem, which seems analogous to the implicit function theorem. But the various sources I have referred to make no mention of the analogies.

Reply: Andrej, but maybe there's a good question for mathoverflow there.