I've run 3 miles, the Thanksgiving turkey's in the oven, now I just need to do something impossible before I can have breakfast.
As I've mentioned in the past, sometimes you can write useful Haskell code merely by writing something that type checks successfully. Often there's only one way to write the code to have the correct type. Going one step further: the Curry-Howard isomorphism says that logical propositions correspond to types. So here's a way to write code: pick a theorem, find the corresponding type, and find a function of that type.
One area that seems fruitful for this approach is modal logic. The axioms of various modal logics correspond to the types of familiar objects in Haskell. For example the distribution axiom,

□(a → b) → (□a → □b),

looks just like the type of ap :: (Monad m) => m (a -> b) -> m a -> m b.
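To see the resemblance concretely, here's ap at work with □ read as Maybe (my example, not from the discussion above):

```haskell
import Control.Monad (ap)

-- ap :: Monad m => m (a -> b) -> m a -> m b
-- With m = Maybe this is the distribution axiom shape:
-- a "boxed" function applied to a "boxed" argument.
main :: IO ()
main = do
  print (Just (+1) `ap` Just (2 :: Int))         -- prints Just 3
  print (Just (+1) `ap` (Nothing :: Maybe Int))  -- prints Nothing
```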
So I'm looking at the books on my shelf and there's The Logic of Provability by Boolos. It's about a kind of modal logic called provability logic in which □a roughly means "a is provable". One of the axioms of this logic is a theorem known as Löb's theorem.
Before getting onto Löb's Theorem, I should mention Curry's Paradox. (Before today I didn't know Haskell Curry had a paradox associated with his name - though I've met the paradox itself before as it got me into trouble at (high) school once...) It goes like this:
Let S be the proposition "If S is true, Santa Claus exists".

Suppose S is true.
Then, since that is exactly what S asserts: if S is true, Santa Claus exists.
So, still assuming our hypothesis, we have both:
S is true, and if S is true, Santa Claus exists.
Hence Santa Claus exists.

In other words, assuming S is true, it follows that Santa Claus
exists. So, regardless of the hypothesis, we have proved

If S is true then Santa Claus exists.

But that's just a restatement of S, so we have proved

S is true

and hence that

Santa Claus exists.
Fortunately we can't turn this into a rigorous mathematical proof, though we can try, and see where it fails. In order to talk about whether or not a proposition is true we have to use some kind of Gödel numbering scheme to turn propositions into numbers, and then, if a proposition has number g, we need a function True so that True(g)=1 if g is the Gödel number of something true and 0 otherwise. But because of Tarski's proof of the indefinability of truth, we can't do this (to be honest, the argument above should be enough to convince you of this, unless you believe in Santa).

On the other hand, we can replace True with Provable, just like in Gödel's incompleteness theorems, because provability is just a statement about deriving strings from strings using rewrite rules. If we do this, the above argument (after some work) turns into a valid proof - in fact, a proof of Löb's theorem. Informally it says that if it is provable that "P is provable implies P" then P is provable. We did something similar above with P = "Santa Claus exists". In other words:

□(□P → P) → □P
So I'm going to take that as my theorem from which I'll derive a type. But what should □ become in Haskell? Let's take the easy option: we'll defer that decision until later and assume as little as possible. Let's represent □ by a type constructor that is a Functor. The defining property of a functor, fmap :: (a -> b) -> f a -> f b, corresponds to the theorem (a → b) → (□a → □b).
> loeb :: Functor a => a (a x -> x) -> a x
So now to actually find an implementation of this.
Suppose a is some kind of container. The argument of loeb is a container of functions - in fact, functions that act on the final result of loeb itself. So we have a convenient object for these functions to act on: we feed the return value of loeb back into each of the elements of the argument in turn. Haskell, being a lazy language, doesn't mind that sort of circularity. So here's a possible implementation:
> loeb x = fmap (\a -> a (loeb x)) x
Informally you can think of it like this: the parts are all functions of the whole and loeb resolves the circularity. Anyway, when I wrote that, I had no idea what loeb did.
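Incidentally, as written loeb recomputes loeb x afresh for every element. A small refinement - my tweak, not part of the definition above - ties the knot once so every function sees one shared copy of the result:

```haskell
-- Sharing version of loeb: the result xs is built once and every
-- function in the container is applied to that same xs.
loeb :: Functor f => f (f x -> x) -> f x
loeb x = xs where xs = fmap ($ xs) x

-- Element 0 copies element 1; element 1 is the constant 8.
main :: IO ()
main = print (loeb [(!! 1), const 8])  -- prints [8,8]
```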
So here's one of the first examples I wrote to find out:
> test1 = [length,\x -> x!!0]
loeb test1 is [2,2]. We have set the second element to equal the first one, and the first one is the length of the list. Even though element 1 depends on element 0, which in turn depends on the size of the entire list containing both of them, this evaluates fine. Note the neat way the second element refers to something outside of itself: the previous element in the list. To me this suggests the way cells in a spreadsheet refer to other cells. So with that in mind, here is a definition I found on the web. (I'm sorry, I want to credit the author but I can't find the web site again.)
> instance Show (x -> a)
> instance Eq (x -> a)

(These empty instances are only there to satisfy Num's superclass constraints in older versions of the Prelude; recent GHCs no longer have Show and Eq as superclasses of Num, so they can be dropped.)

> instance (Num a,Eq a) => Num (x -> a) where
>     fromInteger = const . fromInteger
>     f + g = \x -> f x + g x
>     f * g = \x -> f x * g x
>     negate = (negate .)
>     abs = (abs .)
>     signum = (signum .)
With these definitions we can add, multiply and negate Num valued functions. For example:
> f x = x*x
> g x = 2*x+1
> test2 = (f+g) 3  -- f 3 + g 3 = 9 + 7 = 16
Armed with that we can define something ambitious like the following:
> test3 = [ (!!5), 3, (!!0)+(!!1), (!!2)*2, sum . take 3, 17]
Think of it as a spreadsheet with (!!n) being a reference to cell number n. Note the way it has forward and backward references. And what kind of spreadsheet would it be without the sum function? To evaluate the spreadsheet we just use loeb test3. So loeb is the spreadsheet evaluation function.
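For concreteness, here's the whole spreadsheet assembled into one self-contained file (the main wrapper is mine, and I've dropped the empty Show/Eq instances since recent GHCs don't need them):

```haskell
loeb :: Functor f => f (f x -> x) -> f x
loeb x = fmap (\a -> a (loeb x)) x

-- The Num instance for functions from above, so that numeric
-- literals become constant cells and (+)/(*) work pointwise.
instance (Num a, Eq a) => Num (x -> a) where
  fromInteger = const . fromInteger
  f + g = \x -> f x + g x
  f * g = \x -> f x * g x
  negate = (negate .)
  abs = (abs .)
  signum = (signum .)

-- The spreadsheet: cell 0 copies cell 5, cell 2 adds cells 0 and 1,
-- cell 3 doubles cell 2, cell 4 sums the first three cells.
test3 :: [[Integer] -> Integer]
test3 = [(!!5), 3, (!!0)+(!!1), (!!2)*2, sum . take 3, 17]

main :: IO ()
main = print (loeb test3)  -- prints [17,3,20,40,40,17]
```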
Now don't get the wrong idea. I'm not claiming that there's a deep connection between Löb's theorem and spreadsheet evaluation (though there is a vague conceptual similarity, as both rely on a notion of borrowing against the future). Provability logic (as defined by Boolos) is classical, not intuitionistic. But I definitely found it interesting the way I was led directly to this function by some abstract nonsense.
Anyway, happy Thanksgiving!
PS There are other uses for loeb too. Check out this implementation of factorial, which shows loeb to be usable as a curious monadic variant of the Y combinator. (The ask is the reader monad's, from Control.Monad.Reader; in the ((->) r) monad it is simply id.)
> fact n = loeb fact' n where
>     fact' 0 = return 1
>     fact' n = do
>         f <- ask
>         return $ n*f (n-1)
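To round it off, here's a self-contained version you can run. I've inlined ask as id - which is what it amounts to in the ((->) r) monad - so no mtl import is needed:

```haskell
loeb :: Functor f => f (f x -> x) -> f x
loeb x = fmap (\a -> a (loeb x)) x

-- In the reader monad ((->) r), ask is just the identity.
ask :: r -> r
ask = id

-- fact' m is a reader computation whose environment is the
-- factorial function itself; loeb ties the recursive knot.
fact :: Integer -> Integer
fact n = loeb fact' n where
  fact' 0 = return 1
  fact' m = do
    f <- ask
    return $ m * f (m - 1)

main :: IO ()
main = print (fact 5)  -- prints 120
```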