I've run 3 miles, the Thanksgiving turkey's in the oven, now I just need to do something impossible before I can have breakfast.

As I've mentioned in the past, sometimes you can write useful Haskell code merely by writing something that type checks successfully. Often there's only one way to write the code to have the correct type. Going one step further: the Curry-Howard isomorphism says that logical propositions correspond to types. So here's a way to write code: pick a theorem, find the corresponding type, and find a function of that type.

One area that seems fruitful for this approach is modal logic. The axioms of various modal logics correspond to the types of familiar objects in Haskell. For example the distribution axiom:

□(a→b)→(□a→□b)

looks just like the type of

`ap :: (Monad m) => m (a -> b) -> m a -> m b`.
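For instance, reading □ as `Maybe` (my choice purely for illustration), the distribution axiom becomes ordinary application of a boxed function to a boxed argument:

```haskell
import Control.Monad (ap)

-- Distribution axiom with Box read as Maybe: applying a "boxed"
-- function to a "boxed" argument yields a "boxed" result.
main :: IO ()
main = do
  print (Just (+ 1) `ap` Just 41)                      -- Just 42
  print ((Nothing :: Maybe (Int -> Int)) `ap` Just 41) -- Nothing
```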

So I'm looking at the books on my shelf and there's The Logic of Provability by Boolos. It's about a kind of modal logic called provability logic in which □a roughly means "a is provable". One of the axioms of this logic is a theorem known as Löb's theorem.

Before getting onto Löb's Theorem, I should mention Curry's Paradox. (Before today I didn't know Haskell Curry had a paradox associated with his name - though I've met the paradox itself before as it got me into trouble at (high) school once...) It goes like this:

Let S be the proposition "If S is true, Santa Claus exists".

Suppose

S is true.

Then

If S is true, Santa Claus exists.

So, still assuming our hypothesis, we have

S is true and if S is true, Santa Claus exists.

And hence

Santa Claus exists.

In other words, assuming S is true, it follows that Santa Claus exists. In other words, we have proved

If S is true then Santa Claus exists

regardless of the hypothesis.

But that's just a restatement of S so we have proved

S is true

and hence that

Santa Claus exists.

Fortunately we can't turn this into a rigorous mathematical proof, though we can try and see where it fails. In order to talk about whether or not a proposition is true, we have to use some kind of Gödel numbering scheme to turn propositions into numbers. Then, if a proposition has number g, we need a function True so that True(g)=1 if g is the Gödel number of something true and 0 otherwise. But because of Tarski's proof of the indefinability of truth, we can't do this (to be honest, the argument above should be enough to convince you of this, unless you believe in Santa).

On the other hand, we can replace True with Provable, just as in Gödel's incompleteness theorems, because provability is just a statement about deriving strings from strings using rewrite rules. If we do this, the above argument (after some work) turns into a valid proof - in fact, a proof of Löb's theorem. Informally it says that if it is provable that "P is provable implies P", then P is provable. We did something similar above with P="Santa Claus exists". In other words

□(□P→P)→□P.

So I'm going to take that as my theorem from which I'll derive a type. But what should □ become in Haskell? Let's take the easy option: we'll defer that decision until later and assume as little as possible, representing □ by a type that is a Functor. The defining property of a functor corresponds to the theorem □(a→b)→□a→□b.

> loeb :: Functor a => a (a x -> x) -> a x

So now to actually find an implementation of this.

Suppose a is some kind of container. The argument of `loeb` is a container of functions. They are in fact functions that act on the return value of `loeb`, so we have a convenient object for these functions to act on: we feed the return value of `loeb` back into each of the elements of the argument in turn. Haskell, being a lazy language, doesn't mind that sort of thing. So here's a possible implementation:

> loeb x = fmap (\a -> a (loeb x)) x

Informally you can think of it like this: the parts are all functions of the whole, and `loeb` resolves the circularity. Anyway, when I wrote that, I had no idea what `loeb` did.

So here's one of the first examples I wrote to find out:

> test1 = [length,\x -> x!!0]

`loeb test1` is `[2,2]`. We have set the second element to equal the first one, and the first one is the length of the list. Even though element 1 depends on element 0, which in turn depends on the size of the entire list containing both of them, this evaluates fine. Note the neat way the second element refers to something outside of itself: the previous element in the list. To me this suggests the way cells in a spreadsheet refer to other cells. So with that in mind, here is a definition I found on the web. (I'm sorry, I want to credit the author but I can't find the web site again):
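You can check this with a standalone sketch (the `main` wrapper and type signature are mine):

```haskell
-- Standalone version of loeb and test1 from above.
loeb :: Functor f => f (f a -> a) -> f a
loeb x = fmap (\a -> a (loeb x)) x

test1 :: [[Int] -> Int]
test1 = [length, \x -> x !! 0]

main :: IO ()
main = print (loeb test1)  -- [2,2]
```

Laziness is doing the work: `length` never forces the list's elements, only its spine, so the knot ties successfully.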

> instance Show (x -> a)

> instance Eq (x -> a)

> instance (Num a,Eq a) => Num (x -> a) where
>   fromInteger = const . fromInteger
>   f + g = \x -> f x + g x
>   f * g = \x -> f x * g x
>   negate = (negate .)
>   abs = (abs .)
>   signum = (signum .)

With these definitions we can add, multiply and negate Num-valued functions. For example:

> f x = x*x

> g x = 2*x+1

> test2 = (f+g) 3
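Here's the same trick as a standalone sketch you can compile (I drop the `Eq` constraint, which modern GHC's `Num` no longer requires, and add a `main` for checking):

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- Pointwise Num instance for functions, as in the post.
instance Num a => Num (x -> a) where
  fromInteger = const . fromInteger
  f + g = \x -> f x + g x
  f * g = \x -> f x * g x
  negate = (negate .)
  abs    = (abs .)
  signum = (signum .)

f, g :: Int -> Int
f x = x * x
g x = 2 * x + 1

main :: IO ()
main = print ((f + g) 3)  -- 3*3 + (2*3+1) = 16
```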

Armed with that we can define something ambitious like the following:

> test3 = [ (!!5), 3, (!!0)+(!!1), (!!2)*2, sum . take 3, 17]

Think of it as a spreadsheet with `(!!n)` being a reference to cell number `n`. Note the way it has forward and backward references. And what kind of spreadsheet would it be without the `sum` function? To evaluate the spreadsheet we just use `loeb test3`. So `loeb` is the spreadsheet evaluation function.
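For reference, here is what the evaluation produces, in a standalone sketch that writes the constant cells with `const` so the Num instance for functions isn't needed (the name `sheet` is mine):

```haskell
loeb :: Functor f => f (f a -> a) -> f a
loeb x = fmap (\a -> a (loeb x)) x

-- The same spreadsheet as test3, with constants spelled out.
sheet :: [[Int] -> Int]
sheet = [ (!! 5)                  -- cell 0: copy of cell 5
        , const 3                 -- cell 1: constant 3
        , \r -> r !! 0 + r !! 1   -- cell 2: cell 0 + cell 1
        , \r -> r !! 2 * 2        -- cell 3: cell 2 doubled
        , sum . take 3            -- cell 4: sum of cells 0..2
        , const 17                -- cell 5: constant 17
        ]

main :: IO ()
main = print (loeb sheet)  -- [17,3,20,40,40,17]
```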

Now don't get the wrong idea. I'm not claiming that there's a deep connection between Löb's theorem and spreadsheet evaluation (though there is a vague conceptual similarity, as both rely on a notion of borrowing against the future). Provability logic (as defined by Boolos) is classical, not intuitionistic. But I definitely found it interesting the way I was led directly to this function by some abstract nonsense.

Anyway, happy Thanksgiving!

PS There are other uses for `loeb` too. Check out this implementation of factorial, which shows `loeb` to be usable as a curious monadic variant of the Y combinator:

> fact n = loeb fact' n where
>   fact' 0 = return 1
>   fact' n = do
>     f <- ask
>     return $ n*f (n-1)
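This relies on the reader monad for functions, where `return` is `const` and `ask` (from `Control.Monad.Reader`) is just `id`. Desugaring it that way gives a standalone sketch:

```haskell
loeb :: Functor f => f (f a -> a) -> f a
loeb x = fmap (\a -> a (loeb x)) x

-- fact' desugared: for the function monad, return = const
-- and ask = id, so "do f <- ask; return (n * f (n-1))"
-- becomes \f -> n * f (n-1).
fact :: Integer -> Integer
fact = loeb fact'
  where
    fact' :: Integer -> (Integer -> Integer) -> Integer
    fact' 0 = const 1
    fact' n = \f -> n * f (n - 1)

main :: IO ()
main = print (fact 5)  -- 120
```

Note that for the functor `(->) r`, `fmap` is composition, so `loeb fact'` unfolds to `\n -> fact' n (loeb fact')`, which is exactly the Y-combinator pattern.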

## 26 comments:

Modal logics usually have these rules:

f (a -> b) -> f a -> f b

(This corresponds to the Applicative class, which may one day be made a superclass of Monad.)

f a -> a

Monads do not in general satisfy this. Perhaps comonads or something?

Eeep! The defining property of a functor is this:

fmap :: (a -> b) -> f a -> f b

and not

ap :: f (a -> b) -> f a -> f b

fmap is definitely not provable in any modal logic.

ashley,

As I say, I'm not sure there's a deep connection with Loeb's theorem! (But there is surely some link.)

ashley,

OK, I have some spare moments now. I've spent quite a bit of time tinkering around with various axioms of modal logic but as you notice, things have a habit of not quite working out the way you want. There *is* a nice interpretation of □ as something like C/C++'s 'const'. There's a bunch of papers about it and I said something about it here. I've also seen some papers that use the axiom GL (ie. Loeb's theorem) as a basis for describing recursive types.

I've been playing around with the axioms to see if either □ or ◊ behave like some well-known Haskell class.

Firstly, neither of them are Functors, since you cannot prove either of these even in S5:

(a→b)→(□a→□b)

(a→b)→(◊a→◊b)

The second is less obvious than the first, but consider temporal logic: just because 'a' implies 'b' now, and 'a' is sometimes true, it doesn't mean that 'b' is ever true.

This is unfortunate, since axioms T and 4 are suggestive of Monads for ◊:

T: a → ◊a

4: ◊◊a → ◊a

K on the other hand doesn't look like anything I've seen in Haskell:

K: (◊a → ◊b) → ◊(a → b)

Interesting application for a compiler. The other approach would be to type a constant t as "t" and code that calculates t as "Code t". Less like modal logic, though.

Is your way of proving Löb's theorem anything like the regular proof? Your code uses recursion in a way that looks dangerously like a circular proof.

Lennart,

In the special case where f is the identity functor, loeb is precisely the usual Y combinator, so it's about as circular as you can get. I just used the type corresponding to Löb's theorem as inspiration for a function definition of that type. However, I must read this paper which makes proper use of a complete provability logic.

Do you know Benton et al.'s paper?

matthias,

Just had a look at that paper, which I didn't know. It looks like it addresses stuff I've been tossing about in my head for a little while. Not being in academia, I've no idea what are the good papers to read unless they get posted on Lambda the Ultimate. :-)

I realise this is now something of an old post but hopefully you're still picking up the comments. Just a quick question.

Your test3 doesn't seem to be a monomorphic list (and therefore ill-typed in Haskell), is this deliberate? Or were the 3 and 17 meant to be const 3 and const 17 respectively?

Great mind-boggling stuff, by the way :)

Disregard my last comment. After loading up the post in GHCi, I see why it works. The Num definition for functions allows 3 and 17 to be functions which are basically const 3 and const 17.

David,

Num's sneaky!

Modal logics usually have these rules:

f (a -> b) -> f a -> f b

(This corresponds to the Applicative class, which may one day be made a superclass of Monad.)

f a -> a

Monads do not in general satisfy this. Perhaps comonads or something?

Yes, they are connected to comonads.

Two functions

extract :: w a -> a

and

extend :: (w a -> b) -> w a -> w b

which follow these laws:

extend extract == id

extract . extend f == f

extend f . extend g == extend (f . extend g)

define a comonad.

You have a neat derivation of "cfix" there!

I bumped into this once myself when attempting to improve on the Essence of Dataflow semantics. I eventually found this post by David Menendez.

cfix :: Comonad d => d (d a -> a) -> a

which is dual to

mfix :: Monad t => (a -> t a) -> t a

We know mfix needs a special Haskell definition for each monad, but cfix does not: it's applicable to all comonads.
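A minimal standalone sketch of the class and `cfix` (the pair instance is my own illustrative choice, playing the role of an "environment" comonad):

```haskell
-- Minimal Comonad class, as described in the comment above.
class Functor w => Comonad w where
  extract :: w a -> a
  extend  :: (w a -> b) -> w a -> w b

-- The comonadic fixpoint: one definition works for every comonad.
cfix :: Comonad w => w (w a -> a) -> a
cfix w = extract w (extend cfix w)

-- Illustrative instance: a value paired with an environment.
instance Comonad ((,) e) where
  extract (_, a) = a
  extend f p@(e, _) = (e, f p)

main :: IO ()
main = print (cfix (5 :: Int, \(e, _) -> e + 1))  -- 6
```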

Usually, modal box matches a comonad and modal diamond matches a monad. S4's theorems support the functions:

a -> Dia a

Dia (Dia a) -> Dia a

Box a -> a

Box a -> Box (Box a)

which are (co)unit and (co)join of the (co)monad.

Along the lines of your spreadsheet analogy, you're not alone! I've written up a "non-empty graph" data structure as a comonad with the intent of making a cool spreadsheet semantics in Haskell. I need to dust that off someday...

In regards to fmap not being provable in modal logic, there might be a nuance here. In Haskell, the type A matches Box A from a corresponding modal logic: values in Haskell are immutable and thus "necessary" (i.e. Haskell always assumes the trivial Identity comonad). So the type of fmap (for a monad) actually corresponds to:

fmap :: Box (a -> b) -> Dia a -> Dia b

which is a modal logic theorem (called Dia-K).

Satoshi Kobayashi's "Monad as Modality" is a good reference (but sometimes hard to obtain). In regards to Yakeley's Code modality, monads seem to play a role in staged computation; I don't remember a reference though. Template Haskell maybe?

- Nick Frisby

Nick,

Well spotted! I looked hard on the web for something with the same signature as loeb but couldn't find anything. I haven't yet fully proved to myself that loeb and cfix are the same thing. They look like they do the same thing but it seems curious that I'm able to define loeb for any Functor, not just for a Comonad. When I have some time I'll investigate further - comonads being a bit of a hobby of mine these days.

Nick,

If you're still reading, I just worked out a tiny bit more detail here.

Do you know my opinion of a Num instance for functions?

Henning,

In both the 'humour' and 'proposal' categories!

For the interested, Eliezer Yudkowsky has a 'Cartoon Guide' to Löb's theorem.

If the box is represented in Haskell as a monad, what will be the representation of the diamond?

I would like to know how the diamond operator should be represented in Haskell :)

Noting for posterity, as this is where folks go when they seek the definition for this combinator:

This version improves the sharing slightly.

`loeb x = xs where xs = fmap ($xs) x`
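The two definitions agree; the difference is that here the result `xs` is computed once and shared among all the cells instead of being recomputed for each one. A quick standalone check:

```haskell
-- The sharing version from the comment above: xs is built once
-- and each function in x is applied to that shared result.
loeb :: Functor f => f (f a -> a) -> f a
loeb x = xs where xs = fmap ($ xs) x

main :: IO ()
main = print (loeb [length, \r -> r !! 0])  -- [2,2]
```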

After reading Ashley Yakeley's comments, I began to wonder what it would take to have a "proper" implementation that relied on the right assumptions. I ended up writing this post: A proof of Löb's theorem in Haskell.

Adding this link here for posterity, since many who come across this page will be interested in going deeper:

Kenny Foner gave a talk on how to build a faster version of `loeb` at Boston Haskell:

https://www.youtube.com/watch?v=F7F-BzOB670
