Haskell types and total functions, at least for a suitable subset of Haskell, form a category, with types as the objects and functions as the arrows. Haskell code is usually full of expressions whose types don't correspond to arrows (e.g. `1::Int`). But in category theory we can only really talk of objects and arrows, not elements of objects. Many categories are made up of objects that have no reasonable notion of an element. By writing code in pointfree style we can eliminate reference to individual elements and talk only about arrows.

For example, suppose `f :: A -> B` and `g :: B -> C`. Then we can define `h :: A -> C` in a couple of different ways. Pointfree:

```haskell
h = g . f
```

or pointful:

```haskell
h x = let y = f x
          z = g y
      in z
```
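In the Haskell category, where points are real, the two definitions agree. Here is a minimal runnable sketch, with hypothetical concrete types standing in for the abstract A, B and C:

```haskell
-- Hypothetical stand-ins: A = Int, B = Bool, C = String.
f :: Int -> Bool
f = even                              -- plays the role of f :: A -> B

g :: Bool -> String
g b = if b then "even" else "odd"     -- plays the role of g :: B -> C

hPointfree :: Int -> String
hPointfree = g . f

hPointful :: Int -> String
hPointful x = let y = f x
                  z = g y
              in z

main :: IO ()
main = print (map hPointfree [1 .. 4] == map hPointful [1 .. 4])
```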

For examples as simple as this it's not hard to write code that will translate between one style and the other. But the former definition has an advantage: it works in any category. We'll consider a very simple example. Let R be the category of real numbers, where there is an arrow from x to y if x≤y. We can think of f and g as arrows in this category. If we know A≤B and B≤C then the first definition above tells us how to construct an arrow from A to C, and hence proves that A≤C. The second definition makes no sense because it relies on the notion of x as an element of A and uses the functions f and g to construct elements of B and C. These words hold no meaning in the category R, where the arrows aren't functions and the objects aren't containers of elements.

Except that's not quite true.

Because there is a scheme for translating the second, pointful definition to pointfree form, the second definition does in fact provide a proof that A≤C. We just have to bear in mind that the proof needs to be translated into pointfree form first. In fact, we can happily spend our day using pointful style to generate proofs about R, as long as at the end of the day we translate our proofs to pointfree notation. Haskell programmers know that it's often much easier to write programs in pointful style, so it seems reasonable to guess that there are many proofs that are easier to write in pointful style even though they can't be interpreted literally. Philosophically this is a bit weird. As long as we restrict ourselves to chains of reasoning that can be translated, we can use intuitions about elements of objects to make valid deductions about domains where these notions make no sense.

Part I of Lambek and Scott is about the correct pointful language to use when talking about cartesian closed categories (CCCs). They use a form of typed lambda calculus. Every arrow in a cartesian closed category can be written as a pointful lambda expression. Even though there may be no meaningful way to interpret the 'points' individually, every lambda abstraction gives an arrow in a cartesian closed category that can be built using the standard parts that come with a CCC, and vice versa. The pointful definition of

`h` above is an example. Here's a (very) slightly less trivial example. Given `f :: A -> B` and `g :: C -> D` we can define `h :: (A,C) -> Either B D` as

```haskell
h w = let x = fst w
          y = f x
          z = Left y
      in z
```

Again, that code can be translated into pointfree form using only the standard parts available in a cartesian closed category. Actually, we only need a category with products and coproducts for this example. So consider a lattice. The above code can be translated into a proof that A∩C≤B∪D. (I'm using cap and cup for meet and join.) Again, it makes no sense to talk of the objects of a general lattice as containers of elements, and yet after translation to pointfree notation we have a valid proof. If you can restrict yourself to a suitable subset of set theory then your proofs involving elements of objects can carry over to many other categories.
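The pointfree translation itself is short. A minimal runnable sketch, with hypothetical concrete types (taking `f :: A -> B` and `g :: C -> D` so the types line up):

```haskell
-- Hypothetical stand-ins: A = Int, B = Bool, C = Char, D = String.
f :: Int -> Bool          -- f :: A -> B
f = even

g :: Char -> String       -- g :: C -> D
g c = [c]

-- Built only from the standard parts: fst, Left and composition.
h :: (Int, Char) -> Either Bool String
h = Left . f . fst

-- The alternative arrow, through the second factor.
h' :: (Int, Char) -> Either Bool String
h' = Right . g . snd

main :: IO ()
main = print (h (3, 'x'), h' (3, 'x'))
```

In lattice terms, `h` is the proof A∩C ≤ A ≤ B ≤ B∪D, and `h'` the proof through C and D.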

Part II of Lambek and Scott is about taking this idea to extremes. It's about categories known as toposes. A topos is a category that is even more like the category of sets than a CCC. It's still general enough that there are many kinds of toposes, but you can use a sizable portion of first order logic and ZF to make deductions about them. Again, the literal notion of membership of the objects of a topos might make no sense, but the proofs have a translation to pointfree notation. In fact, it's possible to write entire papers in what looks like conventional set theory language, and have them be valid for other toposes. Anders Kock, for example, writes such papers. Chris Isham has been arguing that topos theory is the correct framework for physics. If you interpret your propositions as being in the category Set then you get classical physics. But those same propositions can be interpreted in other categories, such as one for quantum mechanics, giving a way to use and extend classical language to reason about quantum systems. This set theory-like language is known as the "internal language" of a topos.

Anyway, I'm interested in the notion that Haskell do-notation provides another kind of pointful language that can be used to reason about situations where points don't seem at first to make sense. Consider the lattice of subsets of a topological space, ordered by inclusion. Let Cl(U) be the closure of U. Cl satisfies these properties:

X⊂Y implies Cl(X)⊂Cl(Y)

X⊂Cl(X)

and

Cl(Cl(X))⊂Cl(X)

Look familiar? If we make this lattice into a category, with arrows being inclusions, then the first property states that Cl is a functor and the next two say that Cl is a monad. In fact, monads are a kind of generalised closure. So now suppose we're given A⊂Cl(B) and B⊂Cl(C) and wish to prove that A⊂Cl(C). We can rephrase this by saying that if

`f :: A -> Cl B` and `g :: B -> Cl C` we need an arrow `h :: A -> Cl C`. We can write one like this:

```haskell
h x = do
  y <- f x
  z <- g y
  return z
```

Now it's tempting to interpret the inclusions as functions, with `y <- f x` saying that `y` is the image of `x` under the inclusion. (I don’t know about you, but when I write `x <- getChar` I think to myself “`x` is the return value from calling `getChar`”.) But that doesn't really work, because the type of `f x` is `Cl B` but `y` is of type `B`. On the other hand, we can radically reinterpret the above as something like this: when arguing about chains of inclusions of subsets of a topological space, as long as at the end of the chain of inclusions you always end up in the closure of some subset, you're allowed to cheat and nudge a generic point in the closure of a subset back into the original subset. This is exactly parallel to the way do-notation seems to allow us to extract elements out of monadic objects as long as at the end of the do-block we always return an element of monadic type. I'm sure that with a bit of work we could produce a rigorous metatheorem from this. I expect we can also produce something similar for comonads.
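The do-block for `h` above is just Kleisli composition, `f >=> g` from `Control.Monad`, which is the pointfree form that makes sense in any category with a monad. A minimal runnable sketch in the Maybe monad, with hypothetical `f` and `g`:

```haskell
import Control.Monad ((>=>))

-- Hypothetical Kleisli arrows in the Maybe monad.
f :: Int -> Maybe Int
f x = if x > 0 then Just (x - 1) else Nothing

g :: Int -> Maybe Int
g y = if even y then Just (y * 2) else Nothing

-- Pointful: the do-block appears to "nudge" a point of Maybe Int
-- back to a point of Int at each step.
h :: Int -> Maybe Int
h x = do
  y <- f x
  z <- g y
  return z

-- Pointfree: the same arrow as a Kleisli composite.
h' :: Int -> Maybe Int
h' = f >=> g

main :: IO ()
main = print (map h [0 .. 3] == map h' [0 .. 3])
```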

Anyway, the moral is that when working with categories with monads there may be some interesting and unusual ways to reason. The example of the lattice of subsets is fairly trivial but I'm sure there are other interesting examples. I also expect there's a nice connection with modal logic. I now think of Haskell do-notation as the "internal language" of a category with monads.
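The three closure laws above are easy to check concretely. Here's a minimal sketch using a hypothetical closure operator, closing a finite set of integers under negation, in place of topological closure:

```haskell
import qualified Data.Set as Set

-- A hypothetical closure operator: close a finite set of Ints
-- under negation. Monotone, extensive and idempotent, so it
-- satisfies the same three laws as topological closure.
close :: Set.Set Int -> Set.Set Int
close s = Set.union s (Set.map negate s)

main :: IO ()
main = do
  let x = Set.fromList [1, 2]
      y = Set.fromList [1, 2, 3]
  -- functor: X ⊂ Y implies Cl(X) ⊂ Cl(Y)
  print (close x `Set.isSubsetOf` close y)
  -- unit: X ⊂ Cl(X)
  print (x `Set.isSubsetOf` close x)
  -- join: Cl(Cl(X)) ⊂ Cl(X)
  print (close (close x) `Set.isSubsetOf` close x)
```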

Update: I left out the crucial sentences I meant to write. It's easy to see do-notation as a Haskell-specific trick for making things like IO-heavy code look like traditional procedural code. But comparison with the theory in Lambek and Scott, and topos theory in general, makes it clear that do-notation is a member of a family of related languages.

## 9 comments:

In response to your last paragraph: do-notation is a notational variant of (one of*) Moggi's monadic metalanguage(s), which is indeed precisely the internal language of a category with monads. And monads are related to lax logic. See A Judgemental Reconstruction of Modal Logic, which also refers to (a fuller version of) Moggi's monadic metalanguage.

* The "Simple Metalanguage" in Notions of computation and monads.

One case where a pointful notation is really more useful is when dealing with maps to and from tensor products. If you have a map D from V to V tensor V, would you rather write

`(D tensor Id) (D v)`

or

`let (x,y) = D v; (a,b) = D x in (a,b,y)`

?

(Actually, what I really find most helpful is neither of these two, but rather a graphical notation with graphs.)

Dylan, that's exactly why I like to use the vector space monad! I guess "Sweedler notation" is an example of what I'm talking about too. But what motivated me to write this post were the examples where the arrows aren't functions.

Derek,

Should have known you'd know a relevant paper! I'll check it out when I get a chance.

Dan, I just wanted to mention that arrows in any (small enough) category can be seen as actual functions, or families of functions. The Grothendieck construction embeds (non-fully) any small category into Sets, which turns morphisms into functions, while (the more reasonable) Yoneda embedding embeds (fully) any small category into presheaves, which turns morphisms into families of functions (natural transformations). I am sure you could write a nice Haskell post about this, if you haven't yet :-)

Andrej,

Even without any category theory it's easy to assign sets and functions for the example of posets. But of course the elements you get aren't necessarily the ones you're imagining as you write `x :: a`.

What would a category with commutative monads look like? I don't know how to introduce this particular commutativity into the lattice of subsets of a topological space.

The meet-join example doesn't make sense as written. Did you mean something like the following? "Given f :: A -> B and g :: C -> D we can define h :: (A,C) -> Either B D as

```haskell
h w = let x = fst w
          y = f x
          z = Left y
      in z
```

"

i.e., `h = Left . f . fst`, or alternately `h = Right . g . snd`.

When I learned category theory, a subobject of d was defined as an equivalence class of monics with codomain d. This definition via equivalence classes is independent of the choice of representative - is this what's going on with your 'abuse of notation'?
