> {-# LANGUAGE Rank2Types, ExistentialQuantification #-}

Pick a type. Let's call it `X`. Now consider the type

forall a . (a -> X,a) -> X

What can we say about elements of this type? There's an obvious example of such a function, `eval`, which applies the first element of the pair to the second:

> eval :: (a -> x, a) -> x
> eval (f, x) = f x

It's also easy to construct others. What property do they all share? There's a nice way to answer this question: we can use Janis Voigtländer's free theorem generator. After a bit of simplification, it tells us that if `f` is in

forall a . (a -> X,a) -> X

then for any compatible `g`

f (x,g y) = f (x . g,y)

We can apply any function we like to the second element of the pair, and we get exactly the same result by pre-composing it with the first element instead.
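We can check this concretely, with `X = Int`. The particular `x`, `g` and `y` below are throwaway choices of my own, not part of the post's code:

```haskell
-- A concrete check of the free theorem for eval.
eval :: (a -> x, a) -> x
eval (f, v) = f v

x :: Bool -> Int
x b = if b then 1 else 0

g :: Char -> Bool
g = (== 'a')

y :: Char
y = 'b'

lhs, rhs :: Int
lhs = eval (x, g y)     -- apply g to the second component first
rhs = eval (x . g, y)   -- or pre-compose it with the first component
-- lhs == rhs == 0
```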

This is a pretty strong property - it puts a big constraint on what `f` can do. It holds because any function of type `forall a . (a -> X,a) -> X` maps to a type `X` that makes no reference to the quantified `a`. It can't let any information about the type `a` escape. This means that `f` has to somehow eliminate an element of `a` and an element of `a -> X`. There's only one non-trivial way to do that: provide the former to the latter as a function argument. So `f` must factor as `h . eval` for some function `h`. The free theorem comes from the fact that `eval (x,g y) = eval (x . g,y)`. (Alternatively, `f` could be a constant function, but that still factors in the same way.)

Let's try something slightly more complex. Consider the type

forall a . (a -> X,[a]) -> X

Again we can ask the free theorem generator for a property. We're told that if `f` is of this type, then for any compatible `g`

f (x,map g y) = f (x . g,y)

The crucial point this time is that in order to eliminate the `[a]` we have to use `map` to apply the function of type `a -> X`. So any function of this type must factor through `map`, and that's why the free theorem follows.
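To illustrate (with an example of my own, not from the post), here's a function of this shape and its explicit factorisation through `map`:

```haskell
-- A function of type forall a . (a -> Int, [a]) -> Int. To do anything
-- with its [a] it must first map the a -> Int over it.
total :: (a -> Int, [a]) -> Int
total (f, xs) = sum (map f xs)

-- The same function factored as h . through: through does the map,
-- and h never sees the quantified type a.
through :: (a -> Int, [a]) -> [Int]
through (f, xs) = map f xs

h :: [Int] -> Int
h = sum
```

On any input the two agree; for example `total (length, ["ab", "c"])` and `(h . through) (length, ["ab", "c"])` both give `3`.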

Now it's time to put this in a more general framework. We can write both of the above examples as elements of type `forall a . S a a -> X`, where `S` is some type constructor. In both cases `S a` is a functor. But `S` is also a cofunctor in its first argument. Intuitively this says that `S a b` contains or produces `b`'s but consumes `a`'s. Here's a class to express this:

> class Difunctor h where
>     lmap :: (b -> a) -> h a c -> h b c
>     rmap :: (c -> d) -> h a c -> h a d

(I made up the word Difunctor.)

For (co)functoriality we insist on the laws

lmap id = id
rmap id = id
lmap (f . g) = lmap g . lmap f
rmap (f . g) = rmap f . rmap g

We can make both of the examples above instances:

> data Ex1 x a b = Ex1 (a -> x) b

> instance Difunctor (Ex1 x) where
>     lmap f (Ex1 g x) = Ex1 (g . f) x
>     rmap f (Ex1 g x) = Ex1 g (f x)

> data Ex2 x a b = Ex2 (a -> x) [b]

> instance Difunctor (Ex2 x) where
>     lmap f (Ex2 g x) = Ex2 (g . f) x
>     rmap f (Ex2 g x) = Ex2 g (map f x)

Our free theorems were essentially about this type:

> type DiNatural s x = forall a . (s a a -> x)

and in both cases the free theorem could be written as

f . lmap g == f . rmap g

This is a highly non-trivial result. Look at the preconditions for this theorem: they say something about each of the arguments to `s` individually. And yet we deduce that the two arguments are in fact intimately connected: we can apply `g` using either `lmap` or `rmap` and get the same result. This property is known as dinaturality. More precisely, if `f :: s a a -> X` for some `X`, where `s` is a difunctor, then `f` is dinatural if for all compatible `g`

f . lmap g == f . rmap g

We have this theorem: if `s` is an instance of the `Difunctor` class (obeying its laws) then for any `X`, elements of `DiNatural s X` are dinatural.

(Compare with the definition of naturality: `f . fmap g = fmap g . f`.)
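Here's a small standalone check of this for `Ex1`, with `lmap` and `rmap` specialised to it so the snippet compiles on its own; the concrete values are arbitrary choices of mine:

```haskell
data Ex1 x a b = Ex1 (a -> x) b

-- lmap and rmap for Ex1, exactly as in the instance above
lmapEx1 :: (b -> a) -> Ex1 x a c -> Ex1 x b c
lmapEx1 f (Ex1 g x) = Ex1 (g . f) x

rmapEx1 :: (c -> d) -> Ex1 x a c -> Ex1 x a d
rmapEx1 f (Ex1 g x) = Ex1 g (f x)

-- A dinatural: evaluation, with X = Int
f :: Ex1 Int a a -> Int
f (Ex1 h v) = h v

-- An arbitrary compatible g
g :: Bool -> Int
g b = if b then 10 else 20

lhs, rhs :: Int
lhs = (f . lmapEx1 g) (Ex1 (+ 1) True)  -- pre-composes (+ 1) with g
rhs = (f . rmapEx1 g) (Ex1 (+ 1) True)  -- applies g to True first
-- lhs == rhs == 11
```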

I'm tempted to call this "Dinaturality for Free" but there's already a paper by that name and I don't know what it's about. And note that I only claim it's a theorem: I don't actually know how to prove this uniformly for all difunctors, but I'd stake a beer on it. At least as long as we don't do any weird Haskell coding (so our free theorems remain valid) and as long as we restrict ourselves to functions that always terminate.

In the examples I gave above, the reason why this holds is that any function of the given type can be factored as something following an evaluation-like function. More generally, if there is a single `Y` and a dinatural `i :: s a a -> Y` such that every dinatural `f :: s a a -> X`, for any `X`, can be factored as `f = h . i`, then `Y` is said to be the coend of `s`.

There are different approaches to computing a coend. Above I've used inspection: the coend of `(a -> X,a)` is `X`. But there's also a kind of cheating approach where we can use an existential type to get the coend uniformly for any difunctor:

> data Coend s = forall a . Coend (s a a)

For an explanation of why that works, see Edward Kmett's post on Kan extensions. For this example, this means we expect the types `t` and `Coend (Ex1 t)` to be isomorphic. Here is the isomorphism and its inverse:

> iso :: Coend (Ex1 t) -> t
> iso (Coend (Ex1 f x)) = f x

> iso' :: t -> Coend (Ex1 t)
> iso' x = Coend (Ex1 (const x) ())
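One direction of the round trip can be checked directly. The snippet below is a standalone copy of the definitions so that it compiles on its own; the other direction relies on the free theorem:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

data Ex1 x a b = Ex1 (a -> x) b
data Coend s = forall a . Coend (s a a)

iso :: Coend (Ex1 t) -> t
iso (Coend (Ex1 f x)) = f x

iso' :: t -> Coend (Ex1 t)
iso' x = Coend (Ex1 (const x) ())

-- iso (iso' v) = const v () = v, so iso . iso' is the identity.
roundTrip :: Int -> Int
roundTrip = iso . iso'
```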

I'll leave the proof that `iso` and `iso'` are mutual inverses to you.

I hope that gives some idea of what a coend is. Informally, it captures the method by which a dinatural transformation of type `forall a . s a a -> X` is able to eliminate the quantified `a`. If you look through the Haskell libraries you'll find many dinaturals (or at least things that can be made dinatural through the use of `uncurry`).

This code and description was inspired by the discussion at nLab.