### Monads, Kleisli Arrows, Comonads and other Rambling Thoughts

(Below I use 'arrow' to mean an arrow in a category (or an ordinary Haskell function) but an Arrow (with capital A) is one of the objects defined by Hughes. I'm not really going to say much about Arrows.)


A while back I looked at a paper on comonads and streams but couldn't really see what comonads were offering. Just recently, I was thinking about a design for a dataflow language for microcontrollers and found that Arrows captured some of the functionality I wanted. But after writing some Haskell code I realised that I was writing the same 'glue' over and over again. So it was becoming clear that arrows were too general and I needed a class that filled in the glue for me automatically. I wrote the code and it looked familiar. I realised that I had rediscovered comonads and they now seemed entirely natural.

First, let me review monads. There are countless web sites that purport to be introductions to monads but my way of looking at them seems a little different to most of those accounts. (It's not in any way unusual, just not in the beginner's guides.) I like to think in terms of Kleisli arrows. I find this perspective unifies the different applications of monads such as encapsulating side effects or doing simple logic programming using the list monad.

A Kleisli arrow is simply an arrow (ie. in Haskell, a function) a→m b where m is a monad. What monads allow you to do is compose these things. So given f::a→m b and g::b→m c there is an arrow a→m c. To me this is the raison d'être of monads but as far as I can see, the standard interface defined in Control.Monad doesn't actually provide a name for this composition function. (Though the Control.Arrow interface does provide a Kleisli arrow composition function.)
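In code, that composition can be written directly in terms of (>>=). A small sketch (the name composeK is mine; more recent versions of Control.Monad export this operation as (>=>)):

```haskell
-- Kleisli composition written out with (>>=).
-- (composeK is my own name for it; modern Control.Monad calls it (>=>).)
composeK :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
composeK f g = \a -> f a >>= g

-- Two example Kleisli arrows for the Maybe monad.
safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x | x < 0     = Nothing
           | otherwise = Just (sqrt x)

-- A composed arrow: reciprocal, then square root, with
-- failure propagated automatically by the monad.
recipThenSqrt :: Double -> Maybe Double
recipThenSqrt = composeK safeRecip safeSqrt
```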

So here's why I like to think in terms of Kleisli arrows: Consider what is almost the prototypical example of a Haskell monad - the Writer monad. Suppose you have something that is conceptually a function f::a→b but you want it to output a list of items to a log as a side effect. In a functional programming language there is no way out - you're pretty well forced to return the thing you want to log along with the object of type b. If your log is a list of objects of type d, then you need to modify f to return an object of type (b,[d]) instead of b. But here we have a catch, if we have f::a→(b,[d]) and g::b→(c,[d]) (ie. conceptually a function of type b→c producing a log of type [d]) then we want to compose these things. But the argument to g is no longer the return type of f. We need some plumbing to concatenate these functions. In this case the plumbing needs to take the output of f, split off the log keeping it for later, pass the remainder to g, and then concatenate the log from f before the log of g. And this is what monads do, they provide the plumbing. (If you knew nothing about monads and wrote the obvious code to plumb these things together, concatenating the logs, probably the first thing you wrote would look just like part of the definition of the Writer monad, except the Writer monad is generalised to monoids instead of lists.)
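Here is what that hand-rolled plumbing might look like (a sketch with my own names; the real Writer monad generalises the list of log entries to any monoid):

```haskell
-- A 'logging' function returns its result paired with a list of log entries.
type Logged d b = (b, [d])

-- The plumbing by hand: run f, hold on to its log, run g on f's result,
-- then concatenate f's log in front of g's.
composeLogged :: (a -> Logged d b) -> (b -> Logged d c) -> (a -> Logged d c)
composeLogged f g = \a ->
  let (b, log1) = f a
      (c, log2) = g b
  in  (c, log1 ++ log2)

-- Two conceptually ordinary functions that also log what they did.
double :: Int -> Logged String Int
double x = (2 * x, ["doubled " ++ show x])

increment :: Int -> Logged String Int
increment x = (x + 1, ["incremented " ++ show x])
```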

Let's work through the details of composing Kleisli arrows: we want to compose f::a→m b and g::b→m c. The obvious thing to do is add a gluing map between them m b→b. But that's uninteresting as it just throws away the fanciness. Instead we use the fact that m is a functor (part of the mathematician's definition of monad) to lift g to a function m b→m (m c). This now composes nicely with f but the catch is that we end up with something twice as fancy. However, part of the definition of monads is that there is a map m (m c)→m c. (Twice as fancy is still just fancy.) And that can now be used to finish off the composition.
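The two steps correspond to fmap and join in Haskell, so the whole composition can be assembled mechanically (a sketch; kleisli is my own name for it):

```haskell
import Control.Monad (join)

-- The composition assembled exactly as described above:
-- lift g with fmap (m is a Functor), giving m b -> m (m c),
-- compose with f, then flatten the doubled fanciness with join.
kleisli :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
kleisli f g = join . fmap g . f
```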

Consider another example of a monad, the list monad. The idea is that we want a function to return multiple values, not just one. So instead of f::a→b we have f::a→[b]. But suppose we have another one of these things, g::b→[c]. How do we concatenate these? Conceptually what we want to do is run g on each of the return values of f in turn and then collect up all of the results in one big list. This is exactly what the list monad does, it provides the plumbing to concatenate functions in this way.
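For lists, "run g on each result of f and collect everything in one big list" is just concatMap. A small sketch (the helper names are mine):

```haskell
-- The list monad's plumbing, written out: run g on each of f's
-- results in turn and gather all the results into one big list.
composeList :: (a -> [b]) -> (b -> [c]) -> (a -> [c])
composeList f g = \a -> concatMap g (f a)

-- An example multi-valued function: all divisors of a number.
divisors :: Int -> [Int]
divisors n = [d | d <- [1..n], n `mod` d == 0]
```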

In both cases we have f::a→m b and g::b→m c and we get a function a→m c. Or more informally, monads give a way to compose functions that map ordinary types to fancy types, providing the glue that allows the fancy output of one function to connect to the non-fancy input of the next. And I like to view things this way because a functional program is a concatenation of lots of functions - so it's natural to think about monads as simply a new way of building programs by concatenation of functions.

Anyway, I was thinking about stream functions. A stream function from a to b is simply a function [a]→[b]. (Strictly speaking we're only considering infinite lists - streams.) This doesn't quite fit the pattern of non-fancy→fancy, it's more like fancy→fancy. And that's what the Arrow interface allows us to do. But I'm not going to talk about Arrows here except to say that I started using them to write some stream code. But then I noticed that I was only interested in *causal* stream functions. This is a function where the nth element of the output depends only on the first n values of the input. This pattern fits many types of processing in dataflow applications such as audio processing. In order to compute the nth element of the output of a causal f::[a]→[b] we need only compute a function f::[a]→b. To compute the entire stream we repeatedly use this function to generate each element in turn. So, for example, if the input is the stream [x1,x2,x3,...] then the output is [f [x1], f [x1,x2], f [x1,x2,x3], ...]. In other words a stream function is really a function f::[a]→b but we need special glue to concatenate them because the nth element of the output concatenation should look like g [f [x1], f [x1,x2], f [x1,x2,x3], ..., f [x1,...,xn]].

If you followed that then you may have noticed the pattern. We want to compose two functions that map fancy types to non-fancy types to produce a new function that maps fancy types to non-fancy types. It's the exact opposite of what monads do. And this is exactly what comonads are about: they are the correct abstraction to use when writing glue for fancy-to-non-fancy functions. It all seems so natural I'm astonished to find that Control.Comonad *isn't* a part of the standard Haskell distributions.

Let's look at the details more closely. Let's still use m to represent a comonad. We need to compose f::m a→b and g::m b→c. m is a functor (by definition of comonad) so we can lift f to a function of type m (m a)→m b. This composes directly with g. And to finish it off we use the function m a→m (m a) provided by the definition of a comonad.
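A minimal sketch of this coKleisli composition, with a tiny Comonad class of my own (the comonad package on Hackage calls counit 'extract' and cojoin 'duplicate'), plus the simplest possible instance just to see the types work out:

```haskell
-- A minimal Comonad class, just enough for the composition in the text.
-- (Names are mine; the Hackage comonad package uses extract/duplicate.)
class Functor w => Comonad w where
  counit :: w a -> a
  cojoin :: w a -> w (w a)

-- Compose f::m a -> b and g::m b -> c exactly as described:
-- duplicate with cojoin, lift f with fmap, then apply g.
cokleisli :: Comonad w => (w a -> b) -> (w b -> c) -> (w a -> c)
cokleisli f g = g . fmap f . cojoin

-- The trivial identity comonad, purely to exercise the types.
newtype Id a = Id a

instance Functor Id where
  fmap f (Id x) = Id (f x)

instance Comonad Id where
  counit (Id x) = x
  cojoin (Id x) = Id (Id x)
```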

And in even more detail for the case of (lists considered as) streams. The lift operation is simply given by the usual map function. You lift a function f by applying it to each element in the stream in turn and returning the stream of results. The function m a→m (m a) is more interesting. It maps [x1,x2,x3,...] to [[x1],[x1,x2],[x1,x2,x3],...]. In other words it maps a stream to its list of 'histories'. My use of the loaded word 'history' should be a hint about where causality comes in. If we lift a function f::[a]→b to act on this list of histories we get [f [x1], f [x1,x2], f [x1,x2,x3], ...]. In other words, a comonad gives exactly what we need to work with streams.
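Modelling streams as plain infinite lists, the histories map and the resulting glue might look like this (a sketch; the names are mine):

```haskell
-- The map m a -> m (m a) for streams-as-infinite-lists:
-- [x1,x2,x3,...] becomes its stream of histories [[x1],[x1,x2],[x1,x2,x3],...].
histories :: [a] -> [[a]]
histories xs = [take n xs | n <- [1..]]

-- Lifting a causal function f :: [a] -> b over the histories gives the
-- output stream [f [x1], f [x1,x2], f [x1,x2,x3], ...].
runCausal :: ([a] -> b) -> [a] -> [b]
runCausal f = map f . histories
```

For example, runCausal sum turns a stream into the stream of its running totals: a causal function if ever there was one.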

Anyway, one of the cool things about monads is the special syntactic sugar provided by Haskell that allows us to write what looks like imperative code even though it's actually functional. I've been trying to figure out what similar sugar might look like for comonads. But I can't quite figure it out. I can see roughly what it'd look like. You'd be able to write lines of code like

```
codo
  b <- 2*(head a)                      -- double the volume
  c <- 0.5*head b+0.5*head (tail b)    -- simple FIR filter
```

so that even though it 'looks' like b is merely twice the head of a, the compiler would produce the appropriate glue to make b actually be the stream whose head is 2*(head a). In fact, you can do something a bit like this using Arrow syntax. But I can't quite fill in the details in such a way that it nicely parallels the syntax for monads.
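To make the intended glue concrete: if each history is stored most-recent-first, so that head picks out the current sample, then one line of such a block could desugar by extending its right-hand side over every history. This is a speculative sketch of my own, not real syntax support:

```haskell
-- Histories stored most-recent-first: for input [x1,x2,x3,...] the nth
-- history is [xn,...,x2,x1], so 'head' is the current sample.
historiesRev :: [a] -> [[a]]
historiesRev xs = [reverse (take n xs) | n <- [1..]]

-- A hypothetical desugaring of one codo line: extend a function of the
-- history over the whole input stream.
codoLine :: ([a] -> b) -> [a] -> [b]
codoLine body = map body . historiesRev

-- b <- 2*(head a): the head of the result is 2*(head a), and in fact
-- its nth element is twice the nth element of a.
doubleVolume :: [Double] -> [Double]
doubleVolume = codoLine (\a -> 2 * head a)
```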

(Silly me...I think I've just figured it out now. The 'codo' block is different from a 'do' block because it needs to define a coKleisli arrow, not an element of the comonad. Hmmm...)

And just some final words: I believe Arrows are the wrong approach to functional reactive programming. Comonads are

*much* more appropriate because they model causal functions much more closely - and causal stream functions are what FRP is all about.
## 12 Comments:

"Silly me...I think I've just figured it out now."

Before anyone is misled, I should add that what I figured out wasn't that nice. I think that a nice definition of 'codo' that is dual to 'do' is an open problem.

The link to the 'paper' seems to be out of date. Do you know of another reference as I would like to have a peruse to see where you are 'coming from' so to speak. Your comments about a reverse abstraction of a monad are extremely curious. Now I know I won't be sleeping early tonight :-)

binaryten,

The paper is "The Essence of Dataflow" by Uustalu and Vene. But the link isn't broken.

For a concrete example of a comonad check out my own "Evaluating cellular automata is comonadic" (http://sigfpe.blogspot.com/2006/12/evaluating-cellular-automata-is.html).

Well, my idea of a "codo" notation would be to simply replace join, return and bind with their comonadic duals in the translation of do notation. Example in the stream comonad described here:

http://www.cas.mcmaster.ca/~carette/CAS706/F2006/presentations/comonads.pdf

```
fibo = 1 fby 1 fby
         codo x <- fibo
              ((+) (counit x)
                   (counit (next x)))
```

It seems to be alright, it has different types from the monadic counterpart: w a <- w a, and it finishes with an expression of type a.

Hmm, I was looking at making lists into a comonad, and got distracted by the fact that the empty list messes everything up. So I defined an infinite list data type and made that a comonad, but your join doesn't work, because those sublists are finite, so instead:

```
cojoin s = s : cojoin (tail s)
```

i.e. cojoin [x1,x2,x3..] = [[x1,x2,x3..],[x2,x3,x4..],[x3,x4,x5..]..]

Don't quite know what this represents though, infinite pasts maybe?

Re: nolrai_

I just had the same thought today... I think, if you are using streams, then each stream represents the _future_ from that point onward. If you want a comonad that talks about the past, try a non-empty list (i.e. list'[a] = a * (1 + list'[a])). Then your join will get a list of histories.

(Disclaimer: not my idea, see Uustalu and Vene's "Signals and Comonads".)

"However, part of the definition of monads is that there is a map m (m c)→m c."

Which definition? I could not find such an operation in the standard Monad type class.

yakov,

In Control.Monad you'll find the function join :: Monad m => m (m a) -> m a

Mathematicians usually define monads in terms of this.

I know that this is centuries old, but this may be helpful in finding the right syntax for comonads: Pfenning and Davies, 2001. The category-theoretic semantics, a monad for the "diamond" and a comonad for the "box", is discussed in this paper: Alechina et al., 2001. The box and the diamond are essentially the "m" functor used above.

In the case of monads you can form a fancy (monadic) term trivially by returning an ordinary value (this is called diamond introduction), but to consume it (to eliminate the diamond) you need a "bind" construct. For comonads, it turns out that use of fancy data (box elimination or unboxing) is trivial, but forming a piece of fancy data (box introduction or boxing) is the big deal: you have to form it without a reference to local variables.

Monad is the model for any parametric type that we know the generative structure of, so we can compose functions on lifted outputs, because the type knows how to lift (i.e. construct, generate, 'unit' or 'return') instances of its type parameter to its structure.

Comonad is the model for any parametric type whose structure we don't know how to generate, but where we can observe instances of the type parameter in its structure as they occur. We will only know its final structure when it is destructed, and observation ceases. We can't lift instances of its type parameter to its structure, so we can't compose functions on outputs. Instead, we can compose functions with lifted inputs (and optionally outputs, i.e. map on observations), because the type has observations.

Conceptually the monad vs. comonad duality is related to the duality of induction vs. coinduction, and initial vs. final (least vs. greatest) fixpoints, because we can generate structure for a type that has an initiality, but we can only observe structure until we reach a finality.

Induction and Co-induction

Initiality and Finality

Wikipedia Coinduction

I had visited this blog page before (and not completely grasped it), then I read this page again trying to conceptualize the sum vs. product duality for eager vs. lazy evaluation.

Perhaps I am in error, but it appears that with lazy evaluation and corecursion, a monad can be used instead of a comonad, e.g. isn't it true that a stream can be abstracted by a monadic list in Haskell?

So dually, am I correct to interpret that laziness isn't necessary for modeling compositionality of coinductive types, when there is a comonad in the pure (referentially transparent) part where the composition is?

Followup to the two questions in my prior comment.

Monad can't abstract a comonad, because it has no method, m a -> a, for creating a new observation. A monad can abstract the history of prior observations. Afaics, for a language with multiple inheritance, a subtype of comonad could also be a subtype of monad, thus providing a monadic interface to the history of observations. This is possible because the comonad observation factory method, m a -> a, is impure (the state of the comonad blackbox changes when history is created from it).

Composition of functions, m a -> b, which input a comonad is pure (i.e. no side-effects, referentially transparent, declarative not imperative) where those functions are pure (e.g. they do not invoke m a -> a to create a new observation). In short, the method (m a -> b) -> m a -> m b is pure if m a -> b is.

"This is a function where the nth element of the output depends only on the first n values of the input. This pattern fits many types of processing in dataflow applications such as audio processing."

Well, usually... there are exceptions.
