# A Neighborhood of Infinity

## Saturday, February 02, 2008

### Purely functional recursive types in Haskell and Python

"""
This post is simultaneously Python and literate Haskell.

There is a certain truth to Greenspun's tenth rule of programming. A Python project I was developing at work has slowly mutated into a compiler for a programming language without my planning it that way. Usually (I assume) compilers parse their input and construct an AST, which is passed to the compiler proper. My code didn't have an AST, just a bunch of lambdas. I realised that I'd actually come across a real-world example of what Wadler was talking about in Recursive Types for Free!.

In Haskell, the foldr function reduces a list using a binary function and some initial value. Suppose the function is called a and the initial value is b. Take a list, for example [1,2,3]. Now write it without using list notation, directly in terms of its constructors, i.e. 1:(2:(3:[])). foldr replaces (:) by a and [] by b, so this becomes a(1,a(2,a(3,b))). The best-known example is a=(+) and b=0, giving 1+2+3+0 and hence the sum of the values in the list. Here is how we'd use foldr in Haskell:
> x = foldr (+) 0 [1,2,3]

The interesting thing is that anything you might want to know about a (finite) list can be extracted using foldr. There is a sense in which it is the universal function on lists, and all other functions can be factored through it. For example, we can implement head and tail as follows:
> head = foldr const undefined
> tail x = let Just (_,t) = foldr tailHelper Nothing x in t where
>   tailHelper x Nothing      = Just (x,[])
>   tailHelper x (Just (y,z)) = Just (x,y:z)

So if x is a list, \a b -> foldr a b x tells you everything you could want to know about the list. In other words, you can completely replace the list itself with functions like this. In fact, we can replace the list constructors with functions that build such functions:
> nil a b = b
> cons h t a b = a h (t a b)

We can use nil and cons just like [] and (:). In fact, given an element defined by
> y = cons 1 (cons 2 (cons 3 nil))

We can convert it to a conventional list via
> z = y (:) []

So foldr embeds a list as a function.

We can write the same thing in Python. (Note that Python already has a cousin of foldr, called reduce, though it folds from the left.)
"""def foldr(a,b,l):    if l==[]:        return b    else:        return a(l[0],foldr(a,b,l[1:]))print foldr(lambda x,y:x+y,0,[1,2,3])"""

It's surprisingly easy to implement cons and nil in Python too:
"""def nil(a,b):    return bdef cons(h,t):    def _cons(a,b):        return a(h,t(a,b))    return _consl = cons(1,cons(2,cons(3,nil)))print l(lambda x,y:x+y,0)print l(lambda x,y:[x]+y,[])"""

Folds can be generalised to any recursive type, not just lists. (Strictly speaking I mean recursive rather than corecursive types. Folds aren't appropriate for infinite structures.) Note how for lists, foldr takes two arguments besides the list: a two-argument function (standing in for (:)) and a value (standing in for []). Applying a fold simply replaces the list constructors (:) and [] with these. Generalised folds do something similar: each constructor gives rise to an argument to the fold and, when the fold is evaluated, each constructor is replaced with the appropriate function. Here's an example:

Now consider a simple expression type in Haskell:
> data Expr = X | Const Int | Binop (Int -> Int -> Int) Expr Expr

This is a recursive type so it has a generalised fold associated with it. This fold will take three arguments, one for each of X, Const and Binop, and each one will take the same number of arguments as the constructor. Here it is:
> efold :: a -> (Int -> a) -> ((Int -> Int -> Int) -> a -> a -> a) -> Expr -> a
> efold x _ _ X = x
> efold _ c _ (Const a) = c a
> efold x c b (Binop f lt rt) = b f (efold x c b lt) (efold x c b rt)

efold simply replaces each constructor with an application of the matching function recursively through the entire Expr.

Anything you might want to do to an Expr can be done using efold, and many things you might naturally want to do with an Expr are particularly easy to write using it. Here are functions to (1) evaluate the expression with X equal to some Int, (2) determine whether or not an expression is free of references to X, and (3) rebuild the expression unchanged:
> eval x e = efold x id id e
> freeX e = efold False (const True) (const (&&)) e
> identity e = efold X Const Binop e

Now we can do the same thing we did above: replace the Expr structure with its corresponding fold. And again, I'm implementing it in Python rather than Haskell:
"""def X():    def _X(x,c,b):        return x    return _Xdef Const(a):    def _Const(x,c,b):        return c(a)    return _Constdef Binop(f,l,r):    def _Binop(x,c,b):        return b(f,l(x,c,b),r(x,c,b))    return _Binopdef eval(x,e):    return e(x,lambda x:x,lambda f,l,r:f(l,r))def freeX(e):    return e(False,lambda x:True,lambda f,l,r:l and r)"""

So we have translated the Haskell algebraic type Expr into functional expressions in Python. Here are some examples of their use:

Evaluating X, 2 and X+2 at X=3:
"""print eval(3,X())print eval(3,Const(2))print eval(3,Binop(lambda x,y:x+y,X(),Const(2)))"""

Testing whether 10-2 and X()+2 are free of references to X():
"""print freeX(Binop(lambda x,y:x-y,Const(10),Const(2)))print freeX(Binop(lambda x,y:x+y,X(),Const(2)))"""

You can even implement a version in a blend of functional and OO style:
"""class X:    def __call__(self,x,c,b):        return xclass Const:    def __init__(self,a):        self.a = a    def __call__(self,x,c,b):        return c(self.a)class Binop:    def __init__(self,f,l,r):        self.f = f        self.l = l        self.r = r    def __call__(self,x,c,b):        return b(self.f,self.l(x,c,b),self.r(x,c,b))"""

This can sometimes be an inefficient style of programming, especially so in a strict language. Look again at tail for the cons/nil lists. But many uses are quite efficient, and folds capture a very common design pattern.
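To see where the cost comes from, here is tail for the fold-encoded lists, mirroring the Haskell tailHelper above (my own Python 3 sketch): extracting the tail forces a refold of the entire list.

```python
# Fold-encoded lists, as in the post (Python 3 syntax).
def nil(a, b):
    return b

def cons(h, t):
    return lambda a, b: a(h, t(a, b))

def tail(l):
    # Fold to (head, rest-as-plain-list) pairs, then drop the head.
    # tail of the empty list is left undefined, as in the Haskell version.
    def helper(x, acc):
        if acc is None:          # acc is None at the end of the list
            return (x, [])
        y, z = acc
        return (x, [y] + z)      # rebuilds the whole rest of the list
    _, t = l(helper, None)
    return t

l = cons(1, cons(2, cons(3, nil)))
print(tail(l))  # [2, 3]
```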

When I first wrote this post I left out any mention of the paper's main point. This revision fixes that.

Wadler's paper also describes a dual version of this for codata such as streams. But as far as I understand it's not very interesting.
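For a concrete picture of the dual idea, here is my own rough sketch (Python 3, not taken from Wadler's paper): where a fold-encoded list is defined by how it is consumed, a stream is defined by how it is produced, i.e. by a seed plus a step function, and a consumer can only observe it by stepping.

```python
# A stream is a seed plus a step function returning (head, next_seed).
# unfold packages them as an observation: "give me the first n elements".
def unfold(step, seed):
    def take(n):
        out, s = [], seed
        for _ in range(n):
            head, s = step(s)
            out.append(head)
        return out
    return take

# The (conceptually infinite) stream of naturals, sampled finitely.
naturals = unfold(lambda n: (n, n + 1), 0)
print(naturals(5))  # [0, 1, 2, 3, 4]
```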

It's interesting that theory about static types has something to say about programming in a dynamically typed programming language.

Just so you know, my work project doesn't look anything like the code above.

Oh...and I guess you could say this was a form of the visitor pattern. Ugh. It's hideously complicated in C++.
"""

Indeed, "many things you might naturally want to do with an Expr are particularly easy to write using" efold, and it may even be more efficient because you can eliminate tagging overhead. Jacques Carette, Oleg Kiselyov, and I have been writing partial evaluators, CPS transformers, and other interpreters in this way, to good effect.

suppressingfire said...

Good post. I think what's missing is a discussion of the connection with lambda calculus, since all you're really describing is a couple of implementations in concrete languages of standard l-c types.

Mikael said...

Now I -really- want to see (or write) a discussion of recursive data types and generalized folds in the language of operads! What you said a generalized fold is looks to me very much like the construction of a free operads on a set of operations. (and/or possibly representations of such)

sigfpe said...

Mikael,

Well there's already a nice description in terms of initial F-algebras but I think I see what you mean.

sigfpe said...

> This representation is almost the same as in §1.1, only written with lowercase identifiers.

At one point I started writing exactly the same sentence when composing my blog post. I'll have to read the rest of that paper, it seems interesting.

Anonymous said...

One quick point. If you want to test for an empty list, it is better to do:

if not l:

rather than:

if l==[]:

Jeremy Gibbons said...

You say that "Wadler's paper also describes a dual version of this for codata such as streams. But as far as I understand it's not very interesting." I disagree; I think the dual is very interesting! Whereas the inductive version captures algebraic datatypes as purely functional recursive types, the coinductive version captures abstract datatypes. I exploit this in my paper Unfolding Abstract Datatypes. (I'd explain here, but the margin is too narrow.)

Anonymous said...

Minor typo: "Evaluating X, 2 and X+2 at X=2:" should be "Evaluating X, 2 and X+2 at X=3".

Thanks for the post, I enjoyed it very much.

FutureNerd said...

"""
(I'm leaving a comment, heh.)

I thought a couple of your examples would have been clearer by first saying

"""
def plus(a,b):
    return a+b
"""

(Wanting Python to have a syntax for that is the kind of pressure that created C++!)

Funny omission...

...each constructor is replaced with the appropriate function. Here's an example:

Now consider a simple expression type in Haskell:

...as in, "Wanna see me do it again?" or, "Yes, in Python it's that simple!" I do get the idea, though.

Is there an efficiency, readability or code style reason why you use internal defs with just a return line, instead of lambdas? I.e., (why) do you prefer

"""
def cons(h,t):
    def _cons(a,b):
        return a(h,t(a,b))
    return _cons

"""

to

"""
def cons(h,t):
    return lambda a,b: a(h,t(a,b))
"""
?

I wonder whether there's a compiler somewhere that recognizes nested lambdas as conses or sequential arrays (I infer that Haskell doesn't?).

Interesting stuff, I got here from your "Programming with impossible functions."
"""

sigfpe said...

FutureNerd,

Can't remember why I didn't use lambdas. Probably the sort of paranoia expressed here: http://math.andrej.com/2009/04/09/pythons-lambda-is-broken/

FutureNerd said...

sigfpe--

Aaaugh, scary but interesting post! However, the problem discussed there
* isn't with lambdas per se,
* it's with a closure (lambda or def) referring to loop variables in the enclosing function
* so doing the closure with def doesn't change that situation
* anyway it doesn't come up in your situation (referring to the enclosing function's parameters)

My sense even after that post is that Python's closures are about as pure or impure as Scheme's (unless you think of the tail-call memory leak as impurity).

Apply paranoias sparingly!

Any chance you might be able to put in an entry for http://rosettacode.org/wiki/Pattern_matching and http://rosettacode.org/wiki/Proof? Maybe basing it on what has been done for Tcl/Haskell?

It would be appreciated.

Thanks.

Steven Shaw said...

Thanks for this post, it helped me to understand the Empty/Cons referred to in this recent post

http://apocalisp.wordpress.com/2011/05/30/imperative-vs-functional-programming/

I'm wondering if the more complex example with the Expr AST is missing something, since it still uses the algebraic type, i.e. it isn't represented simply by an encoding of functions.