### What does topology have to do with computability? Reloaded.

Now that I’ve given the briefest possible introduction to topology, I can get back to investigating the functions from () to ().

I introduced the idea of open sets as being the sets that can be observed by an idealised type of machine. So how about turning this notion on itself? There are two possible outcomes from using one of our machines: it can respond “yes” or fail to respond at all. We can give these states the names ⊤ and ⊥ respectively, so the space of possible states is the set S = {⊤,⊥}, with the former meaning “yes”. Now suppose we have a machine to watch machines. What are the possible observable subsets of S? Well, we can easily build a machine that always responds ⊤, and we can also easily build a machine that always responds ⊥. So {} and {⊤,⊥} are observable. We can also build a machine that simply relays the state of the observed machine. That gives us the observable set {⊤}. But what about {⊥}? There’s no way we can build a machine that responds with ⊤ when it sees ⊥ and with ⊥ when it sees ⊤. In order to respond ⊤ when it sees ⊥ it needs to know how long to wait before giving up, but the original definition of machine that I gave says that we have no guarantee of how long a machine might take to respond. So no such machine is possible. In other words, the observable sets are {}, {⊤} and {⊤,⊥}, and these form a topology on S. (Trivial exercise: check the axioms for this topology.)
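To make this concrete, here’s a minimal Haskell sketch (the names are my own): represent ⊤ by the value () and ⊥ by a computation that never terminates, so the only observation you can make of a “machine” is that it eventually answers.

```haskell
-- A sketch of the two-state space S in Haskell: ⊤ is the value (),
-- ⊥ is a non-terminating computation. (Names here are mine.)
type S = ()

top, bottom :: S
top    = ()       -- the machine answers "yes"
bottom = bottom   -- the machine never answers

-- The three computable (= continuous) maps from S to S:
alwaysTop, relay, alwaysBottom :: S -> S
alwaysTop _    = top      -- answers "yes" without watching its input
relay s        = s        -- relays the watched machine's answer
alwaysBottom _ = bottom   -- never answers

-- The fourth set-theoretic map, swapping ⊤ and ⊥, cannot be written:
-- to answer "yes" on ⊥ it would have to know when to give up waiting.
```

Note that `alwaysTop bottom` still evaluates to () because Haskell is lazy: the non-terminating argument is never forced, which is exactly the behaviour of a machine that answers without watching.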

S, equipped with this topology, is known as the Sierpinski space. There’s an interesting way to think about this. In set theory we have the notion of set membership, and membership is a binary-valued thing. So the two-element set {0,1} has a special role in that it gives a way to label the answers to the question “is x in the set y?”. (In fact, this can be formalised nicely in category theory through the notion of a subobject classifier.) The Sierpinski space plays a similar role for sets when you only have a one-sided membership test. We can make this analogy precise:

Given a set X, any subset U of X can be written as f^{-1}({1}) where f is a function f:X->{0,1}. Conversely, any such function gives rise to a subset.

Given a topological space X, any open subset U of X can be written as f^{-1}({⊤}) where f is a continuous function f:X->S. Conversely, any such function gives rise to an open subset.

Now consider the continuous functions from S to S. As S has only two elements, there are only four possibilities to consider. (Almost trivial exercise: find the continuous functions. Remember, f is continuous if and only if f^{-1}(U) is open for all open U.) It turns out there are precisely three such functions. They are essentially the same functions that I listed here for the purely lazy Haskell case. So at least for this simple case, the computable functions are precisely the continuous ones. And that is now my second answer to the question a month or two ago. By assigning the correct topology to your types you can arrange that the computable functions are precisely the continuous functions. At this point you can turn the tools of topology to computer science, and you find that terms like *discrete*, *Hausdorff* and *compact*, which are all standard terms from topology defined in terms of open sets, have computational content. And of course this machinery generalises from the simple example S to any other type. (There is a proviso due to the fact that in topology arbitrary unions of open sets must be open, but this doesn’t carry over directly without a little bit of tweaking.)

One researcher making use of such ideas is Martin Escardo. When I started writing these posts I was looking at a bunch of papers by Escardo that I found hard to decipher. Since starting, however, I discovered that this document is actually a fairly easy read. So it’s not worth me saying much more except to tell you to read that. And also check out the work of Paul Taylor.
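The two halves of the analogy can be sketched in Haskell (a toy example of my own, with S modelled as () and ⊥ as non-termination): an ordinary subset is a total map into Bool, while an open subset corresponds to a semi-decision procedure that halts exactly on its members.

```haskell
type S = ()   -- ⊤ is (), ⊥ is non-termination

-- The Bool case: an ordinary subset of X is a total map X -> Bool.
-- Here U = the even integers.
isEven :: Int -> Bool
isEven n = n `mod` 2 == 0

-- The Sierpinski case: an open subset of the space of Int streams,
-- U = { xs | 0 occurs somewhere in xs }. Membership is confirmable
-- in finite time, non-membership is not, so we only get a map into
-- S, not into Bool. U = containsZero^{-1}({⊤}).
containsZero :: [Int] -> S
containsZero (0:_)  = ()              -- confirm membership and stop
containsZero (_:xs) = containsZero xs -- keep watching
```

For example `containsZero (3 : 1 : 0 : repeat 7)` terminates with (), while `containsZero (repeat 1)` runs forever: the one-sided membership test in action.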

I can now give a summary of what the last three posts have all been about:

It’s tempting to think of functions in a programming language like Haskell as being like ordinary mathematical functions and try to embed types and Haskell functions directly into Set Theory as sets and functions. After all, Haskell functions don’t have side-effects and so seem to behave much like mathematical functions. But not all functions terminate in a finite time, so they aren’t quite like ordinary functions after all. We could think of them in terms of partial functions, but partial functions don’t have a very rich structure and certainly don’t automatically rule out examples like the non-continuous function from () to (). But topology *does* provide a good framework. This idea goes back a few years, to the work of people like Smyth, Plotkin and Scott. But I’ve been programming (at least as a hobby) for 30 years, and I got my PhD in Mathematics about 16 years ago, and yet I never knew that these topics were so connected. So this work isn’t getting as much publicity as it should, even among people who know a little topology and computer science.

And I’ll finish by mentioning an application. An important property that topological spaces can have is known as compactness. I can give a rough idea of what this means quite easily: consider a finite row of pigeonholes and an infinite number of letters. There is no way of placing all of the letters in the pigeonholes without one of the pigeonholes ending up with an infinite number of letters. This isn’t the case if you have an infinite row of pigeonholes. On the other hand, consider the task of placing an infinite sequence of zero-sized dots on a closed interval on the real line. You won’t be forced to put two dots on the same point, but there will be at least one point in the interval where the dots are infinitely bunched together. For example, if you place dots at 1/2, 1/4, 1/8, 1/16, ... inside the interval [-1,1], then any small interval around zero will contain infinitely many dots. There’ll be such a point whatever scheme you try. So even though [-1,1] isn’t finite, it’s the ‘next best thing’. One important thing about compact sets is that there are often strategies to exhaustively search them. And that is precisely what my earlier post about exhaustive searches was discussing. So topological ideas give a nice way to get a handle on these non-obvious search algorithms, and characterise those types for which such searches are possible.
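The standard example of what compactness buys you computationally is exhaustive search over the Cantor space of infinite bit streams. Here is a sketch of Berger’s search algorithm, which Escardo has popularised (function names are mine); it terminates for any total predicate, because a total predicate can only ever inspect finitely many bits of its argument:

```haskell
-- The Cantor space: infinite streams of Bools. It is topologically
-- compact, which is what makes exhaustive search over it possible.
type Cantor = [Bool]

-- Produce a stream satisfying p if one exists (an arbitrary stream
-- otherwise). find and exists are mutually recursive: find commits
-- to True for the next bit exactly when some extension succeeds.
find :: (Cantor -> Bool) -> Cantor
find p = if exists (\xs -> p (True : xs))
         then True  : find (\xs -> p (True  : xs))
         else False : find (\xs -> p (False : xs))

-- Decide whether any of the uncountably many streams satisfies p --
-- and yet, for total p, this terminates.
exists :: (Cantor -> Bool) -> Bool
exists p = p (find p)
```

For instance `exists (\xs -> xs !! 2 && not (xs !! 4))` evaluates to True, while `exists (\xs -> head xs && not (head xs))` evaluates to False.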

PS In my previous post I introduced the idea of 'observability' as an intuition pump for topology. I think that idea goes back to the work of Steven Vickers. Brits might recognise that name. Check out the Wikipedia page on him if you don't.

## 11 Comments:

Hi! The link to your previous post is broken (some unwanted characters got into the href).

Thanks for the introduction to topology. It was so "elegant", linking automaton theory (the basics of it) with topology. I'm very much interested in maths, but most of these "modern" theories (like topology, category theory, etc..) don't really have any good introduction. (At least I don't know of any.)

The links to the paper by Escardo and Paul Taylor aren't working but apart from that, thanks for the great post. I'm really enjoying your expositions.

Every link's broken? Either I've had a complete mental breakdown and forgotten the most basic HTML, or blogger.com is acting up.

"But not all functions terminate in a finite time, so they aren’t quite like ordinary functions after all."

I hardly know enough to be dangerous, let alone reliable, on this topic, but it seems to me like you are confusing functions with algorithms. A certain algorithm used to compute a result in a particular way may not terminate, but mathematically speaking, it would be possible, for example, to provide answers by pulling the values out of a prebuilt map of very large size.

It is not about terminating/non-terminating, but rather that the machine must be incomplete and finite, so there are certain functions for which it must answer "I don't know". Non-termination is one way of doing that if it cannot be better programmed to avoid situations beyond the limits of the computer (like even simply terminating after a maximum time limit and returning the "I don't know" answer).

Anonymous,

That's kind of the point. There's a long mathematical tradition of reasoning about functions and it'd be nice to use that in programming. 'Functions' in imperative programming languages aren't functions, so it's hard to apply the method there. In 'functional' programming, the things we call 'functions' still aren't quite functions in the usual sense, for the reasons you point out. But they look so similar to mathematical functions much of the time that it'd be nice if there were a fix. Well, this is one fix: reinterpret types as topological spaces; then 'functions' are continuous functions, even when the underlying algorithm doesn't terminate. So even though we're really talking about algorithms, we can reason about them in a functional way.

You could even use a 'total' programming language where every 'function' is guaranteed to terminate. When that's the case, 'functions' become functions.

Thanks for an interesting post. To nitpick, you say:

"At this point you can turn the tools of topology to computer science and you find that terms like discrete, Hausdorff and compact, which are all standard terms from topology defined in terms of open sets, all have computational content."

Apparently the Hausdorff space is not very interesting; take a look at Alex Simpson's lecture notes (l3.pdf).

The analogy between observability and open sets, whilst nicely treated in Steven Vickers' book, pre-dates his involvement in the subject. Michael Smyth played a large part in it, though the names Abramsky, Dijkstra, Floyd, Hoare, Plotkin and Scott also come to mind.

Alex Simpson's "lecture 3" covers similar ground to this blog, but I see no mention of Hausdorffness in it. Certainly cpos (with _|_) are not Hausdorff, except trivially.

However, a space X is discrete or Hausdorff iff the diagonal subset of X×X is respectively open or closed. In these cases, there is a corresponding map to the Sierpinski space. For a discrete space it is called = (equality) and for a Hausdorff space # (inequality or apartness).

In particular, the real line is Hausdorff but not discrete. Does Peter think that this is not an important space? My paper (click on my name above) is about this space.
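The apartness map mentioned above can be sketched in Haskell for infinite streams (a toy example, with ⊥ as non-termination as in the post): apartness halts with an answer exactly when the two streams differ at some finite position, which is why it is Sierpinski-valued, while stream equality, which would need to check infinitely many positions, is not.

```haskell
type S = ()   -- Sierpinski space: () is ⊤, non-termination is ⊥

-- Apartness (#) for infinite streams: halts with () exactly when the
-- streams differ at some position. Streams are assumed infinite, so
-- the patterns below are the only cases.
apart :: Eq a => [a] -> [a] -> S
apart (x:xs) (y:ys)
  | x /= y    = ()          -- found a witness: the streams are apart
  | otherwise = apart xs ys -- keep looking
```

For example `apart (repeat 0) (0 : 0 : 1 : repeat 0)` terminates with (), while `apart (repeat 0) (repeat 0)` runs forever: the Hausdorff side of the analogy, a one-sided test for inequality.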

Paul: Sorry, I am not a topologist of any stripe whatsoever, so far be it from me to argue with you.

To clarify what I meant: in the pointed-CPO semantics that are occupying my brain at the moment, the Hausdorff topology is not interesting (according to Alex Simpson, lecture notes #3, bottom of p3). I drew the conclusion, without thinking too much harder, that for the traditional notions of computation (e.g. over the natural numbers, not the exact reals) the Hausdorff spaces were not interesting. By all means correct me if that's wrong.

In any case I wanted to draw attention to those lecture notes, which are a nice (formal) complement to sigfpe's post.

Shouldn't "the observable sets are {}, {⊤} and {{}, ⊤}" be "the observable sets are {}, {⊤} and {⊤,⊥}"?


Sorry, Peter, I didn't mean to show you up as the class dunce. Dan (sigfpe) and Alex are presenting elementary material in a tutorial fashion that is 20-30 years old. My interjections from the back of the classroom are just meant to point out how this connects with more contemporary and advanced ideas. The URL above is my home page, for a change.
