### Two Kinds of Mathematics

A recent comment by augustss made me realise that I mentally classify mathematics into two different kinds. I wonder if other mathematicians do the same - so here's your chance to post a comment saying whether or not you agree.

It seems to me that mathematical theorems are either *structural* theorems or *content*. By structural I mean that they set up some machinery and show that the machinery does what we expected it to do, while content refers to 'surprising' facts about the mathematical universe. (I think 'structural' is a good choice of word here; 'content' doesn't feel quite right, but it's the best I could come up with.)

For example, think about a subject like algebraic topology. We start by learning about point set topology. We might define the product topology and then from that show that the projections from the product space to the original space are continuous. These are structural results. They simply tell us that we've defined the product topology in the right way, and the reason why they are the right way is that someone wanted to do topology on the cartesian product of sets and thought hard until they found a way to make this work. They (probably) didn't make a discovery like - hey, there's this amazing topology on the cartesian product that gives continuous projections. They set out to make the product topology in order to prove other stuff, like the way a silversmith or a computer scientist might make tools for themselves for a particular application.
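To pin down what "the right way" means here: the product topology is the coarsest topology for which both projections are continuous, and that choice is characterised by a universal property. A sketch of the standard statement:

```latex
% Product topology: give X \times Y the coarsest topology making the
% projections \pi_X and \pi_Y continuous. Its universal property:
f : Z \to X \times Y \ \text{is continuous}
\iff
\pi_X \circ f \ \text{and} \ \pi_Y \circ f \ \text{are continuous}.
```

In other words, the structural theorem certifies that maps into the product are exactly what you'd want them to be.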

On the other hand, people later started discovering bizarre properties of homotopy groups of spheres. Consider the Hopf fibration which is a beautiful way of building the 3-sphere out of 1-spheres. This was a genuine discovery. It's not just a trivial consequence of the topological machinery but a new result that wasn't plugged in from the beginning. That's what I mean by content.
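For concreteness, writing $S^3$ as the unit sphere in $\mathbb{C}^2$ and $S^2$ as the Riemann sphere $\mathbb{C} \cup \{\infty\}$, the Hopf map has a very short standard description:

```latex
% The Hopf map sends a point of S^3 to the ratio of its coordinates.
p : S^3 \to S^2, \qquad p(z_0, z_1) = z_0 / z_1 .
% Each fibre p^{-1}(q) = \{ (\lambda z_0, \lambda z_1) : |\lambda| = 1 \}
% is a circle, so S^3 decomposes into linked copies of S^1.
```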

You invent mathematical machinery, prove structural theorems about it which show that it does what you want, and then you start discovering interesting results with the machinery. In group theory you might prove structural results like the isomorphism theorems, which really just show that quotients are what you think they should be. And then you start discovering content, like the fact that A_n is simple for n ≥ 5.

Structural theorems tend to be more universal and feel more 'invented'. Content theorems are more specific and feel more 'discovered'. And I'm really talking about a spectrum rather than a binary classification.

Most branches of mathematics have a mixture of the two types of theorem. Typically the structure theorems are used as a tool to discover content. In some sense the content is the end and the structure is the means. But category theory seems different in this regard - it seems to be mainly about structure. Every time I read category theory I see ever more ingenious and abstract tools but I don't really see what I think of as content. What's hard in category theory is getting your head around the definitions and often the theorems are trivial consequences of these. For example consider the Yoneda lemma as described here. This is a pretty challenging result for beginning category theorists. And yet these notes say: "once you have thoroughly understood the statement, you should find the proof straightforward". This exactly fits with what I'm getting at: the theorem is an immediate consequence of the definitions and the proof essentially shows that the machinery does what you'd expect it to do.
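For reference, here is the usual statement of the lemma (for a locally small category $\mathcal{C}$, an object $A$, and a functor $F : \mathcal{C} \to \mathbf{Set}$):

```latex
% Yoneda lemma: natural transformations from a hom-functor to F
% correspond exactly to elements of F(A).
\mathrm{Nat}\!\left(\mathrm{Hom}_{\mathcal{C}}(A,-),\, F\right)
\;\cong\; F(A),
\quad \text{naturally in } A \text{ and } F.
```

All the difficulty is in unwinding what naturality means here; once that's done, the proof writes itself.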

Computer science (at least the more formal parts) is a lot like structural mathematics. It's about inventing machinery (lambdas, coalgebras, state machines) and then proving that the machinery does what you expect (the diamond property of reductions in the lambda calculus, etc.). If we compare with group theory the differences become apparent. A mathematician might consider groups in general (general machinery), specialisations of groups such as rings and fields (more machinery), and then specific groups like the Monster or the alternating groups (content). A computer scientist will look at things like typed lambda calculus, untyped lambda calculus, lambda calculus with this or that extension, but they tend not to look at specific instances in the way that there are specific groups.
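The diamond property mentioned above is itself a textbook structural statement: it just says the machinery is insensitive to the order in which reductions are performed. Schematically:

```latex
% Diamond (confluence) property for a reduction relation \twoheadrightarrow :
\text{if } M \twoheadrightarrow N_1 \text{ and } M \twoheadrightarrow N_2,
\text{ then there exists } P \text{ with }
N_1 \twoheadrightarrow P \text{ and } N_2 \twoheadrightarrow P .
```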

Very rarely there is a content theorem that follows from the structural theorems about the tools of computer science. Here's one: seven binary trees are isomorphic to one. This is a really neat bit of content. It isn't a direct consequence of the definition of trees. When trees were invented nobody thought "what structure can I invent that captures the notion that seven of them are isomorphic to just one?". This is a discovery, a new result that wasn't plugged in from the beginning.
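A quick way to see why seven is plausible (this is only a heuristic, not the actual bijection): a binary tree is either a leaf or a pair of trees, so formally T = 1 + T². Treating that as an equation over the complex numbers, T turns out to be a primitive sixth root of unity, so T⁷ = T. A small sketch checking the arithmetic numerically:

```python
import cmath

# A binary tree is a leaf or a pair of trees: T = 1 + T^2.
# Solving t = 1 + t^2 over the complex numbers gives
# t = (1 + i*sqrt(3)) / 2, a primitive sixth root of unity.
t = (1 + cmath.sqrt(-3)) / 2

# t really does satisfy the tree equation...
assert abs(t - (1 + t * t)) < 1e-12

# ...and t^6 = 1, hence t^7 = t, hinting at T^7 ≅ T.
# (The real content is that this can be upgraded to an explicit
# bijection on actual trees, which is far less obvious.)
assert abs(t**7 - t) < 1e-12
print("t satisfies t = 1 + t^2 and t^7 = t")
```

The surprising part is not the complex arithmetic but that the formal manipulation can be made to yield a genuine isomorphism of datatypes.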

I guess there is another way to view category theory. As it is the mathematics of structure, in a sense the structure of other branches of mathematics is the content of category theory. It's probably fair to say that one person's structure may be another person's content. But it's not unusual for a lecturer to say of their course "the first few weeks will be spent setting up the definitions and machinery and then we'll get onto the real content..."

And now I can reply to augustss. "Real mathematics" is mathematics where from time to time you prove content. And the reason why I said "it's time to get back to some real mathematics" is that much as I've been enjoying computer science lately, I need to see some non-structural content from time to time.


Labels: mathematics

## 10 Comments:

One way of injecting some specificity into a topic treated by category theory is by looking at initial or free things. For example, the free monoidal category with duals on one object is the category of tangles. Then any monoidal functor to a category of this type will give you an invariant of tangles (and so knots). This is beginning to feel contentful. Even your Hopf fibration will be found nestled fairly low in amongst the n-categories (see p. 29 of this).

Another vaguely 'specific' thing that pops out of category theory that I came across recently is this formulation of Thompson's group F.

There's also a curious 'contentful' fact that I wrote in this Wikipedia entry. It's so unusual for a statement in categorical language (plus a mention of Set Theory) to pin down something as specific as a number like 6 that I felt it warranted a mention on that page. Curiously one of the authors you cite, John Baez, wrote an article about this odd feature of 6 (see references in that Wikipedia entry), though he didn't mention the categorical formulation of the statement. (And JB wrote about the 7-trees in 1 example. There's a pattern here...)

Hmmm...I think that the most curious mathematics is often at the meeting point of the specific and the general. In particular, general definitions that still manage to single out specific objects.

OK, I understand what you're saying.

And in some sense CS is about structural content. You want to make an artefact (a program) that you know does something (fulfills its specification), so there's no content there.

Still, there's some content now and then, like Reynolds' parametricity theorem.

Your comment about the lack of "content" theorems arising out of structure in computer science seems a bit misplaced, or at least ignores much of algorithms and computational complexity research.

To a large degree, algorithms is "all content"; structures are minimal, and most theorems are just surprising results. In complexity theory, there is more structure, but even there, there are many many surprising results, dating back to NP-Completeness itself, all the results about space and time and randomness, and many many more.

augustss,

I don't know the Reynolds parametricity theorem, but now you've mentioned it, it's inevitable that I'll try to read up on it.

Actually, this is great. Now people are going to post all kinds of interesting examples of computer science for me to read. Contentious statements are great for prompting people to say interesting things :-)

You can read Phil Wadler's paper "Theorems for free!".

(Sorry about the multiple posts, my browser and I were having a disagreement. :) )

Could "theorems about emergent properties" be a better term for the "content" theorems?

zorbid: your phrase captures some of what I want to say - but it's a bit of a mouthful!

I can see and agree with what you're saying, sorta. But I'm not convinced that a theorem is either 'structural' or 'content'. Many theorems I have seen were beautiful in 'content', but were later found to be useful as 'structure' (think of Euler's formula, for example). What I'm saying is that many theorems are both structural and content. Moreover, many theorems are neither structural nor content. These are usually the trivial corollaries, sometimes not worth mentioning.

Maybe we should just grade each theorem out of 10, twice...once for 'structure' and once for 'content'.

Just kidding.

I am no mathematician, but when I was in high school I remember thinking that the two kinds of math were descriptive and prescriptive. So calculations that described natural phenomena like geometry, motion over time etc. were descriptive. However things like compound interest prescribe something like what you are willing to pay in order to have the use of someone's money.
