Friday, May 26, 2006

Defining the Reals

There are a number of schemes for constructing the real numbers starting from set theory. The most common approaches involve constructing the rationals from the integers and then either forming Dedekind cuts or Cauchy sequences of rationals. But I recently came across an interesting (and fairly new) scheme that I had never seen before. And instead of going via the rationals it jumps straight from the integers to the reals in a simple and intuitive way.



Suppose you have been given the task of plotting a line with gradient m on a computer screen with square pixels. You are told that for each x position on the screen you need to illuminate exactly one pixel in that column. The picture above should illustrate the sort of thing I mean. You need to define a function f such that for each x you illuminate the pixel (x,f(x)). The problem is that different people will choose f differently. Some people might make f(x)=⌊mx⌋. Others might choose f(x)=⌈mx⌉. Some might round mx to the nearest integer giving various choices for what to do when mx is half an odd integer. As the task has only been specified in terms of a gradient, some people might choose to plot an approximation to y=mx+c for various values of c. All of these people will plot different sets of pixels.



But...there are two things that can be said about all of these schemes:

  1. for a given m, two reasonable people's choices of the function f will differ by at most a bounded amount, and
  2. if two people are given different gradients then, no matter what reasonable schemes they use, their two choices of f(x) will eventually become arbitrarily far apart as you travel along the x axis.


In other words, these crudely specified and rendered plots are a fine enough tool to distinguish between different real gradients m, but they're not so fine that they distinguish more than the real numbers.

And this intuition gives a nice way to define the real numbers. The idea is that of an almost homomorphism on the integers: a function f from the integers to the integers is an almost homomorphism if f(x+y)-f(x)-f(y) is bounded as x and y range over the integers. Clearly the almost homomorphisms form a group under pointwise addition. Amongst the almost homomorphisms are the functions that are themselves bounded, and it's not hard to see that in the plotting scheme I described above these correspond to a gradient of zero.

The reals are simply the group of almost homomorphisms modulo the bounded functions. That's it!
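To make this concrete, here is a minimal sketch in Haskell (my own notation, not taken from Arthan's paper) of how representatives of such reals can be manipulated. A rational p/q is represented by its 'pixel plot' n ↦ ⌊pn/q⌋, addition of reals is pointwise addition of representatives, and multiplication corresponds to composition of almost homomorphisms:

-- A real number is represented by an almost homomorphism on the integers.
type Eudoxus = Integer -> Integer

-- The "pixel plot" of gradient p/q (assuming q > 0); div rounds towards
-- negative infinity, so this is floor(p*n/q).
fromRatio :: Integer -> Integer -> Eudoxus
fromRatio p q n = (p * n) `div` q

-- Addition of reals is pointwise addition of representatives.
addE :: Eudoxus -> Eudoxus -> Eudoxus
addE f g n = f n + g n

-- Multiplication of reals is composition of representatives.
mulE :: Eudoxus -> Eudoxus -> Eudoxus
mulE f g = f . g

-- Two representatives denote the same real precisely when their difference
-- is bounded; for example fromRatio 1 2 and (\n -> (n + 1) `div` 2) differ
-- by at most 1 everywhere, so both represent the real number 1/2.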

For more details and proofs see the paper by Arthan. My line plotting example is a modern rendering (no pun intended) of De Morgan's colonnade and fence example.

I first learnt about this from the Wikipedia and the image above was borrowed from the article on the Bresenham line drawing algorithm.


Wednesday, May 24, 2006

Exceptional Objects

One of my favourite topics in mathematics is the existence of exceptional objects. There have been many articles written on the web about this subject - particularly by John Baez. But I recently came across this article by John Stillwell, in the American Mathematical Monthly, that also gives a historical perspective. The connection with classical geometry - Desargues' theorem and Pappus's hexagon theorem - was also new to me.

Very briefly: mathematics is full of classification theorems. What frequently happens is that objects of a certain type are classified into a bunch of series with a handful of exceptional objects left over. For example the finite simple groups have all been classified and besides the series (such as the cyclic groups of prime order) there are 26 'sporadic' simple groups such as the Monster. The classification of compact Lie groups gives a similar pattern and even the classification of regular polyhedra is similar. Bizarrely, exceptional objects from different branches of mathematics are often related to each other. In fact, the entire classifications from different branches of mathematics are often closely related. And for some reason Dynkin diagrams and octonions seem to play a special role in many of these classifications. The article suggests that the octonions are the "mother of all exceptional objects". Anyway, read the article for more information.

I also found this article by the same author on the history of the 120-cell, one of those exceptional regular polytopes.


Tuesday, May 23, 2006

Quantum Mechanics and Probability Theory

I've been spending a bit of time thinking about quantum mechanics as an 'exotic' probability theory. It's actually very easy to view quantum mechanics this way and what interests me more is figuring out exactly how quantum mechanics differs from conventional probability theory.

Quantum mechanics looks just like probability theory with a slight twist. Instead of assigning probabilities to events you assign 'amplitudes' which are complex numbers. Computations with these amplitudes are carried out very similarly to computations with probabilities. If we use a() to mean 'amplitude' then a(A or B)=a(A)+a(B) and a(A and B)=a(A)a(B) under similar conditions to those in which P(A or B)=P(A)+P(B) and P(A and B)=P(A)P(B) in conventional probability theory. But these rules for a() only apply while the events A and B remain 'unobserved'. When we actually make observations we switch to a different rule where we now assign probabilities using the rule P(A)=|a(A)|² and these probabilities can be given your favourite probabilistic interpretation.

One interesting thing we can do is recast conventional probability theory so that it looks more like quantum mechanics. For example, consider coin tossing with two outcomes, H and T. We can use these states as the basis for a 2D vector space and write them as |H> and |T>. The outcome of a fair coin toss can be written as 0.5|H>+0.5|T>. When we come to 'observe' the coin its state vector 'collapses' to |H> or |T>. Suppose we have two such coins. Then the joint state space is the tensor product of the state spaces for the individual coins. The rule for combining two independent coins into a joint state is the ordinary tensor product. So combining two fair coins gives the state (0.5|H>+0.5|T>)⊗(0.5|H>+0.5|T>)=0.25|H>|H>+0.25|H>|T>+0.25|T>|H>+0.25|T>|T> which can be interpreted as giving the usual probability assignments for fair coin tossing. If we couple the coins together in some way then the joint state might not be a product of individual coin states, ie. we might no longer have independence and their states would be 'entangled'.

Consider a physical process applied to the tossed coin. For example suppose we have a machine that always turns a head over to reveal a tail but only has a 50% chance of turning over a tail. We can think of this as a linear operator mapping |H>→|T> and |T>→0.5|H>+0.5|T>. In this formalism any physical process must be a linear operator given by a stochastic matrix.
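As a toy illustration of this way of talking (my own code with made-up names, not anything from a standard library), here is classical probability dressed up in vector-like clothing: a state is a list of outcomes with weights, joint states are built with a tensor product, and the head-turning machine above is a linear map specified by its action on the basis states |H> and |T>:

type Dist a = [(a, Double)]

data Coin = H | T deriving (Eq, Show)

fairCoin :: Dist Coin
fairCoin = [(H, 0.5), (T, 0.5)]

-- The tensor product of two independent systems.
tensor :: Dist a -> Dist b -> Dist (a, b)
tensor xs ys = [((x, y), p * q) | (x, p) <- xs, (y, q) <- ys]

-- Extend a map given on basis states linearly to a map on whole states.
linear :: (a -> Dist b) -> Dist a -> Dist b
linear f xs = [(y, p * q) | (x, p) <- xs, (y, q) <- f x]

-- The machine from the text: it always turns a head into a tail, but only
-- has a 50% chance of turning over a tail.
machine :: Coin -> Dist Coin
machine H = [(T, 1.0)]
machine T = [(H, 0.5), (T, 0.5)]

-- tensor fairCoin fairCoin gives the four outcomes with weight 0.25 each,
-- and linear machine fairCoin gives H with probability 0.25 and T with
-- probability 0.75 once like terms are collected.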

Anyone who has studied a little quantum mechanics will recognise many of the phenomena above: the tensorial nature of joint systems, the linearity, the entanglement, the 'collapse' of the state on observation and so on as being nearly identical to features of quantum mechanics. So contrary to popular opinion I think I believe that whatever is interesting about quantum mechanics has nothing to do with any of these features. You can even shoe-horn a variant of the many worlds 'interpretation' of quantum mechanics into probability theory by insisting on the primacy of the state vector and pointing out that when a coin is observed the observation is described by a linear map defined by |H>→|H>|Observer sees H> and |T>→|T>|Observer sees T>.

But almost everyone agrees that quantum mechanics is weird. So how does it really differ from probability theory?

The biggest difference is the feature called 'destructive interference'. In conventional probability theory we can combine two states together to make a new state. For example suppose process A generates state |A> and process B generates state |B>. Then we can choose to run A with probability p and B with probability 1-p, resulting in a new state |C>=p|A>+(1-p)|B>. Suppose A and B are coin states. Then the probability of getting a head when observing state |C> is bounded between the probabilities of getting a head in states A and B. This is a kind of convexity condition. If processes A and B are both roughly fair there's no way of choosing p to 'refine' these states to make another one that's more unfair. And here's the big difference from quantum mechanics. In QM we can choose p to be negative (or any other complex number) and escape from this convexity condition. If we have a quantum coin toss, say |A>=0.6|H>+0.8|T> and |B>=0.8|H>+0.6|T> (in QM it's the sum of the squares of the moduli that is one, not the sum of the coefficients themselves, which is why I chose 0.6 and 0.8), then we can combine them into |C>=-12/7|A>+16/7|B> and, after normalising, get a pure |H> state. We have made the |T> terms cancel out. This is an example of destructive interference. (How do we actually carry out this linear map for arbitrary complex numbers? It's actually very easy - see below.)
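Here is a quick numerical check of those numbers, offered purely as an added illustration (the helper names are mine). Amplitude vectors are written as pairs (amplitude of |H>, amplitude of |T>):

import Data.Complex

type QState = (Complex Double, Complex Double)

qa, qb :: QState
qa = (0.6, 0.8)
qb = (0.8, 0.6)

-- Form p|A> + q|B> for arbitrary complex p and q.
combine :: Complex Double -> QState -> Complex Double -> QState -> QState
combine p (h1, t1) q (h2, t2) = (p * h1 + q * h2, p * t1 + q * t2)

qc :: QState
qc = combine (-12/7) qa (16/7) qb
-- qc comes out as (0.8, 0.0) up to rounding: the |T> amplitudes cancel and,
-- after normalising, we are left with a pure |H> state. No convex choice of
-- p between 0 and 1 could have done this.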

One very curious consequence of the above is that we can't ignore low amplitude outcomes. Classically, suppose |A>=(1-e)|H>+e|T> and |B>=(1-f)|H>+f|T>. If e and f are small enough then no matter what convex combination of |A> and |B> we use to make |C> we can safely ignore the possibility of getting a tail. But in QM, suppose |A>=sqrt(1-|e|²)|H>+e|T> and |B>=sqrt(1-|f|²)|H>+f|T>. Then by carefully crafting |C> from |A> and |B> we can make the probability of getting a tail as high as we like.

It's destructive interference that underlies many of the interesting phenomena of QM. For example Shor's factorisation algorithm exploits destructive interference to remove those parts of the quantum computer state that give the wrong answer leaving us with an output that represents a factor.

Non-convexity also leads to other significant consequences. In conventional probability the basis elements |H> and |T> are 'special'. They lie at the extreme 'corners' of the space of possible states and can't be written as stochastic combinations of any other states. But for a quantum coin toss we've shown above that there is no special basis. The state |H> can be written as a linear combination of |A> and |B> just as easily as |A> can be written in terms of |H> and |T>, and this is reflected in the fact that we can make a machine that combines |A> and |B> states to make an |H> state. In fact, this is straightforward to observe in the lab. Instead of considering |H> and |T> consider |+> and |->, which correspond to the spin up and spin down states of an electron. Amazingly, if you rotate the spin up state through 90 degrees to make a "spin left" state you might get something like (|+>-i|->)/√2. (That answers the question I asked above: just rotate the system.) There is no conventional analogue of this. It is this that is the central problem with Schrödinger's cat. Many people think the problem is that the cat's state becomes something like (|alive>+|dead>)/√2 and can't figure out how to assign meaning to this. But this isn't an issue for QM at all; you can formalise conventional probability so that the cat is described by such a vector and nobody has any difficulty with that. The issue with the Cat is that when the cat is observed it appears to 'collapse' into either the state |dead> or |alive>. Why these two basis elements? In conventional probability these vectors are special. But in QM they are no more special than other vectors.
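To put a number on that 90 degree rotation, here is a small check I'm adding, using the standard spin-1/2 rotation about the x axis (the choice of axis and phase convention is mine; the text only says "something like"):

import Data.Complex

type Amp = Complex Double

-- Rotation of a spin-1/2 state through 90 degrees about the x axis,
-- i.e. the matrix (1/sqrt 2) [[1, -i], [-i, 1]] acting on
-- (amplitude of |+>, amplitude of |->).
rotate90 :: (Amp, Amp) -> (Amp, Amp)
rotate90 (u, d) = ((u - i * d) / sqrt 2, (d - i * u) / sqrt 2)
  where i = 0 :+ 1

spinUp :: (Amp, Amp)
spinUp = (1, 0)

-- rotate90 spinUp evaluates to roughly (0.707 :+ 0, 0 :+ (-0.707)),
-- which is exactly (|+> - i|->)/sqrt 2.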

So in summary - I think that entanglement, wave function collapse and a whole host of other phenomena that are presented as specifically quantum phenomena are nothing but irrelevant distractions and aren't specific to QM at all. The interest value in QM comes from non-convexity and the important problem to solve is the preferred basis problem. (There's also the possibly unanswerable philosophical question of why in Heaven's name the universe looks like a stochastic system with complex numbers for probabilities? And if you're a Bayesian, in what way can a complex number possibly be interpreted as a degree of uncertainty?)


Friday, May 19, 2006

Two Kinds of Mathematics

A recent comment by augustss made me realise that I mentally classify mathematics into two different kinds. I wonder if other mathematicians do the same - so here's your chance to post a comment saying whether or not you agree.

It seems to me that mathematical theorems are either structural theorems or content. By structural I mean that they set up some machinery and show that the machinery does what we expected it to do, and content refers to 'surprising' facts about the mathematical universe. (I think 'structural' is a good choice of word here. 'Content' doesn't feel quite right but it's the best I could come up with.)

For example, think about a subject like algebraic topology. We start by learning about point set topology. We might define the product topology and then from that show that the projections from the product space to the original spaces are continuous. These are structural results. They simply tell us that we've defined the product topology in the right way, and the reason it's the right definition is that someone wanted to do topology on the cartesian product of sets and thought hard until they found a way to make this work. They (probably) didn't make a discovery like - hey, there's this amazing topology on the cartesian product that gives continuous projections. They set out to make the product topology in order to prove other stuff, like the way a silversmith or a computer scientist might make tools for themselves for a particular application.

On the other hand, people later started discovering bizarre properties of homotopy groups of spheres. Consider the Hopf fibration which is a beautiful way of building the 3-sphere out of 1-spheres. This was a genuine discovery. It's not just a trivial consequence of the topological machinery but a new result that wasn't plugged in from the beginning. That's what I mean by content.

You invent mathematical machinery, prove structural theorems about it which show that it does what you want, and then you start discovering interesting results with the machinery. In group theory, for example, you might prove structural results like the isomorphism theorems, which really just show that quotients are what you think they should be. And then you start discovering content like the fact that the alternating group A_n is simple for n≥5.

Structural theorems tend to be more universal and feel more 'invented'. Content theorems are more specific and feel more 'discovered'. And I'm really talking about a spectrum rather than a binary classification.

Most branches of mathematics have a mixture of the two types of theorem. Typically the structure theorems are used as a tool to discover content. In some sense the content is the end and the structure is the means. But category theory seems different in this regard - it seems to be mainly about structure. Every time I read category theory I see ever more ingenious and abstract tools but I don't really see what I think of as content. What's hard in category theory is getting your head around the definitions and often the theorems are trivial consequences of these. For example consider the Yoneda lemma as described here. This is a pretty challenging result for beginning category theorists. And yet these notes say: "once you have thoroughly understood the statement, you should find the proof straightforward". This exactly fits with what I'm getting at: the theorem is an immediate consequence of the definitions and the proof essentially shows that the machinery does what you'd expect it to do.
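For what it's worth, here is the standard Haskell rendering of the Yoneda lemma (an illustration I'm adding, not something from the linked notes): for any Functor f, the polymorphic function type forall b. (a -> b) -> f b is isomorphic to f a, and both directions of the isomorphism are one-liners once the statement has been digested.

{-# LANGUAGE RankNTypes #-}

toYoneda :: Functor f => f a -> (forall b. (a -> b) -> f b)
toYoneda x = \g -> fmap g x

fromYoneda :: (forall b. (a -> b) -> f b) -> f a
fromYoneda y = y id

-- fromYoneda (toYoneda x) = fmap id x = x, and the other composite is the
-- identity by naturality (in Haskell, by parametricity), so the two types
-- are isomorphic.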

Computer science (at least the more formal parts) is a lot like structural mathematics. It's about inventing machinery (lambdas, coalgebras, state machines) and then proving that the machinery does what you expect (the diamond property of reductions in lambda calculus etc.). If we compare with group theory the differences become apparent. A mathematician might consider groups in general (general machinery), richer structures built on them such as rings and fields (more machinery) and then specific groups like the Monster or the alternating groups (content). A computer scientist will look at things like untyped lambda calculus, typed lambda calculus, lambda calculus with this or that extension, but they tend not to look at specific instances in the way that there are specific groups.

Very rarely there is a content theorem that follows from the structural theorems about the tools of computer science. Here's one: seven binary trees are isomorphic to one. This is a really neat bit of content. It isn't a direct consequence of the definition of trees. When trees were invented nobody thought "what structure can I invent that captures the notion that 7 of them is isomorphic to just one?". This is a discovery, a new result that wasn't plugged in from the beginning.
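For concreteness, here is the type in question together with the counting heuristic that hints at the result (just a sketch of mine; the explicit bijection itself is too long to reproduce here):

-- Unlabelled binary trees: as a type equation, T = 1 + T*T.
data Tree = Leaf | Node Tree Tree

type SevenTrees = (Tree, Tree, Tree, Tree, Tree, Tree, Tree)

-- Heuristic: pretend T is a complex number satisfying T = 1 + T^2, i.e.
-- T^2 - T + 1 = 0. Then T is a primitive sixth root of unity, so T^6 = 1
-- and T^7 = T. The surprising content is that this formal identity is
-- witnessed by a genuine, explicitly definable bijection between
-- SevenTrees and Tree (Blass's "Seven trees in one").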

I guess there is another way to view category theory. As it is the mathematics of structure, in a sense the structure of other branches of mathematics is the content of category theory. It's probably fair to say that one person's structure may be another person's content. But it's not unusual for a lecturer to say of their course "the first few weeks will be spent setting up the definitions and machinery and then we'll get onto the real content..."

And now I can reply to augustss. "Real mathematics" is mathematics where from time to time you prove content. And the reason why I said "it's time to get back to some real mathematics" is that much as I've been enjoying computer science lately, I need to see some non-structural content from time to time.


Tuesday, May 16, 2006

Monads Without Types

People who have just implemented interpreters and compilers are like people who've just had babies - they become monomaniacs incapable of talking about anything else, and showing you their latest pictures at any available opportunity. But I'm going to spare you the pictures. (Hmmm...does blogger.com support the classification of posts by type?)

My typeless pure functional lazy call-by-name language seems to be working out quite well so I thought I'd talk about two aspects of it:

Firstly: my goal has been to construct something as simple as possible that is as powerful as possible. I've basically got it down to just five instructions: push an integer onto the stack, push a pointer to a function, push a duplicate of the nth item down on the stack, push a closure and tidy up a stack frame. Those 5 instructions are then translated into C. Everything else is provided by libraries (eg. plus, minus, seq) or is part of the runtime (ie. garbage collector and the loop that decides which reduction to perform next). The core compiler is basically ten lines of code - the other 300 lines are mainly the parser that has to deal with the quirkiness of any Haskell-like syntax. Yet with this tiny compiler that compiles to just these five operations I've found it possible to write things like a monadic parser, implement monad transformers, and do simple logic programming using the list monad. I'm astonished by how far you can get with so little programming. The moral is, starting from next to nothing you get a lot more bang for your buck with functional programming. (Sometimes it even competes tolerably, performancewise, with Hugs).
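Purely to fix notation, here is a hypothetical Haskell rendering of those five instructions (the constructor names and field layouts are my guesses, not the compiler's actual representation, which is emitted directly as C):

data Instr
  = PushInt Integer          -- push an integer onto the stack
  | PushFun String           -- push a pointer to a named function
  | PushNth Int              -- push a duplicate of the nth item down the stack
  | PushClosure String Int   -- push a closure (say, a function and an argument count)
  | Slide Int                -- tidy up a stack frame
  deriving Show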

Secondly: I thought I'd mention my handling of monads. The compiler only supports three types - integers, pairs and functions so we have no type inferencing tricks to generate the glue to make monads as easy to use as they are in Haskell. Nonetheless I eventually figured out a way to make monads almost as painless as in Haskell. There were two things I needed to do:

(A) Redefine the semantics of "{" and "}". In Haskell anything between "do {" and "}" is glued together using the bind function, where bind is overloaded. What I did was similar. Anything between "{" and "}" is glued together using a function. But I don't use the bind function. Instead I define "{ <statements> }" to be a function that uses its argument to bind the statements together. It's then easy to define something like:

do f = f bind.

and we end up with code that looks like Haskell's again. Unfortunately this definition is only useful in the presence of polymorphism because it commits us to using the global 'bind' function. So the way to deal with this is to

(B) Reify monads. I simply define a monad to be the pair (bind:return) where bind and return are the bind and return functions for the monad. (I use ':' to build pairs, as in my language non-null lists are simply pairs constructed from the head and tail of the list.) I can then define do as

do monad f = f (head monad).
return monad = tail monad.

So now the price I have to pay for the lack of types is just that I have to parameterise uses of do and return with the monad. This isn't too onerous, for example you get code looking like:

putLine l = if l then
    doIO {
        putChar (head l);
        putLine (tail l)
    }
  else
    returnIO 0.

where I've defined doIO = do IO etc.

It gets a little more complex with monad transformers. As monads are reified, a monad transformer in my language is an actual function that maps a monad to a monad. For example, here's part of the definition of the StateT monad transformer:

returnStateT monad x s = return monad (x:s).
bindStateT monad m k s = bind monad (m s) (\as -> k (head as) (tail as)).
doStateT monad f = f (bindStateT monad).
-- Actually construct the transformed monad.
StateT monad = bindStateT monad : returnStateT monad.

In some ways these definitions are actually simpler than the corresponding Haskell definitions. In Haskell you communicate information to the compiler about which bind and return to use by your choice of type. So you find yourself repeatedly wrapping and unwrapping objects (eg. using runStateT). No such issue here, a sequence of statements in the state monad can simply be applied immediately to the input state.
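For comparison, here is roughly what this dictionary-passing style looks like back in Haskell (a sketch of my own, using RankNTypes; it is not how the post's compiler works internally). A monad is passed around explicitly as its bind and return, and a transformed computation really is just a function from a state to a monadic value that you apply directly to the initial state:

{-# LANGUAGE RankNTypes #-}

-- A reified monad: just its bind and return, passed around explicitly.
data MonadDict m = MonadDict
  (forall a b. m a -> (a -> m b) -> m b)   -- bind
  (forall a. a -> m a)                     -- return

bindD :: MonadDict m -> m a -> (a -> m b) -> m b
bindD (MonadDict b _) = b

returnD :: MonadDict m -> a -> m a
returnD (MonadDict _ r) = r

listDict :: MonadDict []
listDict = MonadDict (>>=) return

-- The transformed return and bind, mirroring returnStateT and bindStateT.
returnStateT :: MonadDict m -> a -> s -> m (a, s)
returnStateT d x s = returnD d (x, s)

bindStateT :: MonadDict m -> (s -> m (a, s)) -> (a -> s -> m (b, s)) -> (s -> m (b, s))
bindStateT d m k s = bindD d (m s) (\(x, s') -> k x s')

-- A couple of state operations (primed names just to mark them as mine).
getT' :: MonadDict m -> s -> m (s, s)
getT' d s = returnD d (s, s)

modifyT' :: MonadDict m -> (s -> s) -> s -> m ((), s)
modifyT' d f s = returnD d ((), f s)

-- A tiny stateful program over the list monad, applied straight to the
-- initial state 5 with no runStateT-style unwrapping; it gives [(6, 6)].
demo :: [(Int, Int)]
demo = bindStateT listDict (modifyT' listDict (+ 1)) (\_ -> getT' listDict) 5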

Anyway, the monad transformer examples I came up with earlier now carry over to my new language like so:

prog = do (StateT State) {
    liftStateT State (modify (plus 1));
    a <- liftStateT State get;
    modifyT State (plus 10);
    b <- getT State;
    liftStateT State $ put (a+b);
    c <- getT State;
    return (StateT State) $ c+1
}.

Note that I chose to make everything explicit. I could easily have made local definitions like lift = liftStateT State to make things look more like Haskell.

Anyway, I'm pretty pleased. With a few hours of coding here and there over a period of two weeks I've written my first compiler in 20 years that produces standalone executables, and it's already at the stage where it wouldn't be hard to port its own compiler to itself. (My previous compiler was an adventure game DSL written in BASIC and based on the one here (though I don't think the acronym 'DSL' existed then). I also implemented Forth on a Psion II a bit later but it's debatable whether you'd call indirect threading by the name 'compilation'.)

I'll probably put a release on my web site some time in the next week. Not that it's actually useful for anything...

And all this computer science is too much. I really must get back to some real mathematics some time soon...

Wednesday, May 10, 2006

Writing a pure functional lazy call-by-name compiler

There are lots of minimal functional language/lambda expression evaluators out there, so instead I thought I'd write one that actually compiled to standalone C. But by accident it seems to have grown and I now have a 'typeless' Haskell, or at least a language with an ML-like grammar, lazy call-by-name evaluation, primitive monad support and only two types - integers and lists. I'm now at the stage where I can compile pieces of code like this:

return a s = a : s.
bind x f s = let vs = x s in seq vs ((f (head vs) (tail vs))).
bind' x f s = bind x (\a b -> f b) s.

contains x = if x
    (if (head x==81) 1 (contains (tail x)))
    0.

getLine = do {
    c <- getChar;
    if (c==10)
        (return 0)
        do {
            l <- getLine;
            return (c : l)
        }
}.

putLine l = if l
    do {
        putChar (head l);
        putLine (tail l)
    }
    (return 0).

prog = do {
    a <- getLine;
    if (contains a) ( do { putLine a; putChar 10 }) (return 0);
    prog
}.

start = prog 666.

This piece of code does the same as "grep Q" and amazingly doesn't leak memory and could probably outrun a snail (just). When I tidy it up and fix the bugs I'll release the compiler on my web page.


It's been one hell of a learning experience. The main thing I've learnt is that those compiler writers are damn clever people. But I also learnt other things like: even the most innocent looking expressions can gobble up vast amounts of memory in a lazy language, it's really hard to debug lazy programs even when you have complete freedom to make your compiler emit whatever 'instrumentation' you want, garbage collection is more subtle than I thought and Haskell has a weird grammar. On the other hand, some things were easier than I expected. Dealing with lambda expressions looks hard at first - but lambda lifting is very easy to implement, so that once you have a way of compiling equations, lambdas come for free. And ultimately it's incredible how much stuff you can get for nothing. The core compiler is minuscule - it knows how to compile one operation - function application. The rest is syntactic sugar or part of the library that the C code links to.
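To give an idea of why lambda lifting is so easy, here is a bare-bones sketch over a toy expression type (my own code, not the compiler's): every lambda is replaced by a freshly named top-level definition applied to the lambda's free variables.

import Control.Monad.State
import Data.List (nub, (\\))

data Expr
  = Var String
  | Lit Int
  | App Expr Expr
  | Lam String Expr
  deriving Show

-- A lifted top-level definition: name, parameters, body.
type Def = (String, [String], Expr)

-- Free variables of an expression, ignoring a given list of global names.
freeVars :: [String] -> Expr -> [String]
freeVars globals e = nub (go e) \\ globals
  where
    go (Var v)   = [v]
    go (Lit _)   = []
    go (App f x) = go f ++ go x
    go (Lam v b) = filter (/= v) (go b)

-- Replace each lambda, innermost first, by a new top-level definition
-- applied to the lambda's free variables.
lift1 :: [String] -> Expr -> State (Int, [Def]) Expr
lift1 globals = go
  where
    go (App f x) = App <$> go f <*> go x
    go (Lam v b) = do
      body <- go b
      (n, defs) <- get
      let scNames = [name | (name, _, _) <- defs]
          fvs     = freeVars (v : globals ++ scNames) body
          sc      = "sc" ++ show n
      put (n + 1, (sc, fvs ++ [v], body) : defs)
      return (foldl App (Var sc) (map Var fvs))
    go e = return e

lambdaLift :: [String] -> Expr -> (Expr, [Def])
lambdaLift globals e =
  let (e', (_, defs)) = runState (lift1 globals e) (0, []) in (e', defs)

-- For example, lifting (\x -> plus x y) 1 with "plus" known to be global
-- yields the expression sc0 y 1 together with the new top-level definition
-- sc0 y x = plus x y.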


My ultimate goal is to get my compiler to emit self-contained C programs that are as small as possible. I'd like to take these programs and make them run on a microcontroller like the one in this robot I built last year. It has 1K of RAM, 8K of program space and runs at 8MHz, which may be enough for a simple light following algorithm. (In assembler it'd be about 100 bytes long and use 2 or 3 bytes of RAM.) I don't want to actually achieve anything useful - I'm too much of a mathematician to want to do that. I just want to construct an example of what I see as an extreme form of computational perversity: a pure language with no side effects having actual side effects in the physical world.

Thursday, May 04, 2006

Grok Haskell Monad Transformers


Another literate Haskell post:

I've tried a few times to read various documents on the web about
Monad Transformers in Haskell. I think that in almost every case the
authors are trying to show how clever they are rather than explaining
their use. If I had just seen the simplest possible examples of Monad
Transformers at work I would have been able to figure out what was going
on and that would have given me enough information to bootstrap myself
into Monad Transforming enlightenment.

So to save other people the trouble I went through I'm providing you
with examples of Monad Transformers at work. I'm not even going to
explain in detail how they work, they're close to self-evident once the
main features are pointed out.

> import Control.Monad.State
> import Control.Monad.Identity

Firstly, here are two examples of the use of the State Monad. This
code isn't intended as a tutorial on the State Monad so I won't
explain how it works.

> test1 = do
>   a <- get
>   modify (+1)
>   b <- get
>   return (a,b)

> test2 = do
>   a <- get
>   modify (++"1")
>   b <- get
>   return (a,b)

> go1 = evalState test1 0
> go2 = evalState test2 "0"

Note how evalState 'unwraps' the State Monad and gives you back the
answer at the end.

So the question is: how do I combine both of these states into one
so that I can update or read either the integer or the string state
at will? I could cheat and make a new state of type (Int,String)
but that isn't a general solution.

The idea is that you use a Monad Transformer. A Monad Transformer
is like a layer of onion peel. You start with the Identity monad
and then use Monad Transformers to add layers of functionality.
So to get a two-state monad you take the Identity and add two
layers of stateness to it. To get the answer at the end you need
to unwrap the State layers and then unwrap the Identity layer too.

When you're inside your 'do' expression you need a way to choose
which Monad you're talking to. Ordinary Monad commands will
talk to the outermost layer and you can use 'lift' to send your
commands inwards by one layer.

The following should now be self-evident:

> test3 = do
>   modify (+ 1)
>   lift $ modify (++ "1")
>   a <- get
>   b <- lift get
>   return (a,b)

> go3 = runIdentity $ evalStateT (evalStateT test3 0) "0"

Note that we use evalStateT to unwrap the State layer instead
of evalState. evalStateT is what you use to unwrap one layer
of the Monad. evalState is what you use when your entire Monad
is just the State monad. When State is a layer, rather than
the whole thing, it's called StateT. (Try :type go3 in ghci.)

What if you want to combine IO and State? For various reasons
the IO monad must form the innermost core of your onion so
there's no need to wrap an IO layer around the Identity,
you can just start with IO. And when you unwrap you don't
need to unwrap the IO layer because the Haskell compiler does
that for you. (Now you can see why you can't layer IO, you're
not supposed to unwrap IO as it's the job of the code that
calls main at the top level to do that.)

So here we go:

> test5 = do
>   modify (+ 1)
>   a <- get
>   lift (print a)
>   modify (+ 1)
>   b <- get
>   lift (print b)

> go5 = evalStateT test5 0

At this point you might be suspicious that IO is being handled
slightly differently from State. So here's a StateT layer
wrapped around State:

> test7 = do
>   modify (+ 1)
>   lift $ modify (++ "1")
>   a <- get
>   b <- lift get
>   return (a,b)

> go7 = evalState (evalStateT test7 0) "0"

You can probably safely ignore my comments and crib the code above
directly for your own use.

And credit where credit's due: I found the following link to be
more helpful than any other in trying to figure this stuff out
http://www.cs.nott.ac.uk/~nhn/MGS2006/LectureNotes/lecture03-9up.pdf
(But I still felt that this link was obfuscating.)

You may now be wondering where test4 and test6 are. So am I.

Update: I just figured out that you don't always need to use the
'lift' operation. Some of the Monad Transformers have been defined
so that the commands will automatically be passed into the inner
Monad (and further) if needed. I think it's slightly confusing to
write code this way, however.

Monday, May 01, 2006

The Most Amazing Development Environment Ever

The latest version of Graham Nelson's Inform compiler for writing interactive fiction (aka text adventure games). By strange coincidence I was just getting back into Trinity (almost finished it) and was toying with the idea of playing with interactive fiction myself. This environment seems like a real pleasure to use, and to get an idea of why it is so amazing, here is some actual compilable code:

The Dining Car is a room. Lord Peter is a man in the Dining Car. Sherlock Holmes is a man in the Dining Car. Miss Marple is a woman in the Dining Car. Adam Dalgliesh is a man in the Dining Car.

Suspecting relates various people to one person.

The verb to suspect (he suspects, they suspect, he suspected, it is suspected) implies the suspecting relation.

Dalgliesh suspects Holmes. Holmes suspects Lord Peter. Lord Peter suspects Holmes. Miss Marple suspects the player.

Exculpating relates one thing to various people.

The verb to exculpate (it exculpates, they exculpate, it exculpated, he is exculpated) implies the exculpating relation.

The silver bullet exculpates the player. The pipe ash exculpates Holmes. The poison pen letter exculpates Lord Peter. The poison pen letter exculpates Miss Marple. [Poor Dalgliesh. I guess he did it.]

The pipe ash, the letter and the silver bullet are carried.

and here is some more

The Oval Office is a room. Josh and Toby are men in the Oval. A height is a kind of value. 5 foot 11 specifies a height. A person has a height. Josh is 5 foot 8. Toby is 5 foot 10.

Height guessing is an action applying to one thing and one height. Understand "guess [someone] is [height]" as height guessing.

Check height guessing: if the noun is not a person, say "You can only guess the height of people." instead. Carry out height guessing: if the height of the noun is the height understood, say "Spot on!"; if the height of the noun is greater than the height understood, say "No, [the noun] is taller than that."; if the height of the noun is less than the height understood, say "No, [the noun] is shorter than that."

Test me with "guess josh is 6 foot 3 / guess josh is 5 foot 9 / guess josh is 5 foot 3 / guess josh is 5 foot 8".

As one might expect from Graham Nelson, it is fully (dare I say, literarily) documented and the error messages are the most verbose (as in useful and readable) error messages I have ever seen from any compiler.