Sunday, April 29, 2007

Homeland Security Threat Level Monad

I've previously introduced the idea of a monad as a kind of one-way wrapper. The Monad interface allows you to wrap things, but not unwrap them again. And it allows you to apply any function you like to a wrapped object, as long as the result stays wrapped. I also indicated how this could be used to model a notion of taint. If you have some data from a tainted source - for example, data that's unreliable - then the trivial monad can do a nice job of keeping track of all the other data that has been tainted by contact with it. What I also want to show is that a wide variety of other monads can be interpreted in terms of taint. This means that despite its simplicity, the trivial monad is always worth keeping in the back of your mind, because it is a prototype for many other monads.

You could imagine using this taint idea as a security model. Any data of type W a is considered secret and a threat to public safety if it is revealed. By sticking to the Monad interface we force the compiler to make sure that any data depending on secret data is itself marked secret. This makes it easy to keep secret data sequestered - at least as long as people only use the Monad interface to W.

But suppose this isn't good enough and we need to track not only which data is secret, but also the threat level to public safety should a secret be revealed. With this in mind, define the Homeland Security Threat Level Monoid:


> data Threat = LOW | GUARDED | ELEVATED | HIGH | SEVERE
>     deriving (Eq,Ord,Enum,Show)


It's a monoid because max is an associative binary operator with identity LOW. For example, max LOW ELEVATED = ELEVATED.
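We can spell this out as an instance (this isn't in the original post, and note that a modern GHC also wants a Semigroup instance alongside Monoid):

> instance Semigroup Threat where
>     (<>) = max

> instance Monoid Threat where
>     mempty = LOW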

We want to tag some of our data with a threat level. In addition, we want to ensure that if we combine two tagged pieces of data, x and y, the combination must be tagged with the worse of the two threat levels. That's why we have the monoid structure that allows us to combine two threats.

So define

> data T a = T Threat a deriving Show


But to use this it appears we have to thread use of the max function all the way through our code. It'll be a nightmare to implement. But there's a solution. What we want is a monad!

Consider LOW to be the default threat level of all data; that tells us how to define return.

Now suppose we have a function f :: a -> T b and some data x :: T a. We want to be able to apply f to x using the monad interface: x >>= f. The threat level of the result should be the max of the two possible sources of threat: x itself or the result of computing f. With that in mind:


> instance Monad T where
>     return x = T LOW x
>     (T t x) >>= f = let T t' x' = f x in T (max t t') x'
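
An aside that isn't in the original post: a modern GHC insists that every Monad also be a Functor and an Applicative, so to compile this today you'd add instances along these lines:

> instance Functor T where
>     fmap f (T t x) = T t (f x)

> instance Applicative T where
>     pure x = T LOW x
>     T t f <*> T t' x = T (max t t') (f x)

Note that <*> combines the two threat levels with max, just as >>= does.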


Notice that we've now implemented threat combination once and for all, in a way that can be used by any code that uses the monad interface. So now we can take last week's definition of h and use it:



> g x y = y >>= (return . (+x))
> h x y = x >>= (\x -> g x y)



Try evaluating h (T ELEVATED 3) (T GUARDED 6). It just works, even though g and h were originally written for the trivial monad! So it's trivial to make code that's written monadically work with our new threat level system. This should give some idea of the power of monads: we've completely changed the semantics of our monadic addition operator, h, without lifting a finger to edit it.
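Here's what I'd expect that evaluation to look like in a GHCi session (modulo prompt formatting):

*Main> h (T ELEVATED 3) (T GUARDED 6)
T ELEVATED 9

The payload is 3+6 and the tag is max ELEVATED GUARDED.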

But notice that I didn't need to talk about threats at all. I've shown how to tag data with other data that's implicitly carried along with it. All we needed was a rule for combining pairs of tags. Our tags were of type Threat. Instead we could have used strings and instead of max we could have used ++. If we had, then we'd have a way of implicitly carrying around logs with our computations. In fact, all I have done here is implement a special case of the Writer monad. But I still think that thinking in terms of data that is quarantined in some way helps with the intuition.
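Here's a sketch of that variation (the code and names here are mine, not from the original post). It's the Writer monad specialised to String logs, with "" playing the role of LOW and ++ playing the role of max:

> data Logged a = Logged String a deriving Show

> instance Functor Logged where
>     fmap f (Logged s x) = Logged s (f x)

> instance Applicative Logged where
>     pure x = Logged "" x
>     Logged s f <*> Logged s' x = Logged (s ++ s') (f x)

> instance Monad Logged where
>     return x = Logged "" x
>     Logged s x >>= f = let Logged s' x' = f x in Logged (s ++ s') x'

Running h (Logged "three " 3) (Logged "six " 6) should give Logged "three six " 9 - the logs are concatenated behind the scenes.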

I hope that by starting with the trivial monad last week, and upgrading in a natural way to the Homeland Security Threat Level monad, I've given an easy way to think about monads. Our ordinary intuitions about taint, for example that if a clean object touches a tainted one it becomes tainted itself, carry over to monads. And the Writer monad isn't the only one that can be thought of in these terms. I hope at some point soon to look at the Maybe and list monads in this way too.


Saturday, April 28, 2007

The Trivial Monad

Update: I've added some solutions to exercises below.

In Haskell, monads are captured by a type class. And Haskell type classes are essentially interfaces shared by some types. So the Monad type class is a common API for talking to a bunch of different types. The question, then, is this: what's so special about this API?

One way to grasp an API is to see concrete examples. Usually the monad API is introduced through containers or IO. But I think that the prototypical monad through which all others can be understood is much simpler - it's the trivial monad.

APIs often capture design patterns, and the design pattern here is a one-way wrapper. We can wrap things but we can't unwrap them. We still want to be able to do whatever we like to the wrapped data, but anything that depends on anything wrapped should itself be wrapped.

Without further ado, here's a wrapper type:


data W a = W a deriving Show


(The Show bit is just so we can play with these things interactively.)

Note how it doesn't add anything except some wrapping. And the first thing we need is to be able to wrap anything we like. We can do that easily with this function:


return :: a -> W a
return x = W x


(We could have just used W instead of return. But I'm heading towards a common API that will work with other types too, so it obviously can't be called W.)

And now we need one more thing - a way to manipulate wrapped data leaving it wrapped. There's an obvious idea. Given any function a -> b we write a function that converts it to a function W a -> W b. This is guaranteed to keep things under wraps. So here goes


fmap :: (a -> b) -> (W a -> W b)
fmap f (W x) = W (f x)

It seems that we've finished. For example we can wrap a number and then increment it:


a = W 1
b = fmap (+1) a
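
Evaluating b at the GHCi prompt should print W 2.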


But here's something we can't do. Define a function f like this:


f :: Int -> W Int
f x = W (x+1)


It increments and returns a wrapped result. Now suppose we want to apply the underlying operation here twice, i.e. we'd like to increment a number twice and return a wrapped result. We can't simply apply f twice because (1) it'll doubly wrap our result and (2) it'll attempt to add 1 to a wrapped result, something that doesn't make sense. fmap doesn't do what we want either. Try it in an interactive Haskell session. It seems we need a way to apply f, unwrap the result and apply f again. But we've already said that we don't want to allow people to unwrap these things. So we need to provide a higher order function that does the unwrapping and application for us. As long as this function always gives us back something that is wrapped, our end users will never be able to unwrap anything. Here's an idea for such a function:


bind :: (a -> W b) -> (W a -> W b)
bind f (W x) = f x


Notice how it's very similar to fmap but is even simpler. So now we can try doubly, or triply, incrementing


c = bind f (f 1)
d = bind f (bind f (f 1))
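
For the record, c works out to W 3 and d to W 4.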


Notice how bind is more general than fmap. In fact, fmap f = bind (return . f): applying the right hand side to W x gives (return . f) x = W (f x), which is exactly fmap f (W x).

And that's it. Using return and bind we have achieved our goal of wrapping objects and freely manipulating wrapped objects while keeping them wrapped. What's more, we can chain functions that wrap without getting bogged down in multiple layers of wrapping. And that, really, sums up what a Haskell monad is all about.

So here are a couple of exercises:

(1) define a function g :: Int -> W Int -> W Int so that g x (W y) = W (x+y). Obviously that definition won't do - the left hand side has a W y pattern so it's actually unwrapping. Rewrite this function so that the only unwrapping that happens is carried out by bind.

(2) define a function h :: W Int -> W Int -> W Int so that h (W x) (W y) = W (x+y). Again, no unwrapping.

I'm hoping that after you've done these exercises you'll see how you can still work freely with data even though it's wrapped.

In Haskell, it is more usual to use the operator >>= instead of bind where bind f x = x >>= f.
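
For example (this restatement isn't in the original post), once W is made an instance of Monad as below, c and d could equally be written

c = f 1 >>= f
d = f 1 >>= f >>= f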

So the last question is this: why would you ever wrap data like this? In practice people tend not to use the trivial monad very much. Nonetheless, you can see how it might be used to represent tainted data. Wrapped data is considered tainted. Our API never lets us forget when data is tainted and yet it still allows us to do what we like with it. Any time we try to do anything with tainted data the result is also tainted, exactly as we might expect. What I find interesting is that almost every monad, including IO, lists and even probability, can be thought of quite naturally as variations on taint. I hope to say more about this in the near future.

Anyway, the code above fails to compile because of namespace clashes with the Prelude's return and fmap. Here's a complete definition of W that really works. Note also that fmap, which we showed we don't really need, allows us to make W an instance of Functor too:


> data W x = W x deriving Show

> instance Functor W where
>     fmap f (W x) = W (f x)

> instance Monad W where
>     return x = W x
>     W x >>= f = f x
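
One more note that isn't in the original post: a modern GHC requires an Applicative instance before it will accept the Monad one. Something like this works:

> instance Applicative W where
>     pure x = W x
>     W f <*> W x = W (f x)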



Exercise 3: Prove the three monad laws for W, namely return x >>= f = f x, m >>= return = m, and (m >>= f) >>= g = m >>= (\x -> f x >>= g). This should be almost trivial.

Exercise 4: We can't completely unwrap things using the monad API. But we can unwrap one layer from things that are wrapped twice. So here's a nice puzzle: define a function join :: W (W a) -> W a using the Monad API and no explicit unwrapping.

In conclusion, most monad tutorials show what monads are, and how to use them. I hope I've given some additional insight into why the Monad interface consists precisely of the two functions it does. That insight should become clearer when I say more about taint.

PS I've been a bit busy lately with planning for kitchen remodelling, lots of work and a new cat(!). I probably won't be posting very frequently for a few months.




Solutions to Exercises


Some people have found the exercises tricky so I'm posting some solutions. I think it's a good sign that they're tricky: it means that there is some non-trivial stuff to be learnt from playing with this monad. It ought to figure more strongly in tutorials. In a sense it allows people to learn about monadicity in its purest form.

Anyway, enough of that.

Exercise 1: We want g :: Int -> W Int -> W Int so that g x (W y) = W (x+y). Start the definition like this

g x y = ...

We want to apply (+x) to y. But y is wrapped so we must use >>= to apply it. But (+x) doesn't have the right type to be applied with >>=: it's an Int -> Int, not an Int -> W Int. So we must compose with return and we get

g x y = y >>= (return . (+x))
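
As a quick sanity check, g 1 (W 2) = W 2 >>= (return . (+1)) = return 3 = W 3, as required.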


Exercise 2: We want h :: W Int -> W Int -> W Int so that h (W x) (W y) = W (x+y). But g is already fairly close. So again write

h x y = ...

The difference from g is that x is wrapped. So we want to apply \x -> g x y to our wrapped value. This function is already of type Int -> W Int. So we can just write

h x y = x >>= (\x -> g x y)

or just h x y = x >>= flip g y.

Exercise 3: I'll just do the last one: (m >>= f) >>= g = m >>= (\x -> f x >>= g)


m is of the form W x for some x. For the left hand side:


(W x >>= f) >>= g
= (f x) >>= g

And for the right hand side:

W x >>= (\x -> f x >>= g)
= (\x -> f x >>= g) x
= f x >>= g


So the LHS equals the RHS.

Incidentally, you can get a long way with these problems by not thinking about what's going on! Instead, just write out the type signatures of all of the functions involved and try to stick them together like a jigsaw puzzle so that the final result has the right type. At each stage the arguments of each function must match the signature of the function so it really is like fitting the shapes of jigsaw pieces together. This isn't a foolproof strategy, but it often works.

Anyway, feel free to ask questions if you're still stuck. Some of your questions may be answered here.


Tuesday, April 10, 2007

The curious rotational memory of the Electron, Part 2

A spinning top keeps going because it has angular momentum, and the angular momentum arises because the top is made up of mass that is distributed around the axis of rotation. The angular momentum is represented by a vector (well, a pseudovector really) directed along the axis of spin (at least when it's spinning around the top's axis). Similarly, electrons have angular momentum. It's tempting to think of electrons as microscopic spinning tops, but the angular momentum of electrons isn't due to an extended mass rotating about some axis. In fact, as far as we know, electrons have no extension. Instead their angular momentum is a peculiar kind of intrinsic angular momentum that has a lot in common with ordinary angular momentum - e.g. it's conserved and it's interconvertible with ordinary angular momentum. Because it's not associated with a mass distribution it's given the special name of spin, but as long as you bear in mind these provisos it's probably fair to use at least some intuitions about spinning tops when thinking about electrons.

But it's a little more complex than this. We also have quantum mechanics to contend with. The spin of an electron is a vector. But we find that when we measure one of the components of this vector, the value is quantised and can only take the values +hbar/2 and -hbar/2, where hbar is the reduced Planck constant. We choose units where hbar is 1, so the z-component of the spin is always measured to be +1/2 or -1/2. If we write these two states as |+> and |-> then, because we are dealing with quantum mechanics, the spin state can be represented by the linear combination a|+>+b|->. This corresponds to a state in which there is a probability |a|² of measuring +1/2 and a probability |b|² of measuring -1/2. This is what might have been written as a.*return (1/2)+b.*return (-1/2) in my earlier Haskell code. But that's just one component of the spin. What about the x- and y-components? Amazingly the state a|+>+b|-> tells us everything we can possibly know about the spin of an electron, and we'll call it a spin state.

Suppose we have an electron in the state ψ = a|+>+b|->. What happens if we measure the y-component of its spin? One way to answer that question is to rotate the electron through π/2 so that its y-axis is rotated to align with the z-axis, and then measure the z-component of its spin. In order to do that we need to know how to rotate spin states. The rule for rotation through θ about the x-axis is this (in a suitable coordinate frame):

|+> → cos(θ/2)|+>-sin(θ/2)|->
|-> → sin(θ/2)|+>+cos(θ/2)|->
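
If it helps to see this rule concretely, here's a little Haskell sketch of my own (it's not from the original post) that applies the rule to a pair of amplitudes (a, b) standing for a|+>+b|->. I'm keeping the amplitudes real because this particular rule doesn't mix in any imaginary parts:

-- Rotate the spin state a|+> + b|-> through theta about the x-axis,
-- using the rule above: |+> contributes cos(θ/2)|+> - sin(θ/2)|->
-- and |-> contributes sin(θ/2)|+> + cos(θ/2)|->.
rotateX :: Double -> (Double, Double) -> (Double, Double)
rotateX theta (a, b) =
    ( cos (theta/2) * a + sin (theta/2) * b
    , cos (theta/2) * b - sin (theta/2) * a )

For example, rotateX 0 is the identity and rotateX pi (1, 0) gives (0, -1) up to floating-point rounding, and you can use it to experiment with the values of θ discussed below.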

Note how choosing θ=0 gives the identity, as expected. Note also that θ=π maps a|+>+b|-> to b|+>-a|-> so that the probabilities of measuring +1/2 and -1/2 are simply swapped, exactly what you'd expect for turning a state upside down. But there's something else that you should notice - there's an ambiguity. A rotation through 2π should give the same as a rotation through 0 and yet setting θ=2π in that transformation maps a state ψ to -ψ. Now |a|² = |-a|² so the probability of observing spin up or spin down is unaffected. But as I've been showing over previous posts, flipping a sign in a state can make a big difference as soon as you start performing interference experiments. The same goes for any angle: if I rotate through π should I use θ=π or θ = 3π? So can the transformation I've given make sense?

The transformation does make sense if you consider that in any physical process that rotates an electron, the transformation will evolve continuously over time. Electrons don't just instantly rotate. In other words, if a rotation is applied to an electron then it will follow a path in SO(3), not just be an instantaneous application of an element of SO(3). And that allows us to resolve the ambiguity: the rotations of electrons are described by the double cover of SO(3) known as SU(2). So a rotation through 360 degrees doesn't return you to the identity although a 720 degree rotation does. The transformation I gave above is completely unambiguous if you continuously rotate an electron around the x-axis tracking a continuous value of θ; after all, the double cover is basically just the set of continuous paths from the identity in SO(3) (with homotopic paths considered equivalent).

And that's the bizarre fact: electron rotations aren't described by SO(3), they're described by SU(2). In particular, rotating an electron through 360 degrees does not return it to its original state, but a rotation through 720 degrees does! In a sense, like Dirac's belt, electrons can remember something about the path they took to get where they are, in particular they remember how many twists there were in the path.

What does this mean experimentally? The first thing to note is that this is true not just for electrons but for any spin-1/2 fermion, which includes protons and neutrons. The stuff I've been talking about manifests itself in a number of ways. In particular, the spin of a particle affects how a magnetic field acts on it. For example, spin-up and spin-down particles can be separated into distinct beams using a Stern-Gerlach apparatus. Also, the spin of particles precesses in a magnetic field, and this is used on a regular basis in NMR. These two facts allow us to easily manipulate and measure the spin of fermions. In other words, the fact that fermions remember how many twists there are in their rotations isn't just some esoteric nonsense - it's now engineering, and the theory is tested repeatedly all over the world.

Every familiar object is invariant under rotations through 360 degrees. So the fact that electrons need to be rotated through 720 degrees to return them to their original state seems like one of the most bizarre facts about the universe I know of. And yet many books that introduce spin just slip in this fact in a routine way as if it were no different to any other.

The fact that the biggest connected cover of SO(3) is the double cover puts a big constraint on the kinds of weird effects like this that can happen. We can have a 360 degree rotation multiply by -1, but not by i, because a 720 degree rotation absolutely has to return us to where we started from. But suppose the universe were 2-dimensional. If you remember what I said about SO(2) you may notice that no such constraints apply, because SO(2) has an infinite cover. There is a group in which all of the rotations through 360n degrees are distinct for distinct n. This means that a physical system could have its state multiplied by any factor (of modulus 1) when rotated through 360 degrees. Particles that behave this way are called anyons. But we live in a 3D universe so we don't expect any fundamental particles to have this property. However, in quantum mechanics any kind of 'excitation' of a physical system is quantised and can be thought of as a type of particle. These are known as quasiparticles. For example, just as light is made of photons, sound is also quantised as phonons. In the right kind of solid state medium, especially one that arises from some kind of 2D lattice, it seems quite plausible that anyons might arise - indeed, the quasiparticles of the so-called fractional quantum Hall effect are believed to be anyons. Anyons might one day play an important role in quantum computing via topological quantum computation.
