Friday, July 28, 2006

Using markets to aggregate information

If you have a wide variety of sources of probabilistic information, how do you aggregate them? From the Microbe World podcast (if you're wondering - yes, I do listen to a regular podcast about microbes!) I learnt that people who need to predict outbreaks of flu have taken an interesting approach to this problem.

Researchers at the University of Iowa have set up electronic markets that allow experts in a field to trade on their beliefs. By observing how much people are prepared to bet on various outcomes, we get an aggregated expert opinion of how likely those outcomes are. Apparently these markets perform fairly well as predictors of the future. One of them is the Influenza Prediction Market, where you can buy stock in your favourite influenza strain.
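The Iowa markets themselves are real-money double auctions, but the core idea - trades dragging a price toward an aggregated probability - is easy to illustrate with a different, simpler mechanism: Hanson's logarithmic market scoring rule. Here's a minimal sketch (the liquidity parameter b and the trade size are made up for illustration):

```python
import math

def cost(q_yes, q_no, b=100.0):
    """LMSR market maker's cost function; a trade pays cost(after) - cost(before)."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price(q_yes, q_no, b=100.0):
    """Instantaneous price of the YES contract = the market's implied probability."""
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

# Before any trades the market implies 50/50.
p0 = price(0, 0)

# A trader who believes YES is underpriced buys 100 YES shares...
fee = cost(100, 0) - cost(0, 0)

# ...and the implied probability rises to roughly 0.73.
p1 = price(100, 0)
print(p0, p1, fee)
```

Each trade moves the price exactly as far as the trader is willing to pay for, so the standing price is an aggregate of every participant's beliefs, weighted by how much money they are prepared to stake on them.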

On a related note, the Foresight Exchange has been running for many years now. Click on "claims" and select "Science & Technology:Math" to bet on the likelihood of the Collatz Conjecture or the Riemann Hypothesis.


Monday, July 24, 2006

"I love the Hahn-Banach Theorem"

No, not me. I'm just quoting:

I love the Hahn-Banach theorem. I love it the way I love Casablanca and the Fontana di Trevi. It is something not so much to be read as fondled.

Thus begins the abstract to the eulogy "On the Hahn-Banach Theorem" by Lawrence Narici.

There's not much I can add to that. Have you ever felt this way about a theorem?

(I have a virulent dislike of analysis but I'm going to see if this paper can convert me.)


Thursday, July 20, 2006

What is a Saros?

Until recently I thought that solar eclipse prediction involved quite a bit of celestial mechanics. Not so: it turns out you can get a pretty good handle on when eclipses are likely to occur just by considering the periods of various cycles and using a simple continued fraction approximation.

At any instant, the orbit of the Earth around the Sun lies in a plane. Clearly the Moon must lie in (or very near) this plane for an eclipse to take place.

The orbit of the Moon around the Earth also lies in a plane. A different one, in fact. The intersection of these planes forms a line. The two points where this line meets the orbit of the Moon are known as nodes. In order to have a solar eclipse the Moon clearly must lie near a node.

As the Moon orbits it travels from a node back to the same node in 27.21222 days (a draconic month).

A solar eclipse can only take place at a new moon. New moons take place every 29.53059 days (a synodic month).

Solar eclipses will take place each time the cycles of a half-draconic month (there are two nodes) and a synodic month coincide.

The ratio of the two periods is approximately 2.170391, so we can get an idea of when the cycles will coincide by constructing rational approximations to this ratio. We can use continued fractions to form them, and one of the convergents is 484/223. That is, 223 synodic months very nearly equal 484 half-draconic months, i.e. 242 draconic months; the two cycles agree to within about an hour, and both come to 6585 and a third days (approximately 18 years). This time interval is known as a saros.
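As a sanity check, here's a short Python sketch of that continued fraction calculation (my own illustration, not from any eclipse-prediction software): it expands the ratio of a synodic month to a half-draconic month and recovers the convergent 484/223, i.e. 223 synodic months against 242 draconic months.

```python
from fractions import Fraction

SYNODIC = 29.53059    # days, new moon to new moon
DRACONIC = 27.21222   # days, node to node

def convergents(x, n):
    """Yield the first n continued-fraction convergents of x as Fractions."""
    h, h_prev = 1, 0   # numerators:   h(-1) = 1, h(-2) = 0
    k, k_prev = 0, 1   # denominators: k(-1) = 0, k(-2) = 1
    for _ in range(n):
        a = int(x)
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        yield Fraction(h, k)
        if x == a:
            break
        x = 1 / (x - a)

ratio = SYNODIC / (DRACONIC / 2)   # ~ 2.170391
for c in convergents(ratio, 8):
    print(c)                       # the 8th convergent is 484/223
```

223 synodic months come to 223 × 29.53059 ≈ 6585.32 days, and 242 draconic months to 242 × 27.21222 ≈ 6585.36 days: the saros.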

So, if a solar eclipse has just taken place then, to a good approximation, we can expect another eclipse exactly one saros later. What's more, because a saros is a whole number of days plus a third of a day, the Earth will have rotated an extra third of a turn by then, so the subsequent eclipse will be shifted about 120 degrees of longitude to the west. (Note that solar eclipses happen more often than once a saros, but eclipses separated by a saros are interesting because they form a regular sequence, to a good approximation.)

But every rational approximation to the ratio given above will give some kind of approximate eclipse cycle, so why focus on the saros? The saros has another interesting property. The Moon's orbit is elliptical. This allows us to define another cycle: the time taken for the Moon to travel from perigee (its closest approach to the Earth) back to perigee. This is the period over which the Moon-Earth distance repeats, and it is known as an anomalistic month. It turns out that one saros is almost exactly 239 anomalistic months. When the Moon is closest to the Earth it looks bigger than the Sun, and when it is furthest it looks smaller. This makes the difference between a total eclipse, where the Moon completely occludes the Sun, and an annular eclipse, where an annulus of the Sun is visible. Because a saros is close to an integer multiple of this period, solar eclipses separated by a saros are likely to be of the same type.

According to Wikipedia this time period is named after a Babylonian word because the Babylonians were aware of this cycle. I'll believe that when someone tells me how the Babylonians knew that the eclipses were taking place one third of the way around the world.

Armed with this knowledge I must read how Stonehenge can be used to predict eclipses.

Friday, July 14, 2006

Do Particles Exist?

In quantum mechanics, the state of a particle is given by an element of a vector space - typically something like a Hilbert space. But what happens if you want to investigate multiple particles and the number of particles may change over time? Then you need to use quantum field theory. Suppose we're dealing with one type of particle. The state space now looks like

V = V0 ⊕ V1 ⊕ V2 ⊕ …

where vectors in Vn describe states with n particles.

Now, here I'm going to start getting out of my depth. But I'm sure people out there can correct my errors. And I'm treading dangerously by rephrasing things a little differently from how they appear in any of the books or papers I have read.

Each of these Vns carries a representation of the Poincaré group. This means that if we apply a translation, rotation or boost to a vector in Vn we get another vector in Vn. So rotating, boosting or translating an n-particle state just gives you another n-particle state. The upshot of this is that all inertial observers can agree on how many particles a state represents.

But suppose now that we accelerate our state. We map our underlying spacetime with a function f so that if p(t) is the worldline of an inertial observer, f(p(t)) is the worldline of an observer undergoing constant acceleration, say a. f induces a linear mapping on V. The details of the computation are a bit messy, but essentially what happens is that an n-particle state is now mapped to a state that is a linear combination of elements from all of the Vi. In particular, elements of V0 end up mapping to states with particles. Let me quote Wikipedia:

the very notion of vacuum depends on the path of the observer through spacetime. From the viewpoint of the accelerating observer, the vacuum of the inertial observer will look like a state containing many particles

These particles are called Unruh radiation. But I find this notion bizarre. How can the number of particles depend on the observer? And how does this look in practice? Well suppose we're in a vacuum and someone called Fred accelerates past with a particle detector. Fred's detector will start beeping to indicate the presence of particles even though I don't get any beeps with my detector. I will in fact see Fred's detector act like it's detecting particles. In other words, although Fred and I might disagree over how many particles occupy this region of space, we can both agree on what the detector is doing. This isn't a big deal at all: we're used to the idea of non-inertial instruments acting funny, just try using a pair of scales in an accelerating car.

So here's my conclusion from all of this: either

1. It makes no sense to interpret the detection of particles by the accelerating detector as indicating that the vacuum contains particles. We can't trust the readings from a non-inertial particle detector without some kind of correction for acceleration. This is completely familiar, lots of other kinds of instruments fail when non-inertial. The actual definition of the number of particles in a region of space is chosen so as to correspond with what an inertial detector sees.
2. The notion of particle, separate from that of a detector, is meaningless. We just have a Hilbert space of states and instruments that go beep under certain circumstances and we can predict when these instruments go beep by looking at properties of the Hilbert space.

Most physicists seem to use another option:

3. The number of particles depends on the frame of reference in which it is measured.

Anyway, things get even trickier. If you look at how V is split up into n-particle subspaces, it turns out that the definition of this splitting depends on a choice of direction for time. In a flat spacetime we can pick any timelike direction because they're all related by Lorentz transforms, so they all give the same results. But in a curved spacetime it's not so easy. Again we end up with an ambiguity in the number of particles, but this time (at least near a black hole) it's called Hawking radiation. I interpret this as meaning we have to take option 2: particles simply aren't a well-defined concept, except as an approximation, in a curved spacetime. However, most physicists still take option 3. I'm happy with option 2 because I see quantum mechanics as being primarily about vectors in a state space, not about particles. Physicists seem happy to take option 3 even though I think it's nonsensical. They're used to the idea of a quantity that is frame-dependent, e.g. the x-coordinate of a vector, and feel that it's fine to extend this notion to integer-valued properties such as particle number.

But what do I know? I'm not an expert in this field. I did try to study it properly many years ago, e.g. by reading Wald's book on black hole thermodynamics. I found the book mostly clear, but I kept running into objections to the physics, something I hadn't felt with any of the physics I had studied previously, including wacky stuff like renormalization.

Anyway, I don't have anything to contribute to this subject, I just thought I'd mention it because people might find it interesting. I discussed it a little bit with someone who knows a lot more about this subject over at Reality Conditions and I now have a bunch of papers to read on the subject. Unfortunately the setting for much of this work is C*-algebras and the like so I need to swot up on all that stuff first.


Wednesday, July 12, 2006

Iannis Xenakis and Formalized Music

John Baez recently sparked quite a bit of discussion of music and mathematics over on sci.physics.research. But nobody seems to have mentioned the composer that many see as the most rigorously mathematical of all: Iannis Xenakis. Born in Romania with Greek ancestry and eventually adopted by the French, Xenakis started his career as a civil engineer and architect and only later turned to music.

I have listened to three CDs by Xenakis: Music for Strings, Persepolis and La Légende d'Eer. Of these, Music for Strings is the collection of pieces that comes closest to the usual notion of music, in that notes are played at various pitches on conventional instruments. Probably its most distinctive features are the wild glissandi flying in all directions (to use a spatial metaphor). The other two discs consist of hour-long pieces that sound superficially like unpleasant extended accidents in a junkyard.

I decided to try to find out in what way these works are mathematical. After much searching on the web I found a paper by Edward Childs describing part of Xenakis's stochastic composition process. Apparently Xenakis made explicit use of four probability distributions in the composition of a piece called Achorripsis. I'll concentrate on three of these, as I believe there is a way to drastically simplify Childs's description of them while combining them into one single scheme.

Xenakis composed his piece by creating a grid of 28 columns and 7 rows. Each row represents a group of instruments and each column represents a time period. Xenakis created a number of musical events and stochastically assigned these to cells in the grid. Within each grid cell he also chose the pitch of each event and the timing between events using stochastic methods. In particular, he generated the number of events in a cell using a Poisson distribution, the timing between events using an exponential distribution, and the pitch of events using a uniform distribution. The composition dates from 1957, which prompted Childs to say:

The preparation of the score was a remarkable feat considering that he worked without the help of a computer, but calculated all distributions, and their musical implementation, by hand.

Now, suppose that Xenakis had taken his grid and pinned it up on the wall. If he then stood some distance from the grid, blindfolded (actually, Xenakis would only have needed an eye patch), and had thrown darts at the grid, what distributions would we see? If he was far enough away and blindfolded, there would be no bias towards one part of the grid or another, so the darts that hit the grid would be uniformly distributed. The number falling in each cell would have a Poisson distribution, the spacing between successive darts along a horizontal axis would have an exponential distribution, and the heights of the darts would be uniformly distributed. In other words, Childs's description is entirely consistent with Xenakis having generated a large part of his composition with darts, with a single dart simultaneously generating the three random variables desired. So much for formalized music.
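This is easy to check by simulation. The grid dimensions below are the ones from Childs's description; everything else (the dart count, the random seed) is made up for illustration. Throwing uniform "darts" at the grid reproduces all three distributions at once:

```python
import random

random.seed(0)
COLS, ROWS = 28, 7     # Xenakis's grid: 28 time columns by 7 instrument rows
N_DARTS = 2800

# Blindfolded throws: positions uniformly distributed over the grid.
darts = [(random.uniform(0, COLS), random.uniform(0, ROWS))
         for _ in range(N_DARTS)]

# 1. Counts per cell: approximately Poisson with mean N_DARTS / (COLS * ROWS).
counts = [0] * (COLS * ROWS)
for x, y in darts:
    counts[min(int(y), ROWS - 1) * COLS + min(int(x), COLS - 1)] += 1

# 2. Horizontal spacings between successive darts: approximately exponential.
xs = sorted(x for x, _ in darts)
gaps = [b - a for a, b in zip(xs, xs[1:])]

mean_count = sum(counts) / len(counts)   # = 2800 / 196, about 14.3
mean_gap = sum(gaps) / len(gaps)         # roughly COLS / N_DARTS = 0.01
# For an exponential distribution, about 1 - 1/e (63%) of gaps fall below the mean.
below_mean = sum(g < mean_gap for g in gaps) / len(gaps)
print(mean_count, mean_gap, below_mean)
```

(The third distribution is free: the y-coordinates of the darts are the uniformly distributed pitches.)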

Let me quote another composer, Pierre Schaeffer, quoted in Childs's paper:

As far as Xenakis is concerned, let me emphasize at once that I'd be much more interested in his research if he hadn't set out so obviously to reduce its accessibility and its credibility in a manner which is immediately apparent as soon as you open his book on formal musics.

Was Xenakis using mathematics to hide his composition methods from other, less mathematically savvy, composers?

Nonetheless, whatever his composition methods, after listening to Xenakis I seem to be finding other types of music to be far too clichéd and predictable. I think I have developed a liking for this composer. If you hear what seems to be the sound of pneumatic drills, thermal lances and scraping metal as you drive over the San Francisco Bay Bridge in the morning, look around, it might not be the bridge retrofit, but instead me commuting to work listening to one of Xenakis's electroacoustic works.

(PS If you're wondering, the fourth distribution, that I omitted, was the Maxwell-Boltzmann distribution, which Xenakis used to generate the speeds of the glissandi.)


Wednesday, July 05, 2006

Even More Numb3rs

There are so many blogs about the TV series Numb3rs out there that it hardly seems worth writing about it myself, especially as it has now been renewed for its third season. But when I talk to my mathematically inclined colleagues, very few of them have actually watched the series. Mathematics on TV is so incredibly rare that I would have thought they would jump at the chance to see a popular TV show with high mathematical content. So clearly word isn't getting out, and I'm going to talk about it anyway.

Numb3rs is yet another FBI series with the protagonists solving crimes. What makes it different is that Charlie, the brother of the lead FBI agent, is a mathematician who consults for the agency. What's astonishing about the series is that week after week, Charlie uses mathematics to solve crimes. He's a mathematical crime fighting superhero. What's more, I've watched most of the first series, and the mathematics used in each episode actually has a degree of plausibility.

You might not be convinced. So here's a quick summary of the plot of the pilot episode (on the DVD of season one):
<SPOILER>

Charlie tries to track a serial killer by fitting a probability distribution to the attacks, with the assumption that the killer lives at the 'centre' of the distribution. It fails to produce a lead. But then Charlie has the flash of inspiration that he should be looking for a bimodal distribution with two peaks. He reworks the data and finds both where the killer lives and where he works. This is prime time TV. 10pm Friday night (where I live). We have a TV show where the plot hinges on how many local maxima a probability density function has. I don't know about you, but I find this quite unbelievable. But amazingly, this is a real TV series.
</SPOILER>
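For flavour, here's a toy sketch of why the number of modes matters. This is my own made-up illustration, not the show's method, and all the numbers are invented: with attacks clustered around two anchor points, a single centre of mass lands uselessly between them, while a two-means fit under the bimodal assumption recovers both.

```python
import random

random.seed(1)

# Hypothetical attack locations in 1-D (invented numbers): clusters around
# the killer's home and workplace.
HOME, WORK = 3.0, 10.0
points = ([random.gauss(HOME, 0.5) for _ in range(50)] +
          [random.gauss(WORK, 0.5) for _ in range(50)])

# Unimodal assumption: a single centre of mass, which falls between the
# two clusters and points at nothing.
centre = sum(points) / len(points)

# Bimodal assumption: two-means clustering recovers both peaks.
c1, c2 = min(points), max(points)
for _ in range(20):
    near1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
    near2 = [p for p in points if abs(p - c1) > abs(p - c2)]
    c1, c2 = sum(near1) / len(near1), sum(near2) / len(near2)

print(centre, sorted([c1, c2]))
```

The single centre comes out near 6.5, between the clusters, while the two fitted means land close to 3 and 10.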

The show isn't problem free. I'm not exaggerating when I describe Charlie as a superhero. He solves mathematical problems in a new field overnight that would take experts days, weeks or months. The editing is pretty choppy - we cut from scene to scene with rapid fire explanation leaving viewers without enough time to assimilate information. (Watching on DVD so you can rewind and pause helps.) The scripts feel a little like writing by numbers (so to speak). You feel a little too aware of when the writers have gone into "character development mode" or "action mode" or "explain for the benefit of the audience mode" and the character development is all standard stuff.

I have to say that I'm pretty impressed with the inventiveness of the writers in creating mathematics related plots. My dream job would be creating mathematical or scientific ideas for TV shows (I could have invented much better technobabble than Heisenberg compensators and there have been countless scripts that I'd have loved to have touched up.) But I doubt I could have created as many plots as the Numb3rs team have managed. And despite the exaggeration, they've done so without making the mathematics completely preposterous (at least not in the first season).

And one last quibble: does Charlie have to say "statistical analysis" and "equation" so often?
