### Aliasing and the Heisenberg uncertainty principle.

**TL;DR**

**The Dirac comb is an example of a wavefunction whose position and momentum aren't fuzzy.**

**Introduction**

I think many people have a mental picture a bit like this:

Here's another way of thinking about that kind of picture (assuming some units I haven't specified):

position = 123.4???

momentum = 65?.???

The idea is that the question marks represent digits we don't know well. As we move rightwards in the decimal representation, our confidence in each digit falls off quickly, to the point where we can't reasonably write digits at all.

But this picture is highly misleading. For example, the following state of affairs is also compatible with the uncertainty principle, in suitably chosen units:

position = ...???.123...

momentum = ...???.654...

In other words, it's compatible with the uncertainty principle that we could know the digits beyond the decimal point to as much accuracy as we like, as long as we don't know the digits before the point. This trivially satisfies Heisenberg's inequality because the variances of the position and the momentum aren't even finite quantities.

But being compatible with Heisenberg uncertainty isn't enough for something to be realisable as a physical state. Is there a wavefunction that allows us to know the digits to the right of the decimal point, as far out as we want, for both position and momentum measurements?

**Sampling audio and graphics**

Maybe surprisingly, the worlds of audio and graphics can help us answer this question. Here's what a fraction of a second of music might look like when the pressure of the sound wave is plotted against time:

But if we sample this signal at regular intervals, e.g. at 44.1 kHz for a CD, then we can graph the resulting signal as something like this:

The red curve here just shows what the original waveform looked like. The black vertical lines correspond to regular samples, and we can represent them mathematically as Dirac delta functions multiplied by the amplitude measured at each sample.

There is a well-known problem with sampling like this. If you sample a sine wave sin(ωt) at rate f, then the signal sin((ω+2πnf)t) will generate exactly the same samples for any integer n. The following illustration shows what might happen:

The two waveforms are sampled at the same regular intervals (shown by vertical lines) and give exactly the same amplitudes at those samples.

This forms the basis of the famous Nyquist-Shannon sampling theorem. You can reconstruct the original signal from regularly spaced samples only if it doesn't contain frequency components higher than half your sampling rate. Otherwise you get ambiguities, in the form of high-frequency parts of the signal masquerading as low-frequency parts. This effect is known as aliasing. As a result, the Fourier transform of a sampled function is periodic, with the "repeats" corresponding to the aliasing.

In the audio world you need to filter your sound to remove the high frequencies before you sample; this is frequently carried out with an analogue filter. In the 3D rendering world you need to do something similar. Ray tracers will send out many rays for each pixel, in effect forming a much higher resolution image than the final result, and that high-resolution image is filtered before being sampled down to the final image. The "jaggies" you get from rendering polygons are an example of this phenomenon. Jaggies might seem to have nothing to do with the world of Fourier transforms, but if you compute the Fourier transform of a polygonal image, remove suitable high-frequency components, and then take the inverse Fourier transform before sampling, you'll produce an image that's much more pleasing to the eye. In practice there are shortcuts to achieving much the same effect.
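This sample-level ambiguity is easy to demonstrate numerically. Here's a minimal sketch in Python with NumPy; the sampling rate f, the frequency ω and the integer n are arbitrary choices, not anything from the text above:

```python
import numpy as np

f = 8.0                 # sampling rate (arbitrary choice)
w = 2 * np.pi * 1.5     # angular frequency of the low-frequency sine (arbitrary)
n = 3                   # any integer works

t = np.arange(32) / f   # 32 regularly spaced sample times
low = np.sin(w * t)                          # samples of sin(wt)
high = np.sin((w + 2 * np.pi * n * f) * t)   # samples of sin((w + 2*pi*n*f)t)

# At the sample times the two signals are indistinguishable...
assert np.allclose(low, high)

# ...even though they disagree substantially between samples.
t_mid = 0.07
assert abs(np.sin(w * t_mid) - np.sin((w + 2 * np.pi * n * f) * t_mid)) > 1.0
```

The algebra behind the assertion: at sample times t = k/f, the extra term contributes 2πnk, a whole number of cycles, so the sine is unchanged.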

**The connection to physics**

Now consider a particle whose wavefunction takes the form of the Dirac comb:

This is a wavefunction concentrated at integer multiples of some quantity a, i.e. ∑δ(x-an), summing over all integers n. If the wavefunction is ψ(x) then the probability density function for the particle's position is |ψ(x)|². So the particle has zero probability of being found at points other than those where x=na. In other words, modulo a, the particle's position is given precisely.

But what about the particle's momentum? Well, the wavefunction has, in some sense, been sampled onto the points na, so we expect that whatever the momentum distribution is, it'll be ambiguous modulo b, where ab = h (Planck's constant, h = 2πℏ). In fact, if we take the Fourier transform of the Dirac comb we get another Dirac comb, so in the frequency domain we get the same kind of phenomenon: the momentum is concentrated at integer multiples of b. So now we have a wavefunction whose uncertainty precisely fits the description I gave above: we know the position precisely modulo a and the momentum precisely modulo b. In some sense this isn't contrived: we know the momentum modulo b precisely *because* of the aliasing that results from knowing the position modulo a.

**What this means**

The message from this is that position-momentum uncertainty isn't fuzziness. At least, it's not fuzziness in the ordinary sense of the word.
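The claim above, that the Fourier transform of a Dirac comb is another Dirac comb, can be checked with a discrete analogue: a periodic grid with a spike every a points. This is a sketch only, and the grid size and spacing are arbitrary choices (with a dividing N):

```python
import numpy as np

N, a = 240, 8             # grid size and comb spacing (arbitrary, a divides N)
psi = np.zeros(N)
psi[::a] = 1.0            # discrete stand-in for the comb sum_n delta(x - n*a)

phi = np.fft.fft(psi)     # discrete analogue of the Fourier transform

# The transform is again a comb: spikes every N//a bins, (numerically) zero elsewhere.
for k in range(N):
    if k % (N // a) == 0:
        assert abs(phi[k]) > 1.0     # a spike (each has magnitude N/a)
    else:
        assert abs(phi[k]) < 1e-9    # no spike
```

The geometric-sum argument behind this: the DFT of the comb is ∑ₘ e^(-2πikam/N), which is N/a when ka/N is an integer and 0 otherwise.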

**And in reality**

I'm not very experienced in attaching numbers to results from theoretical physics, so I'd find it hard to say how accurately we can create a Dirac comb state in reality. When we measure a position using interferometry techniques we automatically compute the position modulo a wavelength, so this isn't an unusual thing to do. Also, an electron in a periodic potential may take on a form that consists of a train of equally spaced lumps. Even if not described exactly by a Dirac comb, we can still know the position modulo a and the momentum modulo b much more accurately than you might expect from a naive interpretation of the Heisenberg uncertainty principle as fuzziness.

**Exercises**

1. Investigate approximations to the Dirac comb: e.g. what happens if we sum only a finite number of Dirac deltas, or replace each delta with a finite-width Gaussian, or both.

2. Investigate the "twisted" Dirac comb: ∑δ(x-an)exp(inθ) where θ is some constant.
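As a numerical starting point for exercise 1, here's a sketch that replaces each delta with a narrow Gaussian on a finite periodic grid; the grid size, comb spacing and lump width are all arbitrary choices:

```python
import numpy as np

N, a, sigma = 1024, 32, 1.5   # grid size, comb spacing, lump width (arbitrary)
x = np.arange(N)

# Replace each delta with a narrow Gaussian lump, wrapped around the grid
# so that the comb is exactly periodic with period a.
psi = np.zeros(N)
for c in range(0, N, a):
    d = (x - c + N // 2) % N - N // 2   # signed distance to the lump centre
    psi += np.exp(-0.5 * (d / sigma) ** 2)

spectrum = np.abs(np.fft.fft(psi))

# Momentum space is still a comb (spikes every N//a bins), but the spikes now
# sit under a Gaussian envelope that decays at high frequencies.
assert spectrum[N // a] > 100.0           # first spike survives
assert spectrum[N // (2 * a)] < 1e-6      # midway between spikes: nothing
assert spectrum[0] > spectrum[N // a]     # envelope decays away from zero
```

If you instead keep the deltas but sum only finitely many of them, you should find the momentum-space spikes broaden into narrow peaks rather than being attenuated.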

## 9 Comments:

Have you looked in to the symplectic camel?

I hadn't heard of it until now. It's pretty neat.

I'm not sure that abs(psi(x)^2) is a distribution function. If all deltas are equally weighted, the integral over (-infinity, infinity) is infinite, but, for it to be a probability distribution, it should equal 1.

Could you make that clearer?

@vklj,

Welcome to physics :-)

Physicists tend not to worry about such things because there's a variety of different ways to deal with it and they all end up having no impact on the final result (*).

The Dirac comb can be seen as an idealised limit of a bunch of well behaved finite models. In this case physicists usually consider a finite region of space L (possibly wrapping around on itself) and allow L to "go to infinity". The Dirac comb can be seen as a limiting case in other ways too.

This is a paper that deals explicitly with wavefunctions like this: http://www.cce.ufes.br/jair/estsolpg/PhysRev686_Zak_Dynamics_Bloch_Electrons.pdf

(*) Caveat: One of my reasons for thinking about physics at the moment is that I'm trying to understand topological insulators. Part of it is a bunch of phenomena that would have been discovered much earlier if people had realised that finite size assumptions do sometimes have a big impact.

@Derek,

Symplectic spaces bound how much stuff can fit in a 3D space too. I only just learnt about the (very old) concept of etendue: http://en.wikipedia.org/wiki/Etendue It puts practical limits on things like how much light you can collect in a solar cell. Very interesting.

Ah yes, for optics, one usually uses contact geometry which is the odd-dimensional analog of symplectic geometry.

This is exactly the type of wave packet hypothesized by Edwin Jaynes when talking about the Schwartz-Hora 'blue electron effect' in "Scattering of Light by Free Electrons as a Test of Quantum Theory". In some experiments, electrons excited by a laser emitted light on collision with a distant conductor. Jaynes speculated that this could be explained by attributing a comb structure at an optical wavelength to the free electron. Hestenes suggested that it may have to do with zitterbewegung (in which case the extended structure just approximates time-delayed interactions of the electron with its own field). Sounds like we really need an experimental review on the subject, but I have not seen one.

David M Rogers makes a good point. The experimental community ran away from the Schwartz-Hora effect and also from the suggestions of Jaynes. Younger folks may not realize why. I think it is due to a prevailing attitude among physicists working post 1980 that "everything had been discovered". This was when the phrase "brilliant confirmation" started popping up in science news articles. XYZ ... an experiment which told us nothing new became a "brilliant confirmation". This confirmation bias is very human and has been studied a great deal in behavioral finance. Physicists of one generation had a real bad case of this. However, as anybody who knows history will tell you... Science proceeds by "brilliant disconfirmation". It is when we find something new and puzzling that we should pay attention. Things are changing now. Any young experimentalist who would like to win a Nobel Prize should go read Edwin T Jaynes and take the advice of David M Rogers. Like many areas of physics today there is a Nobel Prize going begging here. The older folks were simply too scared to touch this one.

Have you read Kevin Brown's article

"Aliasing and uncertainty" http://www.mathpages.com/home/kmath152/kmath152.htm
