One of the fun things about working in the visual effects industry is the large number of disciplines involved, especially when working for a smaller company where everyone has to do everything. At one point many years ago we did some work on simulating camera artifacts, including diffraction spikes. Although the Wikipedia page is in the context of astronomy, you can get diffraction artifacts with any kind of camera, and you'll get spikes whenever the iris has a polygonal shape. I hope to sketch why later.
I noticed an intriguing question about a diffraction pattern on Quora. (For those not following the link: this photo is a picture of a reflection of the photographer and camera in an LCD screen.) My immediate thought was "how could I have faked this in a render?".
My main interest was in the 'X' shape. And the first question is this: how do I know this is a diffraction effect? It might be that there are specular reflections off periodic structures in the LCD display orthogonal to the arms of the 'X'. This would explain everything with only geometric optics.
There are two big clues I know of that diffraction is at work. First, the white light is separated into distinct colours, which is typical of diffraction effects. Second, we see a discrete structure: rays directed along a discrete set of directions in addition to the arms of the 'X'. This is typical of diffraction from periodic microstructures, and again I hope to sketch why later.
So how can we explain the appearance of this photograph?
A simple model
If we consider diffraction effects we need to model light as waves. Let's start by working with just one frequency of light from the camera flash. Let's also suppose that the flash is far enough away from the monitor that the spherical wavefronts can be modelled as plane waves when they meet the monitor. Here's a picture:
The thick black vertical line is the monitor screen. The other vertical lines are the incoming plane waves, and the red arrow shows their direction of motion. Huygens' principle tells us that when the waves reflect off the monitor surface we can treat the entire surface as a collection of points, each of which emits spherical waves equally in all directions. If you're unfamiliar with Huygens' principle this may seem wrong: when you shine a light at a mirror you get reflection along one particular direction, not in all directions. That's true, but we can view that fact as a consequence of interference between all of the little points on the surface emitting in all directions. Again, that's another thing that will emerge from the calculation below.

The question I now want to answer is this: how much light gets reflected in direction k? I'll assume we're observing things from far enough away that we can neglect the details of the geometry. We'll just compute the contribution in direction k (a unit vector (l, m, n)) from all of the points on our surface. We'll assume that, because of variations on the surface, the intensity of the emission varies along it. We'll model the emission as a function I, so I(y, t) is the emission from position y at time t. We want to collect the light going in direction k through some light collector at time t; call that J(k, t). Here's a picture:
Remember: there are waves going in all directions, but I've just drawn the waves going in direction k.
Fourier optics for beginners
Now comes the interesting bit: the light arriving at the collector is the integral of all the light emitted from the points on the monitor surface. But the time of flight from each point on the monitor surface is different, so when we perform our integral we need to build in a suitable delay. It's straightforward geometry to show that the extra path length for the wave leaving position y, relative to the one leaving the origin, is its projection onto k, namely my, so it arrives with a time delay of my/c, where c is the speed of light and m, as defined above, is the y-component of k. So, up to a constant of proportionality, and some absolute choice of time origin, we want this integral
J(k,t) = ∫ dy I(y,t-my/c)
We assumed that the monitor surface was being struck by plane waves. Assuming they travel perpendicular to the surface and are coherent, every point on the surface is driven in phase, so I is proportional to exp(iωt). The time dependence in J is then just this same sinusoidal factor, so we drop it from further calculations. So we can write, ignoring another constant of proportionality,
J(k) = ∫ dy I(y)exp(-iωmy/c) = ∫ dy I(y)exp(-i2πmy/λ)
where I used ω = 2πc/λ and λ is the wavelength of the light. In other words, J is the Fourier transform of I. More generally, if we modelled the z-axis we'd find that J is given by the 2D Fourier transform of I as a function of position on the 2D surface of the monitor. The actual power striking the sensor is given by the squared absolute value of J.
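To make this concrete, here's a minimal Octave sketch of the formula above. It's my own illustration, not part of the original derivation: I pick a made-up emission function I (a uniform half-millimetre strip, with all the numbers just illustrative choices), evaluate the integral numerically for a range of directions m, and plot |J|². The result is the familiar single-slit pattern.

lambda = 0.5e-6;                      % wavelength in metres
y = linspace(-1e-3, 1e-3, 2000);      % positions on the "monitor", in metres
dy = y(2) - y(1);
I = double(abs(y) < 0.25e-3);         % toy emission: a uniform 0.5 mm strip
m = linspace(-0.01, 0.01, 1001);      % y-components of the outgoing direction k
J = zeros(size(m));
for j = 1:numel(m)
  J(j) = sum(I .* exp(-2i*pi*m(j)*y/lambda)) * dy;  % discretised version of the integral
end
plot(m, abs(J).^2);                   % the familiar sinc^2 single-slit pattern
xlabel('m'); ylabel('|J|^2');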
Ordinary reflections
So now I can pop the first of my pending explanations off the stack. Suppose that the surface is completely uniform. Then I(y) = constant. The Fourier transform of a constant function is the Dirac delta function. In other words, despite the fact that we're modelling every point on the surface as its own emitter, the resulting reflection from a perfectly smooth surface is a wave going straight back where it came from. (Exercise: modify the above treatment to show that waves striking a mirror at an angle obey the usual reflection law for mirrors.)
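Here's a tiny discrete check of that claim (my own sketch, not from the original text): take a constant signal, apply fft, and observe that every bin except the zero-frequency one vanishes, i.e. all the reflected energy goes in a single direction.

I = ones(1, 256);          % a perfectly uniform "surface"
J = fft(I);
find(abs(J) > 1e-9)        % prints 1: only the zero-frequency bin is non-zero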
Periodic structure
Now suppose that the surface of our monitor has a periodic structure. The Fourier transform of a periodic function is concentrated at spikes corresponding to the fundamental frequency of the structure and its overtones, so we expect to see a grid-like structure in the result. I believe that's what we're seeing in the spray of quantised vectors coming from the centre of the image, and in the fact that the arms of the 'X' look like discrete blobs of colour arranged in a line rather than the white spikes in the astronomical example I linked to above. That tells us we're probably seeing artifacts caused by the microstructure of the LCD pixels.
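Again, a quick Octave sketch of the claim, with a made-up square-wave grating standing in for the pixel grid: the only non-zero Fourier bins of a periodic pattern sit at multiples of its fundamental frequency, which is what produces the discrete blobs along the spikes.

I = repmat([ones(1,4) zeros(1,4)], 1, 32);  % square-wave "pixel grid", period 8, length 256
P = abs(fft(I)).^2;
find(P > 1e-9) - 1         % the non-zero bins all sit at multiples of 256/8 = 32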
An edge
Consider the function I(y,z) = -1 for y < 0 and 1 for y > 0. In other words, the sgn function along the y-axis, constant along the z-axis. Up to multiplication by a constant, its Fourier transform is given by the Dirac delta along the z-axis and 1/ω along the y-axis. In other words, it's a spike, starting at the origin and extending along the y-axis, but fading away as we approach infinity. This is the source of diffraction spikes. Any time we see such a spike it's likely there was a perpendicular edge somewhere in the function I. For telescopes it often comes from the struts supporting the secondary mirror. For cameras it comes from the polygonal shape of the iris. In this case, it looks like we must have pixels whose shape has edges perpendicular to the 'X'. I have to admit, when I came to this conclusion it sounded very implausible. But then someone posted this image from here:
There are those edges at the predicted angles.
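For completeness, here's a one-dimensional Octave sketch of the edge claim (my own illustration): the Fourier transform of a hard edge has energy at all frequencies, falling off like 1/ω, which is exactly a spike that fades as we move away from the origin.

N = 1024;
I = [-ones(1, N/2) ones(1, N/2)];   % the sgn function: a hard edge in the middle
P = abs(fft(I));
% for this symmetric edge only the odd bins are non-zero; they decay like 1/frequency
loglog(1:2:N/2, P(2:2:N/2));        % roughly a straight line of slope -1 on log-log axes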
Simulating the artifact
Now we have enough information to simulate the artifact. You can think of the remainder of this article as written in "literate octave".
Save the above close-up of the pixels in a file and read it in.
I = imread('lcd-pattern.jpg');
I crop out one quarter as it simply repeats. This also removes the text.
I1 = I(251:500,251:500,1);
Now clean up some stray (image) pixels and pick out just one (LCD) pixel.
I1(1:250,100:250) = 0;
The photo is blurry, probably has compression artifacts and seems quite noisy to me. So I threshold it at intensity 200 to get a hard shape. I then compute the discrete Fourier transform and scale the intensity to something nice.
I2 = 0.0001*abs(fft2(I1>200).^2);
The Fourier transform puts the origin in the corner so I shift it to the centre.
I2 = circshift(I2,[125,125]);
Now we need to do the same thing for red and green. We reuse the same Fourier image for all three channels, but it needs scaling in proportion to the wavelength. The numbers 1.2 and 1.4 are intended to be the ratios of the green and red wavelengths to that of blue light. They're just ballpark figures: I chose them to make the image sizes convenient when rescaled. I also crop out a piece from the centre of the rescaled images so they line up nicely. I use imresize2 from here.
Ib = I2;
Ig = imresize2(Ib,1.2,'nearest')(26:275, 26:275);   % green: scale up by 1.2 and crop back to 250x250
Ir = imresize2(Ib,1.4,'nearest')(51:300, 51:300);   % red: scale up by 1.4 and crop back to 250x250
Now I assemble the final RGB image.
J = zeros(250, 250, 3);
J(:, :, 1) = Ir;
J(:, :, 2) = Ig;
J(:, :, 3) = Ib;
imwrite(J,'cross.jpg')
Here's the result:
Analysis
Not bad, but it's not perfect. That's unsurprising. Have a look at the image straight after thresholding: it's a very poor representation of the pixel shape. It would be better to redraw the shape correctly at a higher resolution (or even to work analytically, which isn't infeasible for simple shapes like these pixels). Stray (image) pixels can make a big difference in Fourier space, and that explains the 'dirtiness' of my image. Nonetheless, it has the big X and it has the rows of dots near the middle. Qualitatively it's doing the right thing.
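Here's a hedged sketch of that "redraw at a higher resolution" idea. The chevron-like shape below is just my guess at a pixel-like outline, not the real LCD geometry, but it shows how much cleaner the spikes get when the mask is crisp and noise-free.

N = 1024;
[x, y] = meshgrid(1:N, 1:N);
% a toy 'V' shape: two diagonal bars meeting in the lower half of the image
mask = double((abs((x - N/2) - (y - N/2)) < 20 & y > N/2) | ...
              (abs((x - N/2) + (y - N/2)) < 20 & y > N/2));
P = fftshift(abs(fft2(mask)).^2);
imagesc(log(1 + P)); axis image;    % log scale makes the faint 'X' of spikes visible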
Note that the big X in the middle seems to have ghost reflections at the left and right. These are, I think, aliasing artifacts, purely a consequence of the digitisation.
It'd look slightly better with some multi-spectral rendering. I've treated the pixels as reflecting one wavelength but actually each one reflects a band.
There are also, almost certainly, lots more effects going on in that picture. The close-up photograph is only revealing one part of the structure. I'm sure there is 3D structure to those pixels, not just a flat surface. I suspect that's where the horizontal white line is coming from: because it's white, it suggests an artifact due to something interacting equally with all visible wavelengths. Maybe a vertical edge between pixels.
But overall I think I've shown that Fourier optics is a step in the right direction for simulating this kind of effect.
Note
Everything I know about Fourier optics I learnt from my ex-colleague Oliver James.
All the errors are mine of course.
Tricks with macroscopic structures exploit similar phenomena. For an example you might like, check out the Bahtinov mask, a trick used by astrophotographers for achieving good focus.
Jerry,
I was just wondering how to view Fourier transforms of shapes in 'hardware'. Silly me. I just need to cut out a hole (or print on transparency) and stick it in the end of my Dobsonian. Thanks for reminding me.
I've always thought the best way to demonstrate the properties of the Fourier transform was to build a tangible interface by exploiting far-field diffraction twice. Once to transform to the (spatial) frequency domain, where you could insert a mask to manipulate the signal (and see the transform) and another to transform back so you can immediately see the results.
Of course people are missing the real question: why are the LCD pixels shaped this way?
Derek,
In the days of yore there was a nice toolkit of techniques that people used to use. One that I find impressive was doing phase correlation optically in order to align a pair of images. But I've not been able to find references on the web. Maybe somewhere in books.google.com if I search on dates before the advent of modern computers...
Derek,
They look like little helices viewed from the side - but I don't think they could be anything that complex. I feel like Crick and Watson trying to recognize a helix in their X-ray crystallography data.
A few years ago I had an idea for a ray tracer that would simulate each photon's quantum contribution to the resulting image. It would, of course, require vastly more processing time than conventional radiosity methods, which simulate photons en masse. But such a ray tracer would generate images with diffraction patterns, reflection, refraction, chromatic aberration, and so forth, without their having to be programmed specially.
Here's the original note I wrote to myself in Feb 2000 about the idea. I hope that you will find it interesting.
20000218
A ray tracer which simulated things at the QED level. It would generate a photon of a certain color, then let it travel at random, then compute its contribution to the result. After generating many, many paths, the unimportant ones would tend to cancel out, just like in the real world.
Such a tracer would provide accurate rendering of things like diffraction gratings, pinholes, soap bubbles, x-ray crystallography, prisms, lenses, etc.
Does it even know that photons travel in straight lines? You could make this an optional feature. Disabling this knowledge would make the simulation run slower, but would enable accurate simulations of photons in gravity wells.
mjd,
Yes, I've wanted to do that too. For audio too. But these things tend to have extremely poor convergence.
This may be something that could be carried out better on a quantum computer, though I won't be holding my breath waiting for it to appear.
Interesting post :)
The horizontal line is probably a smear artefact caused by an overloaded CCD sensor.
Out of curiosity, if I send you a picture of my monitor with a flash, will you be able to determine what kind of pixels I have by performing some IFFT?
Sam,
When we take a photograph like this we get the intensity of the light but we lose the phase information of the light waves, i.e. the timing of the arrival of the wavefronts. We need this to do a full reconstruction.
But having said that, there is a large literature on recovering the phase information. But you need very high quality data (which this isn't) and you need *some* knowledge about the kind of structure you're looking at. I know nothing about these methods.
Chemists who are taught X-ray crystallography get exercises that involve things like recognising letters of the alphabet from their Fourier transforms. It's easy to spot symmetries and edge orientations in exercises like this. But if you didn't know they were letters of the alphabet I think it would be too difficult to solve.
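A quick Octave illustration of that lost phase (my addition, not part of the thread): two apertures that are identical up to a shift have exactly the same power spectrum, so an intensity-only measurement can't tell them apart without extra assumptions.

a = [zeros(1, 100), ones(1, 20), zeros(1, 136)];   % a toy aperture
b = circshift(a, [0, 70]);                          % same shape, different position
max(abs(abs(fft(a)).^2 - abs(fft(b)).^2))           % essentially zero: identical intensities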