Monday, May 30, 2005

Dual Photography Part II

It worked. The following playing card was successfully read even though it wasn't in the line of sight of any light sensor.




Rather than write anything here I'll just refer you to the article on my home page.


I now have a couple of mirrors mounted on steppers so I may see if it's possible to get better results with these than with servos.


7 Comments:

Blogger Robbie said...

I wonder ...

If you were using a projector (instead of a laser), rather than turning the pixels on and off (like in the original paper), could the projector play a movie (or a sequence of slides)?

Consider the equation Ax = b, where the elements of b are the readings of your light sensor, the elements of x are the pixels making up the image that the projector "sees", each row of A represents one frame of the movie, and the columns of A correspond to the pixels within each frame. In the special case where the "movie" consists of turning individual pixels on one at a time, A becomes the identity matrix.

Now all you have to do is play the movie and solve for x.

If you want to see intermediate results as the movie is being played, you could apply a wavelet (or Fourier) transform to A and x. So now, for example, the first column of A could represent the average of all pixels in the frame, the second column the difference between the averages of the upper and lower halves, the third column the difference between the left and right halves, etc. The elements of x would have the same meaning. Then, taking leading submatrices of A and the first few entries of b, it would be possible to solve for the first few entries of x, giving approximate images that improve as more of the movie is played.

As a practical measure, it might be best to find a best fit solution for x, rather than trying to solve exactly.
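
A rough sketch of this in Python/NumPy (the dimensions, the random "movie" A, and the simulated sensor readings b are all made up purely to illustrate the best-fit solve, not taken from the paper):

    import numpy as np

    # Toy setup: a 16x16 "dual image" (the image the projector sees),
    # reconstructed from 400 movie frames and a single light sensor.
    H, W = 16, 16
    n_pixels = H * W          # 256 unknowns
    n_frames = 400            # more frames than pixels => overdetermined

    rng = np.random.default_rng(0)

    # A: one row per frame; each row is that frame's projector pixel
    # intensities, flattened into a vector.
    A = rng.random((n_frames, n_pixels))

    # x_true: the unknown image the projector "sees".  Only used here to
    # fake the sensor readings b; in reality b comes from the hardware.
    x_true = rng.random(n_pixels)
    b = A @ x_true + 0.01 * rng.standard_normal(n_frames)

    # Best-fit (least-squares) solution rather than an exact solve,
    # which copes with noise and with A not being square.
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

    image = x_hat.reshape(H, W)
    print("relative error:",
          np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))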

Saturday, 02 July, 2005  
Blogger sigfpe said...

If it were a movie at a resolution of M x N then it'd be 'easy' if the movie had enough frames that the frames form a basis for the 3MN-dimensional space of possible images. As you say, it'd just be solving a linear system.

You can probably do it with fewer frames. Like in the original paper you can extract more information per frame if you can make assumptions about the spatial coherence of the image. I guess the wavelet transform is effectively an approach to doing this. Sounds very hard though!
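
One way to read "assume spatial coherence" is as a regularized least-squares problem. The sketch below (Python/NumPy, toy numbers, with a simple neighbouring-pixel smoothness penalty standing in for whatever the paper or a wavelet approach actually does) reconstructs a smooth image from fewer frames than pixels:

    import numpy as np

    # Underdetermined version: fewer frames than pixels, so plain least
    # squares has no unique answer.  A smoothness prior (neighbouring
    # pixels tend to be similar) picks out a plausible reconstruction.
    H, W = 16, 16
    n_pixels = H * W
    n_frames = 120            # < n_pixels

    rng = np.random.default_rng(1)
    A = rng.random((n_frames, n_pixels))

    # Smooth test image: a soft blob, flattened to a vector.
    yy, xx = np.mgrid[0:H, 0:W]
    x_true = np.exp(-((yy - 8) ** 2 + (xx - 8) ** 2) / 20.0).ravel()
    b = A @ x_true + 0.01 * rng.standard_normal(n_frames)

    # D: finite-difference operator, one row per horizontally or
    # vertically adjacent pixel pair; ||D x|| is small for smooth images.
    rows = []
    for i in range(H):
        for j in range(W):
            k = i * W + j
            if j + 1 < W:
                r = np.zeros(n_pixels); r[k] = 1; r[k + 1] = -1; rows.append(r)
            if i + 1 < H:
                r = np.zeros(n_pixels); r[k] = 1; r[k + W] = -1; rows.append(r)
    D = np.vstack(rows)

    # Solve min ||A x - b||^2 + lam ||D x||^2 via the stacked system.
    lam = 0.1
    A_aug = np.vstack([A, np.sqrt(lam) * D])
    b_aug = np.concatenate([b, np.zeros(D.shape[0])])
    x_hat, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

    print("relative error:",
          np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))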

Tuesday, 19 July, 2005  
Blogger Derek said...

BEWARE THE FOOL'S GOLD OF STRUCTURED LIGHTING!

Monday, 01 August, 2005  
Blogger sigfpe said...

What's wrong with structured lighting? We've used it at work with great success. Beware certain companies (that will remain nameless) that sell you data acquired by structured lighting.

Wednesday, 17 August, 2005  
Blogger Derek said...

I was sort of joking. A long time ago I was into stereo vision, and structured lighting was popular. The problem is that in real-world situations, with natural lighting and things like that, SL isn't all that useful.

I suppose it's fine if you're planning to use it in an environment with a minimum of ambient lighting.

Friday, 19 August, 2005  
Blogger André Santos said...

Hello!
My name is André Santos, I'm Portuguese, and I want to start building a robot. Your equibot sounds very interesting to me.
I want to know if you can help me with some more information about building and programming it; I would really appreciate it!!!
Thanks a lot, keep up the good work!!!


André Santos

Friday, 09 September, 2005  
Blogger sigfpe said...

Derek,

Have you seen Lemony Snicket? Did you know that for many shots the baby in that movie is computer-generated, not real? How else do you get a baby to act with a snake? The 3D model was derived using structured lighting. It works. (I work for ILM, who did the shots.)

Sunday, 09 October, 2005  
