The idea in the paper is that a light source (and its characteristics) and a light sensor (and its characteristics) can be swapped, and the sensor will record the same amount of light falling onto it. For example, suppose we have a scene illuminated by a laser, which produces a tight beam, and viewed by a photocell, which collects light over a wide area. Then the photocell's reading is exactly what a sensor with a tight field of view, aimed along the laser beam, would record if the scene were instead lit by a lamp spreading light over the photocell's wide collection area. And this is how we perform the card trick. We point a photocell at the page of the book and rasterise the card with a laser. As the laser scans, the photocell registers exactly as if it were a tightly focussed sensor scanning the card. There's nothing clever here - point a laser at a white object in a dark room and the rest of the room will be illuminated more than if you pointed the laser at a black object.
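The swap can be sketched numerically. Here's a minimal NumPy simulation (my own toy model, not code from the paper): a single wide-area photocell, a made-up light transport matrix T, and a laser stepped over n scene points. The sequence of photocell readings turns out to be exactly the image the dual camera would see.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                   # laser-addressable scene points (hypothetical)
T = rng.random((1, n))                  # transport from each scene point to the one photocell

# Rasterise: fire the laser at each point in turn and record the photocell.
readings = np.array([(T @ np.eye(n)[:, i]).item() for i in range(n)])

# Dual view: swapping source and sensor transposes T, so the same numbers are
# the image an n-pixel camera at the laser's position would capture under
# unit illumination coming from the photocell's position.
dual_image = (T.T @ np.ones(1)).flatten()
assert np.allclose(readings, dual_image)
```

The point of the toy: rasterising with the laser while reading one bucket sensor gives you, column by column, the transpose of the transport matrix - which is the dual photograph.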
But I don't want to just claim this is easy; I intend to do it. So far I have hooked up a photocell to an analogue-to-digital converter on a microcontroller and tried 'rasterising' objects by hand. It's pretty clear from the microcontroller data that the sensor correctly distinguishes light and dark regions on an object completely out of view. So it looks like my dual camera (cocamera?) should work. The only thing left is to mount the laser on a servo-driven pan-tilt head so I can rasterise automatically. If it all works out I'll have pictures in Part II some time in the next couple of weeks.
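To get a feel for how forgiving this should be, here's a rough simulation of the planned rig (the sensor model - ambient offset, gain, noise level - is entirely made up, not measured from my hardware): one photocell sample per laser position over a hidden binary pattern, then a simple threshold to recover it.

```python
import numpy as np

rng = np.random.default_rng(1)
card = (rng.random((8, 8)) > 0.5).astype(float)   # hypothetical hidden light/dark pattern

# One photocell sample per laser position: brightness scales with the albedo
# at the lit spot, plus an ambient offset and sensor noise (made-up model).
ambient, gain, noise = 0.1, 0.8, 0.02
samples = ambient + gain * card + noise * rng.standard_normal(card.shape)

# Threshold the raster of samples to recover the pattern.
recovered = (samples > ambient + gain / 2).astype(float)
print(np.mean(recovered == card))   # fraction of pixels recovered; ~1.0 at this noise level
```

Even with the laser pointed by hand, the light/dark margin dwarfs plausible sensor noise, which matches what I'm seeing in the microcontroller data.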
Note that the dual in dual photography is the vector space dual: light transport is linear, and swapping source and sensor corresponds to transposing the transport matrix.
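In symbols (a sketch of the linear-transport formulation; the symbol names are mine):

```latex
% Stack the illumination into a vector $l$ and the sensor readings into a
% vector $p$. Because light transport is linear there is a matrix $T$ with
\[
  p = T\,l \qquad\text{(primal: laser illuminates, photocell measures)}
\]
% Swapping the roles of source and sensor replaces $T$ by its transpose:
\[
  p' = T^{\mathsf{T}}\,l' \qquad\text{(dual: light emitted from the sensor's position)}
\]
```

Since the transpose is the matrix of the dual (adjoint) map, the dual photograph really is the vector space dual of the primal one.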