book reviews
More optics books
Reviewed by T. Nelson
The first two chapters of this multi-author book are so badly written, with undefined acronyms and unintelligible formulas, that I suspected they were part of a new trend: getting rid of readers who aren't determined enough to plow through them.
And I was right: on page 94, like the sun breaking through the clouds, the presentation improves, the writers discover the existence of Grammarly, the acronyms are defined, and we finally find out what in the heck they're talking about.
Some of it turns out to be old technology: despite the 2024 publication date, most of the stuff is pre-2016. But some of it is exciting technology that would revolutionize photography if somebody could ever get it to work.
Ghost imaging is a way of generating an image using ultrasensitive photomultiplier tubes instead of a camera. This means there are no lenses; a laser beam is split so that most of the photons are used as a reference beam and never touch the sample. A good explanation is in Yanhua Shih's review article on arXiv.
It works by two-photon optics: entangled photon pairs can be produced from a laser by a process called spontaneous parametric down-conversion (SPDC). A small percentage of the photons go to the sample and are detected with a single-pixel detector (SPD) such as a Geiger-mode avalanche photodiode or a superconducting nanowire detector. The rest go to a reference beam, which has a second SPD that is scanned in a raster pattern. This means that when a coincidence detector finds a match, the position of the photon on the sample is known.
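To make the correlation step concrete, here is a minimal sketch of the classical ("computational") analogue of ghost imaging, where a single-pixel bucket signal is correlated with known reference patterns instead of with coincidence counts from entangled pairs. The object, sizes, and pattern count are all made up for illustration, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 32x32 transmission object standing in for the sample.
N = 32
obj = np.zeros((N, N))
obj[8:24, 14:18] = 1.0          # a simple slit

n_patterns = 5000
corr = np.zeros((N, N))         # running sum of bucket * pattern
mean_pattern = np.zeros((N, N))
mean_bucket = 0.0

for _ in range(n_patterns):
    pattern = rng.random((N, N))      # known reference (speckle) pattern
    bucket = np.sum(pattern * obj)    # single-pixel "bucket" measurement,
                                      # with no spatial resolution at all
    corr += bucket * pattern
    mean_pattern += pattern
    mean_bucket += bucket

corr /= n_patterns
mean_pattern /= n_patterns
mean_bucket /= n_patterns

# Second-order correlation <S*I(x,y)> - <S><I(x,y)> is proportional to the
# object's transmission: an image appears even though no spatially
# resolving detector ever looked at the sample.
ghost_image = corr - mean_bucket * mean_pattern
```

In the quantum version described in the book, the raster-scanned reference SPD and the coincidence counter play the role of the known patterns and the correlation sum.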
The main benefit is to give us a new way of thinking about classical lenses: according to this new understanding, at each point on a conventional lens image, all the waves cancel by destructive interference except for the ones corresponding to the geometric ray. In ghost imaging, the same thing happens by nonlocal quantum interference. The result is high spatial resolution and efficient use of every photon from the sample. However, since it lacks the redundancy of multiple photons per pixel, the image has a low (1-bit) pixel depth, which accounts for its ghostly quality. Not surprisingly, it can also create a huge background, especially if ordinary, everyday non-entangled photon pairs are used.
Ghost imaging could be useful for light-sensitive samples. NASA once explored it for potential imaging of extremely faint sources in space. But for actual ghosts, assuming they exist, something else is needed.
Tomographic imaging has been around for decades. In tomography, a formula called the Radon transform gives the relation between the object's density function and its projections along the beam. CT reconstruction is a way of transforming object space and Radon space into each other. Fourier transforms are essential for this: by the Fourier slice theorem, the 1-D Fourier transform of each projection is a slice through the object's 2-D Fourier transform, which makes it possible to convert between the two spaces. Since the goal is to minimize radiation exposure to the patient, a variety of reconstruction algorithms have been devised. The author compares their advantages and disadvantages.
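As a concrete example of going back and forth between the two spaces, here is a toy filtered back projection, assuming a parallel-beam geometry and a disc-shaped phantom (neither of which comes from the book). The ramp filter applied in the Fourier domain is where the Fourier transform earns its keep.

```python
import numpy as np
from scipy.ndimage import rotate

# Toy phantom: a disc (purely illustrative).
N = 128
y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
phantom = (x**2 + y**2 < 0.4**2).astype(float)

angles = np.linspace(0.0, 180.0, 180, endpoint=False)

# Radon transform: each row of the sinogram is a set of line integrals,
# obtained here by rotating the object and summing along each column.
sinogram = np.array([rotate(phantom, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

# Filtered back projection: ramp-filter each projection in the Fourier
# domain, then smear it back across the image at the angle it was taken.
ramp = np.abs(np.fft.fftfreq(N))
recon = np.zeros((N, N))
for a, proj in zip(angles, sinogram):
    filtered = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
    recon += rotate(np.tile(filtered, (N, 1)), -a, reshape=False, order=1)
recon *= np.pi / (2 * len(angles))   # standard FBP normalization
```

The dose-sparing algorithms the author compares mostly replace this one-shot inversion with iterative schemes that tolerate fewer or noisier projections.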
Holography is another well-understood technology: light from a laser is split into two beams (reference beam and object beam), then recombined to form an interference pattern, which is recorded on film. It's possible to create one of those in a computer, which could make transmission and storage of three-dimensional images possible. If somebody ever invents a practical way of displaying images in 3D, it would be a great way of using up all that extra bandwidth.
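Creating one of those in a computer amounts to simulating the interference and keeping only the intensity. The sketch below does that for a tilted plane reference wave and a couple of made-up point scatterers; the wavelength, pixel pitch, and geometry are arbitrary choices, not anything from the book.

```python
import numpy as np

wavelength = 633e-9                 # HeNe-like laser, metres
k = 2 * np.pi / wavelength
N, pitch = 512, 4e-6                # detector pixels and pixel pitch
xs = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(xs, xs)

# Object beam: spherical wavelets from hypothetical point scatterers.
points = [(0.0, 0.0, 0.05), (3e-4, -2e-4, 0.06)]   # (x, y, z) in metres
obj = np.zeros((N, N), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((X - px)**2 + (Y - py)**2 + pz**2)
    obj += np.exp(1j * k * r) / r

# Reference beam: plane wave arriving at a small off-axis angle.
theta = np.deg2rad(1.0)
ref = np.exp(1j * k * np.sin(theta) * X)

# The film (or sensor) records only intensity; the object beam's phase is
# encoded in the fringes of |ref + obj|^2.
hologram = np.abs(ref + obj)**2
```

Re-illuminating the recorded pattern with the reference wave, physically or numerically, reconstructs the object beam, phase included.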
Two related technologies are lensless holographic imaging and super-resolution holographic imaging. The challenge in most of these techniques is to retrieve the phase from the detector—visible light detectors are insensitive to phase, so sophisticated computer algorithms are needed. One trick is to encode the depth in the interference pattern using something called 3D structured light.
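One of the standard "sophisticated computer algorithms" for this is Gerchberg-Saxton-style error reduction; a bare-bones sketch is below, with `measured_magnitude` (the Fourier-plane amplitude the detector actually sees) and `support` (a mask saying where the object is allowed to be nonzero) as hypothetical inputs. Whether the book's authors use this particular iteration or something fancier, the idea is the same: bounce between the detector plane and the object plane, enforcing what you know in each.

```python
import numpy as np

def gerchberg_saxton(measured_magnitude, support, n_iters=200, seed=0):
    """Recover a complex field whose Fourier magnitude matches the
    measurement and which vanishes outside the support mask."""
    rng = np.random.default_rng(seed)
    # Start from the measured magnitude with a random phase guess.
    field = measured_magnitude * np.exp(2j * np.pi * rng.random(measured_magnitude.shape))
    for _ in range(n_iters):
        obj = np.fft.ifft2(field)
        obj = np.where(support, obj, 0)                  # object-plane constraint
        field = np.fft.fft2(obj)
        field = measured_magnitude * np.exp(1j * np.angle(field))  # keep measured magnitude
    return obj
```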
Probably every kid has figured out that you could see around corners if you could computer-process the reflections that reach the camera indirectly. It is, of course, easier said than done. It's much easier if you can illuminate the subject with a laser; doing it the ordinary way, with passive non-line-of-sight imaging, would give a much lower S/N ratio. Another problem is that the equations are ill-posed and can have multiple solutions. The computational complexity increases as the fifth power of the number of scanning points. Thermal IR self-luminescence (blackbody radiation from the subject) might help a little by eliminating the need for a light source. But for now the preferred solution is terahertz imaging, which can see through walls, sort of, and needs much less computer power.
Light field cameras use microlenses that defocus each pixel into a short line of pixels, where the position on the line represents the angle the ray came from. This tells us how far away the object is. It also lets you re-focus the image after taking a picture. But that can only be done in software, and you lose spatial resolution: the number of discrete focusing steps is proportional to the loss of resolution in the x- or y-direction. An affordable light field camera was sold commercially for a while, but it seemed to have a limited market.
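Refocusing in software is essentially shift-and-add: each angular view recorded behind the microlenses is translated in proportion to its angular offset and the views are averaged, with the shift amount selecting the focal plane. The sketch below assumes a 4-D light field array `L[u, v, y, x]` and integer-pixel shifts, which is cruder than a real implementation but shows why the angular samples eat into spatial resolution.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4-D light field L[u, v, y, x]:
    (u, v) index the angular views, (y, x) the spatial samples, and
    `alpha` (hypothetical units of pixels per view) picks the focal plane."""
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round((u - U // 2) * alpha))   # shift proportional to
            dx = int(round((v - V // 2) * alpha))   # the view's angular offset
            # np.roll wraps at the edges; a real implementation would
            # interpolate sub-pixel shifts and crop the borders.
            out += np.roll(lightfield[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

# Example with a random stand-in light field: 7x7 views, 64x64 pixels each.
lf = np.random.default_rng(0).random((7, 7, 64, 64))
near, far = refocus(lf, 1.0), refocus(lf, -1.0)
```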
Fluorescence imaging is also old technology, as is adaptive optics, which is heavily used by astronomers to correct for distortion caused by atmospheric turbulence. The latest development here is wavefront sensorless adaptive optics, which eliminates the need for a wavefront sensor to determine how to change the shape of the lens or mirror. Lensless imaging, which uses zone plates, is also well understood.
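"Sensorless" here means the correction is found by trial and error on the image itself rather than by measuring the wavefront. A minimal hill-climbing sketch is below, in which `capture_image` is a hypothetical stand-in for setting the adaptive element to a given set of mode coefficients and grabbing a frame; real systems use smarter search strategies, but the principle is the same.

```python
import numpy as np

def sharpness(img):
    # A common image-quality metric for sensorless AO: sum of squared
    # intensities, which grows as the image gets sharper.
    return float(np.sum(np.asarray(img, dtype=float) ** 2))

def sensorless_correction(capture_image, n_modes=10, step=0.1, n_iters=5):
    """Greedy search over mode coefficients (e.g. Zernike amplitudes):
    nudge each mode up and down, keep whatever improves the metric."""
    coeffs = np.zeros(n_modes)
    best = sharpness(capture_image(coeffs))
    for _ in range(n_iters):
        for m in range(n_modes):
            for delta in (step, -step):
                trial = coeffs.copy()
                trial[m] += delta
                score = sharpness(capture_image(trial))
                if score > best:
                    coeffs, best = trial, score
    return coeffs
```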
No index, no table of acronyms. Many color graphs and diagrams.
nov 29, 2024