More ways have been found to make out a dim image obscured by its background, says S. Ananthanarayanan.
The medium that lies between an object and the eye, or the camera, affects the light that comes from the object. This blurs the desired image, especially when it is feeble or small. Examples are when we try to view body tissue through a layer of skin, when astronomers try to spot a tiny planet against the glare of its mother star, or even when we drive through fog.
A number of methods have been developed, either to brighten the image or to reduce the intensity of the light scattered by the medium. The emerging field is now known as ‘adaptive optics’, but many of the methods are either impractical or not really good enough. Optica, the journal of the Optical Society of America, carries two reports, one by Anat Daniel, Liat Liberman and Yaron Silberberg of the Weizmann Institute of Science, Israel, and the other by Edward Haojiang Zhou, Atsushi Shibukawa, Joshua Brake, Haowen Ruan and Changhuei Yang of the California Institute of Technology, which describe more effective ways of neutralising the disturbance created by the intervening medium while viewing things.
The reason images get obscured when light passes through a turbid medium is that the light wave emerging from an object is affected by tiny particles randomly distributed in the medium. Each particle becomes a centre of scattering, or a fresh source of light waves, and the leading outline of the original wave gets distorted. The result is that the rods and cones in the eye, or the pixels in the camera, do not see the original light from the object, and the image is obscured.
The Weizmann Institute group's method turns around an earlier method, in which the light that illuminates an object was muddied by the dispersing medium even before it fell on the object. The returning light from the object was then viewed through the same medium, so that the obscuring action was reversed! The light used for illumination was a laser beam shone through the dispersing medium, and the object was viewed by the fluorescent light it emits when illuminated.
When a laser beam, which is a train of light waves that are in step, or ‘coherent’, passes through the scattering medium, the waves emerging from the different scattering centres interfere, and this forms a collection of dark and bright spots, known as ‘speckle’. The distribution of the speckle, however, is not random, as it arises from specific scattering centres, and retains the information in the original beam as well as the structure of the scattering centres. Slightly changing the angle of the beam, hence, only makes small changes in the speckle pattern, as the distribution of the scattering centres is unchanged.
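This stability of the speckle under a small tilt can be illustrated with a toy numerical model, a single random phase screen standing in for the medium and a Fourier transform standing in for propagation to the camera (the screen size and tilt here are invented for illustration, not taken from the experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
# A random phase screen stands in for the scattering medium.
screen = np.exp(1j * 2 * np.pi * rng.random((N, N)))

def speckle(tilt):
    # Illuminate the screen with a plane wave tilted by 'tilt' cycles
    # across the aperture; the far-field intensity (FFT) is the speckle.
    x = np.arange(N)
    wave = np.exp(1j * 2 * np.pi * tilt * x / N)[None, :]
    return np.abs(np.fft.fft2(wave * screen)) ** 2

flat = speckle(0)
tilted = speckle(3)
# Tilting the beam does not scramble the speckle: because the scattering
# centres are unchanged, the same pattern merely shifts sideways.
shifted = np.roll(flat, 3, axis=1)
```

In this idealised model the tilted pattern is exactly the untilted one translated by three pixels; in a real medium the correspondence holds only over a limited range of angles.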
Now, when the fluorescent light from the object, in the experiment described, returns through the scattering medium, it meets the same scattering points as the illuminating beam, and the effect of the scattering can be taken to have been ‘undone’. The image created, however, is still only a pattern of dark and bright spots, or speckle. To retrieve the desired image, the angle of illumination is varied and the resulting speckle pattern is recorded a large number of times. This data, with the help of an approximate shape of the object, yields a slightly better picture of the object. The slightly better shape then leads to an even better one, and so on, till an acceptable image is constructed.
What the Weizmann Institute group has done, in place of reversing the change in the wavefront, is to engineer the wavefront of the original light even before it enters the scattering medium. The engineering is done with a device known as a Spatial Light Modulator – a matrix, or criss-cross, of optical elements, such as liquid crystal cells, that can alter the intensity or the phase of light waves over an area. Even the transparency in a slide projector is a crude SLM, as it blocks or passes light at different points. For this purpose, however, the action of each element must be adjustable, and an electronic array of controllable elements is what is used.
When the scattering medium is lit by a laser beam, with the SLM passive, it shows, as expected, a pattern of speckles. But if the SLM now modifies the wavefront that falls on the disperser, the speckle grows or shrinks in response, and the SLM can be tuned, by feedback, to get rid of most of the speckle.
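The feedback idea can be sketched with a toy model. Here the medium is reduced to a set of invented random couplings from each SLM segment to one point in the camera plane, and each segment's phase is stepped through trial values, keeping whichever most dims the detected speckle at that point. This is a minimal illustration of feedback wavefront tuning in general, not the group's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
n_seg = 32   # number of SLM segments (an invented, small number)
# Random complex couplings from each SLM segment, through the medium,
# to one point in the camera plane: a stand-in for the scatterer.
t = (rng.normal(size=n_seg) + 1j * rng.normal(size=n_seg)) / np.sqrt(2)

def detected(phases):
    # Speckle intensity at the chosen point: the coherent sum of the
    # contributions from all segments.
    return abs(np.sum(t * np.exp(1j * phases))) ** 2

phases = np.zeros(n_seg)      # SLM initially passive
before = detected(phases)

# Feedback loop: step each segment through trial phase values and keep
# whichever value most dims the detected speckle.
trial = np.linspace(0, 2 * np.pi, 16, endpoint=False)
for k in range(n_seg):
    scores = []
    for p in trial:
        phases[k] = p
        scores.append(detected(phases))
    phases[k] = trial[int(np.argmin(scores))]

after = detected(phases)      # much dimmer than 'before'
```

Each step can only lower (or leave unchanged) the detected intensity, so the speckle at the monitored point is progressively suppressed, which is the essence of tuning the SLM against the disperser.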
In the actual trial, the object to be viewed was lit by a dim, white light source, as shown in picture 1, and glare was created by shining laser light through the passive SLM and then the dispersing medium. As a result of the glare, the image of the object could hardly be made out. When the SLM was brought into play, however, and the level of speckle reduced, the image of the object grew progressively clearer! The trial hence demonstrates that scattering can be reduced by modifying the wavefront of light to compensate for a dispersing medium.
A real application would differ from this trial, as the object to be viewed would not be lit by a separate source, but by the same source that causes the glare. The method, hence, would not work for a static object. In real applications, however, as with blood corpuscles in a vein or a planet around a distant star, the objects being viewed are not static but in motion. The action of the SLM would hence reduce only the scattering from the static surroundings of the blood corpuscles, or the glare of the star, and not the light from the moving objects to be viewed.
Another cause of obscured images is the reflection, or backscatter, of light before it falls on the object. Even if the distortion of the light reflected from the object is reduced, the camera may not be able to resolve the dim image against the glare of the backscatter. The California Institute of Technology group reports a successful technique of identifying the real image signal at each pixel of the camera by cancelling out the glare signal with the help of a reference laser.
The arrangement is shown in picture 2. The illuminating laser beam goes partly through the diffusing medium to reach the object and is partly reflected back as glare. The light that reaches the camera is thus partly light reflected by the object and partly light reflected back by the medium. And then there is the reference beam, which is derived from the same laser and whose path length is adjusted to equal that of the glare illumination. The light that falls on each pixel of the camera thus consists of the real image signal + the backscattered glare signal + the reference signal.
The real signal is fished out of this by rapidly stepping the reference signal through a spectrum of phase and intensity values. At the point in this sweep where the reference is equal and opposite to the glare, and hence cancels it out, the total intensity is at its least and equals the real signal. This process is repeated, at high speed, over each pixel of the field, and the real, glare-free image is computed from the least intensity in each cycle. Better image quality is possible if the steps through which the reference beam is varied are closer together, which is easier if the object imaged is static.
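The per-pixel sweep can be sketched numerically. In this toy model (all field values invented), the reference is coherent with the glare, since their path lengths match, so those two fields add and can cancel; the object light, arriving by a different path, only adds its intensity on top. The least value found in the sweep then lands very near the bare object intensity:

```python
import numpy as np

# One camera pixel, in toy numbers. The reference beam is coherent with
# the glare (matched path lengths), so their fields add and can cancel;
# the dim object light adds only as intensity.
glare = 2.0 * np.exp(1j * 2.1)   # backscattered field (invented values)
signal_intensity = 0.09          # intensity of light from the object

def pixel(a, p):
    # Recorded intensity for reference amplitude a and phase p.
    return abs(glare + a * np.exp(1j * p)) ** 2 + signal_intensity

# Sweep the reference over a grid of amplitudes and phases, keeping
# the least recorded intensity, as the method does at each pixel.
amps = np.linspace(0.0, 3.0, 61)
phis = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
best = min(pixel(a, p) for a in amps for p in phis)
# 'best' is reached where the reference cancels the glare, and is very
# nearly the bare object intensity of 0.09.
```

The finer the amplitude and phase steps of the sweep, the closer the minimum comes to the true signal, which is the article's point about closer steps giving better image quality.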
The trial of this process, reported in the paper, improved the image of an object placed 2 mm behind a 1 mm thick screen containing particles 3 microns in diameter. While the technique is promising for microscopy, the researchers are also examining ways to use it in astronomy, to see objects beneath the opaque atmosphere of Venus, for example.
Do respond to: firstname.lastname@example.org