The split-second hologram
(appeared in May 2017)

The split-second hologram would take us to places that 3D reality displays are trying to reach, says S. Ananthanarayanan.

The representation of the lion at the Make in India show at the Hannover Messe in 2015 was a spectacular display using Augmented Reality technology. This and allied methods have changed the character of teleconferencing, advertising and entertainment. They use computer-generated audio and video effects that recreate the environment of distant places, and they relay images of the participants' reactions, so that people can respond to each other as if they were face to face.

The reality they create on a screen, however, is not real 3D. Views from all angles are readily available and the images are of high quality, so objects can be rotated and the viewer can look around corners, but the screen itself appears the same from whichever angle it is seen. Imaging that creates real 3D, where each eye, without the use of special glasses, sees a distinct image and the mind can perceive depth, has been possible only with the hologram, which works with the help of lasers. The trouble with holography, however, is that holograms are static pictures and take time to create. It has hence not been useful for important applications, like transmitting images of medical or surgical operations, which need to show moving pictures.

Prof Nasser Peyghambarian and his group at the University of Arizona had published, in the journal Nature, a method of creating holograms every two seconds. This may have been the first step in speeding up the process to enable real 3D, moving representation. Ritesh Agarwal and others at the University of Pennsylvania now report, in Nano Letters, a journal of the American Chemical Society, a medium that switches between three hologram images on being stretched. This could help design new displays and speed up display or transmission.

An improved method of display, currently in use, is Virtual Reality, where the user wears a headset to experience real depth and intense ‘immersion’ in the target locale. The device has goggles that present separate images to each eye, and the images could be real-life footage or computer-processed animation. The image has depth, like a real-life view, but is still not true 3D, as one cannot move around to get a view from a different angle. Virtual Reality has found success in personal entertainment but not in practical use, as it does not allow users to communicate, or to participate in the action they are viewing.

Another development is Augmented Reality, which is not real 3D either, but different views of real objects, or computer creations, projected on transparent screens. With special optical and sound effects, and with the images rapidly refreshed by computers, the impression is created of the viewer being surrounded by objects, movement, buildings or a landscape. This was the technique used to create the illusion of a lion from India walking among the audience, and then morphing into a mechanical assembly of machine parts, at the opening ceremony of the Hannover Messe. The same class of techniques also allowed Mr. Narendra Modi, during his election campaign, to present himself, apparently in person, before dozens of far-flung audiences at the same time.

In the field of engineering too, 3D software takes a set of 2D engineering drawings and creates a 3D image of the finished product, building, bridge, tower, etc. The software then allows the image to be rotated and turned around, to provide multiple views. The viewer can also zoom in to any specific part for a closer look, and so on. The software even allows the user to make changes in the 3D view, so that corresponding changes are carried out in all the related 2D drawings, for use by the engineers.
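
To make the idea concrete, here is a toy sketch in Python (not any particular CAD package; the box dimensions, the viewing angles and the function name view_from are invented for illustration). The model is held as a set of points in 3D, and each new view on the screen is simply those points rotated and projected onto a flat plane.

```python
# A toy sketch of the idea above: a 3D model is a set of points, and
# every screen view is those points rotated and projected onto a plane.
import numpy as np

# eight corners of a 2 m x 1 m x 3 m box, standing in for a "finished product"
corners = np.array([[x, y, z] for x in (0.0, 2.0)
                              for y in (0.0, 1.0)
                              for z in (0.0, 3.0)])

def view_from(yaw_deg, pitch_deg):
    """Rotate the model and drop the depth axis to get one 2D view."""
    yaw, pitch = np.deg2rad(yaw_deg), np.deg2rad(pitch_deg)
    Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0,          0.0,         1.0]])
    Rx = np.array([[1.0, 0.0,            0.0],
                   [0.0, np.cos(pitch), -np.sin(pitch)],
                   [0.0, np.sin(pitch),  np.cos(pitch)]])
    rotated = corners @ Rz.T @ Rx.T      # apply the yaw, then the pitch
    return rotated[:, :2]                # orthographic projection: keep x, y

front_view = view_from(0, 0)             # the familiar "front elevation"
turned_view = view_from(30, 20)          # the same model seen from another angle
print(turned_view.round(2))
```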

True 3D, or an image that looks like a real set of objects with depth, so that one object moves in front of or behind another when the viewer moves her head, is, however, created only with the help of the hologram. The hologram works by capturing not the image of an object, as received at a sensor or set of sensors, but the wave-front of light waves that emerge from the object. If the same wave-front is then created again, there is no way any sensors, like a pair of eyes, can tell that what they see is not the real object itself. Recording the wave-front, however, is easier said than done, because the light that falls on things is a mixture of light waves in all stages of wave motion, and there is no single wave-front to capture. This problem can be overcome by illuminating the scene to be captured with laser light. While the waves of the laser are synchronised, or ‘in step’, there is still the problem of capturing a wave-front. This is overcome by bringing two sets of light waves, those that come directly from the source laser and those reflected from the objects, to fall together on a light-sensitive screen.

There is thus the interaction of two wave-fronts that arrive at the screen. At each point on the screen, the two wave-fronts would either add, to get stronger, or cancel, to get weaker. The screen is hence covered with a pattern of dark and bright portions, like a bar code, and this distribution captures the relationship of the illuminating light and the light that has been reflected by the objects illuminated.
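
As a rough illustration of this recording step, the sketch below (Python with NumPy; the wavelength, screen size, angle and distance are assumed values, and the fall-off of the object wave with distance is ignored for clarity) adds a tilted reference beam to light from a single object point and computes the bright-and-dark fringe pattern a screen would record.

```python
# A minimal sketch of hologram recording: a tilted reference laser beam
# and light scattered from one point of an object meet at a screen, and
# the screen records the resulting bright/dark fringes.
import numpy as np

wavelength = 633e-9                  # a red laser, in metres (assumed)
k = 2 * np.pi / wavelength           # wave number

# a 2 mm x 2 mm patch of the screen, sampled on a grid
x = np.linspace(-1e-3, 1e-3, 1000)
X, Y = np.meshgrid(x, x)

# reference beam: a plane wave arriving at a small angle to the screen
theta = np.deg2rad(2.0)
reference = np.exp(1j * k * np.sin(theta) * X)

# object beam: a spherical wave from a point 5 cm in front of the screen
z_obj = 0.05
r = np.sqrt(X**2 + Y**2 + z_obj**2)
object_wave = np.exp(1j * k * r)

# the screen records only intensity; where the two wave-fronts are in
# step they add (bright), where they are out of step they cancel (dark)
hologram = np.abs(reference + object_wave) ** 2

print("recorded intensity ranges from", hologram.min().round(3),
      "to", hologram.max().round(3))
```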

Now, if this screen, with its pattern of dark and bright parts, is again illuminated by a beam from the same laser as before, the wave-front that emerges would have the same intensity distribution as the wave-front which originally spread over the screen. A person who looks at the screen would thus see, through that same original wave-front, the same objects as before. As the pattern on the screen, and hence the wave-front, does not depend on where the viewer is located, any viewer would see the original objects as if they were physically there.
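
In symbols, writing R for the reference wave and O for the object wave (unit amplitudes assumed for simplicity), the standard algebra behind this reconstruction runs roughly as below; the last term is the copy of the object's wave-front that reaches the viewer.

```latex
% H is the recorded intensity pattern; re-illuminating it with R gives:
\begin{align*}
H &= |R + O|^{2} = |R|^{2} + |O|^{2} + R\,O^{*} + R^{*}O \\[4pt]
H \cdot R &= \underbrace{\bigl(|R|^{2} + |O|^{2}\bigr)R}_{\text{straight-through beam}}
  \;+\; \underbrace{R^{2}O^{*}}_{\text{twin (conjugate) image}}
  \;+\; \underbrace{|R|^{2}\,O}_{\text{copy of the object wave}}
\end{align*}
```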

The problem, however, is that it takes time to print the pattern, which is the hologram, on the screen and it is a static pattern. To show motion, we need to create a hologram every sixteenth of a second, and then to run through the holograms at the same rate, so that the eyes see continuous motion.

Nasser Peyghambarian and Ritesh Agarwal

The breakthrough reported by the team at Arizona was to record holograms using different coloured lasers on the same polymer sheet, and to do it within two seconds. Creating a hologram every two seconds would not be fast enough to smoothly record a scene that is in motion, but it may be the start of a process by which the time taken is reduced. The succession of holograms could then be transmitted, to allow a distant viewer to experience the action as if she were personally there. This may lead to applications in telemedicine, advertising, prototyping, updatable 3D maps and entertainment, the Arizona authors say.
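
As a back-of-the-envelope comparison, using the figures quoted in this article (one hologram every two seconds achieved, roughly sixteen per second needed for continuous motion), the gap still to be closed is about a factor of thirty:

```python
# Rough arithmetic behind "not fast enough", using the article's figures.
achieved_rate = 1 / 2.0      # Arizona medium: one hologram every two seconds
needed_rate = 16.0           # roughly sixteen holograms per second for motion
print(f"speed-up still needed: about {needed_rate / achieved_rate:.0f} times")
```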

The Pennsylvania group created a hologram of geometric shapes using nanoparticles embedded in a flexible sheet. A sheet whose components and thickness are of the same physical dimensions as the wavelength of light can mould and shape a wave-front that falls upon it. The group created a surface with gold nano-rods embedded in a stretchable synthetic sheet, so that the optical properties of the layout of particles could be changed by stretching the sheet. A computer-generated hologram of a pentagon, created on the sheet, could then change to a square, and then to a triangle, when the sheet was stretched. The discovery suggests the possibility of holograms that can be manipulated, and of electronic transmission of 3D information for display using a re-configurable hologram.
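
One much-simplified way to picture why stretching changes what is seen (a toy model, not the Pennsylvania group's actual design; the wavelength and focal length below are assumed values): if the sheet behaves like a lens-shaped hologram storing the phase -k r^2 / (2 f), stretching it by a factor s rescales the stored pattern so that the image plane moves from f to s^2 times f, and a different picture can in principle be parked at each plane.

```python
# A toy model of a stretchable hologram: the sheet stores the lens-like
# phase  phi(r) = -k r^2 / (2 f).  After stretching by a factor s, the
# pattern now found at radius r was originally written at r/s, so the
# stored phase becomes -k r^2 / (2 f s^2) and the image plane shifts.
import numpy as np

wavelength = 633e-9
k = 2 * np.pi / wavelength
f0 = 0.01                                # 1 cm design focal length (assumed)

def image_plane_after_stretch(s):
    """Read the effective focal length back out of the stretched phase."""
    r = np.linspace(1e-6, 1e-3, 500)
    stretched_phase = -k * (r / s) ** 2 / (2 * f0)   # phi evaluated at r/s
    return np.mean(-k * r ** 2 / (2 * stretched_phase))

for s in (1.0, 1.1, 1.2):
    print(f"stretch x{s:.1f}: image plane at {image_plane_after_stretch(s) * 100:.2f} cm")
```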

----------------------------------------------------------------------------------------

Do respond to: response@simplescience.in