I had a remarkable experience last week. A scientific paper crossed my desk that seemed to take a theoretical idea that I and my colleagues developed and made it work. This simply does not happen to people like me. On closer inspection, the imaging technique that was demonstrated is closely related to—but slightly different from—the ideas we had developed.
The reading of the day
Before we get to the good stuff, let me wax lyrical about imaging. Imaging is without a doubt our best scientific tool. Whenever it is possible to turn data into an image, we do it. Why? Because the cliché “a picture is worth a thousand words” is wrong: it is off by about two orders of magnitude. A picture is worth 100,000 words.
The proof is in the progress we’ve made. We’ve gone from telescopes and microscopes that use visible light to telescopes that create images from microwaves and radio waves. We use electrons to create images of tiny features on surfaces, and we use electrons to image through thin samples and see the positions of atoms. We run tiny needles along surfaces to map them at atomic resolution. And we combine huge magnets with microwaves to image blood flow in the brain.
We wouldn’t be putting in the effort to make all of that work if images didn’t help us understand things visually. New imaging techniques transform scientific disciplines, and the transformation of scientific disciplines often leads to new imaging techniques. If you can suppress your gag reflex, I feel the following sums it up: imaging is the alpha and omega of science.
All things small and wormy
In the early years of the 21st century, a new form of imaging was all the rage. Called stimulated emission depletion (STED) microscopy, it used laser light to resolve details that are normally too small to see. We won’t go into how STED works (you can find more details here), but it mostly delivers on what it promised.
The problem with STED is that it requires the sample to be labelled with dye molecules. Once your resolution is good enough, you have to wonder if you are learning more about how the dye attaches to a molecule of interest than you are about the molecule itself. It would be better to look directly at the proteins and fats and other goodies that make up the cell.
There are imaging techniques that do just that. One is called stimulated Raman scattering (SRS). Raman scattering creates an image based on the way atoms within a molecule vibrate. A hydrogen atom attached to a carbon atom will vibrate at a specific rate, or frequency, so an image can be created by looking for the location of that frequency. This is true for most bonds within a molecule, so looking at different frequencies can tell you about other molecules.
Raman spectroscopy allows you to find these vibrational frequencies by looking for the energy lost by the photons in a laser beam. A laser is shone on the sample, and some photons scatter off molecules, setting them vibrating. The energy required to excite the vibration is much less than the photon’s energy, so what emerges is a photon with slightly less energy, and thus a redder color (lower frequency).
So we shine in blue light, and we get a bunch of photons back that are slightly redder, with the frequency difference corresponding to the vibrational frequency of the molecular bonds. In some cases, we can literally create an image that is color-coded by the chemical species.
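To make the frequency bookkeeping concrete, here is a minimal sketch of the Stokes-shift arithmetic. The specific numbers are my own illustrative choices, not values from the paper: a 532 nm green pump laser and the well-known C–H stretch vibration near 2,900 cm⁻¹.

```python
# Sketch of the Stokes-shift arithmetic: the emerging photon carries
# the pump photon's energy minus the vibrational energy of the bond.
# Illustrative values: 532 nm pump, ~2900 cm^-1 C-H stretch.

def stokes_wavelength_nm(pump_nm: float, shift_cm1: float) -> float:
    """Wavelength of the red-shifted (Stokes) photon."""
    pump_cm1 = 1e7 / pump_nm           # wavelength -> wavenumber (cm^-1)
    stokes_cm1 = pump_cm1 - shift_cm1  # photon loses the vibrational energy
    return 1e7 / stokes_cm1            # wavenumber -> wavelength (nm)

print(stokes_wavelength_nm(532.0, 2900.0))  # → about 629 nm
```

The returned photon is noticeably redder than the green pump, which is what makes the chemical color-coding possible: each bond’s vibrational frequency shows up as its own characteristic shift.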
All of this is great, but unlike STED, it doesn’t let you see tiny details—like normal microscopes, an SRS system still blurs everything below about 500 nm. So my colleagues and I went searching for a process like the one used in STED that could be applied to various forms of Raman spectroscopy. We found three—maybe four—ways that could lead to a better view of small objects. We tried to get funding to do the experiment (several times) and failed. In the end, we gave up.
Ten years passed
And then the experiment was done—admittedly not quite like we described it, but pretty close.
A critical feature we considered was a process called saturation, which is a feature of STED imaging. A research group in Singapore showed that the lasers used in Raman imaging allow you to directly saturate the scattering process (something we didn’t think was possible, so we went for an indirect saturation process). With a bit of post-processing of the data, you can enhance the resolution of an image.
It works a bit like this: for a given laser power, you get a certain amount of signal out of your sample. Increase the laser power and the signal intensity grows. However, once saturation is reached, increases in laser intensity do not result in increases in signal. The researchers took advantage of this by rapidly modulating the power of their lasers—a nice smooth increase from zero to max power and back down again.
The signal from the sample won’t track these modulations. It tracks upward and then flattens out at saturation. When the input laser power drops below saturation, the signal tracks downward again.
By recording the signal modulation for each pixel of an image, you can pick out tiny features by how early they exhibit saturation as the laser beam scans over them. At the edge of the laser beam, where the maximum power is low, the signal never saturates as the power modulates. At the center of the laser beam, the signal saturates rapidly. A tiny feature can be picked out by measuring where saturation occurs and how that changes as the laser beam scans. The researchers have developed an automated way to do this.
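The edge-versus-center behavior can be sketched with a toy model. This is my own illustration, not the researchers’ actual processing: I assume a simple saturation law of the form S = P / (P + P_sat), which captures the qualitative flattening described above.

```python
import math

# Toy model of the modulation trick: sweep the laser power smoothly
# from zero to max and back, and watch how the signal responds at
# the beam edge (low peak power) versus the beam center (high peak power).
# Assumed saturation law: S = P / (P + P_SAT).

P_SAT = 1.0  # arbitrary saturation power for this sketch

def signal(power):
    return power / (power + P_SAT)

def modulated_trace(peak_power, steps=8):
    """Signal over one smooth zero -> max -> zero power sweep."""
    return [signal(peak_power * math.sin(math.pi * i / steps))
            for i in range(steps + 1)]

edge = modulated_trace(0.1)    # edge of the beam: never saturates
center = modulated_trace(10.0) # center of the beam: clips near saturation

# The edge trace stays roughly proportional to the power sweep,
# while the center trace flattens out for most of the sweep.
print(max(edge), max(center))
```

The ratio between an early point in the sweep and the peak tells the two cases apart: at the edge the signal follows the sinusoidal sweep, while at the center it is already near its maximum long before the power peaks, which is exactly the signature the per-pixel analysis looks for.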
Forward to the future?
It should be pointed out that the researchers use intensities of more than 100 GW/cm², which is definitely enough to burn most biological samples. To avoid burning, the laser beam has to be scanned quite rapidly. Even worse, with the laser powers currently being used, the researchers only get a resolution improvement of about 1.4 (so from 500 nm to 350 nm, for instance). To double that, they will need about four times as much intensity.
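Those quoted numbers imply that the improvement grows only with the square root of intensity. Assuming that scaling holds (an extrapolation on my part, not a claim from the paper), the cost of further gains adds up quickly:

```python
# Extrapolating the square-root scaling implied by the quoted numbers:
# ~1.4x improvement at ~100 GW/cm^2, and 4x the intensity per doubling.

base_intensity = 100.0    # GW/cm^2, as reported
base_improvement = 1.4    # resolution improvement at base intensity
base_resolution = 500.0   # nm, the unimproved blur limit

for factor in (1, 4, 16):
    improvement = base_improvement * factor ** 0.5
    print(f"{base_intensity * factor:6.0f} GW/cm^2 -> "
          f"{improvement:.1f}x ({base_resolution / improvement:.0f} nm)")
```

So reaching roughly 90 nm resolution would take on the order of 1,600 GW/cm² under this assumption, which makes the skepticism about practicality easy to understand.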
The high powers and poor power scaling generally make people skeptical that these ideas will ever be practical. At least that was my experience when attempting to get funding. But I’d say that this is only the start. Now we have a system that works, and all the other optical tricks that have been developed can be explored. I’m looking forward to the next 10 years.