In computing, we have pretty much come to accept the idea that smaller is better. The smaller the features on a chip, the more transistors we can place and the more computation we can do.
I’m pretty excited to present the opposite. Researchers are proposing an idea to make your computer bigger.
Don’t laugh, because the whole idea is awesome, innovative, fun, and has some interesting science in it. Overly excited Chris says it may even turn out to be practical (sensible Chris says it won’t).
Why, you may be asking, do we even want to do this in the first place? The real answer is to see if we can. But the answer given to funding agencies is thermal management. In a modern processor, if all the transistors were working all the time, it would be impossible to keep the chip cool. Instead, portions of the chip are put to sleep, even if that might mean slowing down a computation.
But if, like we do with video cards, we farm out a large portion of certain calculations to a separate device, we might be able to make better use of the available silicon.
Quick, make my room play
So, how do you compute with Wi-Fi in your bedroom? The basic premise is that waves already perform computations as they mix with each other; it's just that those computations are random unless we make some effort to control them.
When two waves overlap, we measure the combination of the two: the amplitude of one wave is added to the amplitude of the other. Depending on each wave's history, one may have a negative amplitude where the other has a positive amplitude, so the waves can reinforce or cancel each other—a simple computation. The idea here is to control the path that each wave takes so that, when they're added together, they perform exactly the computation that we want.
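That addition of amplitudes is easy to sketch numerically. Here's a minimal NumPy illustration (the sample points and waveforms are invented for the example): two waves that are half a cycle out of phase cancel completely when summed.

```python
import numpy as np

# Two waves sampled at the same points in space (values chosen for illustration).
x = np.linspace(0, 2 * np.pi, 1000)
wave_a = np.sin(x)           # positive amplitude where wave_b is negative
wave_b = np.sin(x + np.pi)   # shifted half a cycle, so its amplitudes are negated

# Superposition: the medium simply adds the amplitudes point by point.
combined = wave_a + wave_b

# Exactly out-of-phase waves cancel everywhere (destructive interference).
print(np.allclose(combined, 0))  # True
```

Shift `wave_b` by anything other than half a cycle and the sum no longer vanishes; controlling those relative phases is the whole game.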
The classic example is the Fourier transform. A Fourier transform takes an object and breaks it down into a set of waves. If these waves are added together, the object is rebuilt. You can see an example of this in the animation below (click the image to start it).
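The decompose-and-rebuild step can be sketched in a few lines of NumPy. This isn't the article's animation, just an illustrative stand-in: we break a pulse (an invented example "object") into its component waves, then explicitly sum those waves back up to recover the original.

```python
import numpy as np

# The "object": a simple pulse (values invented for illustration).
n = 256
signal = np.zeros(n)
signal[96:160] = 1.0

# The Fourier transform breaks the object into a set of waves:
# one complex coefficient (amplitude and phase) per frequency.
coeffs = np.fft.fft(signal)

# Adding all those waves back together rebuilds the object.
t = np.arange(n)
rebuilt = np.zeros(n, dtype=complex)
for k, c in enumerate(coeffs):
    rebuilt += c * np.exp(2j * np.pi * k * t / n) / n  # one wave per frequency

print(np.allclose(rebuilt.real, signal))  # True: the waves sum to the original
```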
The Fourier transform—or other, similar transformations—is key to image processing. To put it in perspective, a typical Fourier transform of a million-pixel image takes about six million operations. (That number comes from a general argument about how many operations a Fourier transform requires, not from measurements on an actual computer architecture. Nevertheless, the argument is generally correct.)
Optically, a Fourier transform is performed by a lens. So an LCD screen displaying a picture can be Fourier transformed using a lens and a detector. Excluding readout times, the computation time is the length of time it takes to gather sufficient intensity at each pixel, which could be as little as a few femtoseconds. More importantly, all Fourier transforms performed in this manner, no matter how big the dataset, take the same amount of time.
Practical optical computing
Obviously, this could provide some very efficient calculations. The problem at hand then is to encode a problem in a light wave. The light wave must then be broken up into parts—think of passing light through a grid—each of which travels through a circuit with paths that provide the conditions required to perform the computation. After the waves are recombined, the solution to the computation is obtained by measuring the intensity of the light at different points in space.
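As a toy illustration of that recombine-and-measure step (the path phases below are invented, not taken from any real device): each path applies a phase shift to its share of the wave, the complex amplitudes add on recombination, and a detector at a given point in space reads out the intensity.

```python
import numpy as np

# Four hypothetical paths through the circuit; each applies a phase delay.
phases = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])

# Recombination: the complex amplitudes of the parts simply add.
total_amplitude = np.sum(np.exp(1j * phases))

# A detector at this point in space measures intensity, |amplitude|^2.
intensity = np.abs(total_amplitude) ** 2
print(intensity < 1e-12)  # True: these four paths interfere destructively
```

A detector somewhere else in space would see different relative phases and thus a different intensity; the pattern of bright and dark spots is the computation's answer.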
Doing this with visible light is possible but quite difficult. For each problem, you need to fabricate a circuit to quite high tolerances. Then everything needs to be lined up correctly and tested. Once that is done—think six to 18 months later—you have the solution to your problem within a nanosecond. The problem is that the hardware only solves that specific problem; it seems like a lot of trouble for a single calculation, so these devices are lab curiosities at the moment.
The alternative is to have an object that has a lot of random circuits in it. Think of a sugar cube: a sugar cube is white because each crystal within it reflects and transmits light in random directions. Each crystal face on the outside of the cube represents an entry point to the cube. From the entry point, the light travels a random path until it finally exits. The randomness makes the cube appear white, even though each crystal within it is basically transparent.
Sugar and Wi-Fi
The sugar cube can be used to perform a computation by measuring the path taken by light from each entry point to the cube (or, in practice, the average of groups of entry points). Then, each of these circuits can be combined in the correct way for a computation by controlling where and how light enters the sugar cube.
Computing via sugar crystals is very promising, but there is a downside. The wavelength of light is very short, so if the crystal shifts by even a few nanometers relative to the incoming light, an incorrect computation will be performed.
What we really need is something that acts like a sugar crystal but works on a much larger scale and uses longer wavelengths… like a teenager’s bedroom (there is nothing more random in the Universe) and Wi-Fi.
That is exactly what the researchers did. They set up a fake room with random stuff scattered throughout and placed a set of four Wi-Fi antennas in the room. They then measured the transmission characteristics between each of the four antennas, much the same way that beam-forming Wi-Fi nodes do to optimize data rates. The measurement allowed the researchers to understand the different paths the radiation takes and figure out how to perform calculations.
By controlling the amplitude and phase of radiation from each antenna, the researchers demonstrated the ability to perform a computation (in this case a four-bit Fourier transform). Four bits is not much, but it’s a start. And even with the current implementation, bigger problems can be broken up and computed, provided that the problem can be encoded as multiple four-bit computations.
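Here's one way to sketch the underlying idea in NumPy. This is a simplified linear-algebra cartoon, not the researchers' actual protocol: the 4×4 scattering matrix `H` below is invented (in the experiment it would be measured between the antennas), and the input data is made up. The point is that once you know how the room scrambles signals, you can shape each antenna's amplitude and phase so that the scrambled output is the Fourier transform you wanted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # four antennas -> a four-point transform

# The room: a fixed but random linear map between what the antennas send
# and what they receive. Invented here; measured in the real experiment.
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# The computation we want the room to perform: the 4-point DFT matrix.
k = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)

# Input data, encoded as complex amplitudes (one per antenna, values invented).
x = np.array([1.0, 0.0, 1.0, 0.0], dtype=complex)

# Precode: choose what to transmit so that, after the room scrambles it,
# the received signal is F @ x. That is, solve H @ sent = F @ x.
sent = np.linalg.solve(H, F @ x)
received = H @ sent

print(np.allclose(received, F @ x))  # True: the room "computed" the DFT
```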
Please keep moving during computation
The problem with Wi-Fi, the researchers found, is that it’s hard to kill. In our example with light, the light was shone into a sugar cube, and the output was measured. All three parts—getting the light in, scattering the light through the crystal, and measuring the light—are independent of each other. The sugar cube isn’t changed by the device that shines light on it. However, the antenna that radiates the Wi-Fi radiation will also scatter that radiation a few nanoseconds later as the radiation spreads throughout the room. In other words, the sending, measuring, and scattering of radiation are interdependent.
The researchers found that this interdependency adds an unknown component to the amplitude measured at each antenna. From an outsider's perspective, the unknown component appears random, so the researchers averaged it away. Instead of a single measurement per calculation, each calculation consisted of 150 measurements. For each one, the room was modified slightly, which changed how the radiation was transmitted. The average of these measurements produced a reasonably accurate computation.
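The averaging strategy itself is straightforward to sketch. The numbers below (the "true" amplitude and the noise level) are invented; the point is just that an unknown contribution with zero mean shrinks by roughly the square root of the number of shots when you average, here using the paper's count of 150.

```python
import numpy as np

rng = np.random.default_rng(1)
true_amplitude = 0.7 + 0.2j  # what the computation should produce (invented)

# Each shot picks up an unknown contribution from the antennas re-scattering
# their own radiation; perturbing the room between shots randomizes it.
shots = 150
noise = rng.normal(scale=0.3, size=shots) + 1j * rng.normal(scale=0.3, size=shots)
measurements = true_amplitude + noise

# Averaging suppresses the zero-mean unknown component by ~sqrt(shots).
estimate = measurements.mean()
print(abs(estimate - true_amplitude))  # small compared to the per-shot noise
```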
Does not compute?
The idea that I could manipulate my home Wi-Fi network to perform computations that would otherwise be quite time-consuming is really cool—I want one right now. Even the need to average out the influence of the scattering from multiple receivers hasn’t dampened my enthusiasm.
The big issue is that the power of this technique scales with the number of antennas. To perform a one-million-point Fourier transform in one shot, you need… one million Wi-Fi antennas. That seems like a rather big investment. Still, if you, gentle reader, donate them, I’ll get Lee Hutchinson to put them up in his house and perform some arcane computation.
arXiv.org, 2018. arXiv: 1804.03860