If you've ever worked in a data center or server room, you know that those places get hot. Many current microprocessors consume enormous amounts of power and put out correspondingly enormous amounts of heat; as a result, most computer rooms require constant air conditioning. Furthermore, the backup power supplies required to keep the servers from crashing during a brownout or blackout are often power hogs themselves, sometimes consuming a third again as much power as they deliver to the computers. System administrators, appropriately focused on making certain that the computers function as needed, often pay attention to power and heat issues only when the infrastructure fails.
One of the results of the green building trend has been a re-examination of the heat output and power demands of information technology offices. For desktop systems, this means simple recommendations: turn computers off at night, shut off monitors, rely more on "green computers." But server functions generally don't allow for the machines to be unavailable, and servers are often operated "headless" (without a monitor) anyway. Solutions for servers need to be a bit more sophisticated; increasingly, such solutions are available.
A growing trend is the use of "demand-based switching" in chips, where the CPU throttles down its processing speed when demand is low. Laptop users know that their systems can be set to conserve power when running off the battery by trading speed for efficiency; the same trade-off is increasingly available to desktop and server systems. The lower speed is usually invisible to the user, as CPUs tend to spend a good bit of time idle, and much current computer use requires nowhere near the full power of modern CPUs. Processor throttling can make a significant difference; TechTarget reports that demand-based switching in a new version of the Intel Xeon chip can cut annual power consumption by up to 24%.
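To make the mechanism concrete, here's a minimal Python sketch of how an administrator might inspect and change a CPU's frequency policy through the Linux cpufreq sysfs interface. The file paths are the standard Linux ones, but treat the script as an illustration of the idea rather than an administration tool; it assumes a single-CPU machine and needs root privileges to change the setting.

    from pathlib import Path

    # Standard Linux cpufreq interface for the first CPU
    CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

    def current_governor() -> str:
        # The active scaling policy, e.g. "performance" or "powersave"
        return (CPUFREQ / "scaling_governor").read_text().strip()

    def set_governor(name: str) -> None:
        # Requires root; tells the kernel to throttle the CPU up or down
        (CPUFREQ / "scaling_governor").write_text(name)

    print("Available policies:",
          (CPUFREQ / "scaling_available_governors").read_text().strip())
    print("Current policy:", current_governor())

Demand-based switching does this sort of thing continuously and automatically, in the chip and the kernel, rather than waiting for a human to flip the switch.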
A somewhat more radical example of the use of laptop-style processors is the "Green Destiny" project at Los Alamos National Laboratory from a couple of years ago. Green Destiny used a cluster of systems with ultra-low-power chips from Transmeta; the cluster used about a third of the electricity of an equivalent-capability standard computing cluster, and put out about a tenth of the heat. Transmeta has been struggling a bit in recent years, but the idea of using groups of lower-power machines to perform high-end server tasks lives on.
Tackling the cooling issue will require more than lower-speed chips; there are plenty of server functions that simply can't be handled by slowing things down. In those cases, one idea from the mainframe era getting renewed attention is water cooling. With water cooling, a "water block" functions like a heatsink, pulling heat away from the CPU; the heated water is then pumped away and cooled in an air radiator. Water cooling can be much quieter than fans and air conditioning, and because water absorbs far more heat per unit volume than air, water cooling systems can handle a great deal more heat than traditional air cooling. The first mainstream desktop computer to use liquid cooling was Apple's Power Mac G5.
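A back-of-the-envelope calculation shows why water wins. The figures below are standard textbook values for specific heat and density, not specs from any particular cooling system:

    # Heat a given volume of coolant absorbs per degree of temperature rise
    water_specific_heat = 4186.0  # J/(kg*K), liquid water
    air_specific_heat = 1005.0    # J/(kg*K), dry air at constant pressure
    water_density = 1000.0        # kg/m^3
    air_density = 1.2             # kg/m^3, near room temperature

    water_per_volume = water_specific_heat * water_density  # J/(m^3*K)
    air_per_volume = air_specific_heat * air_density        # J/(m^3*K)

    ratio = water_per_volume / air_per_volume
    print(f"Water absorbs ~{ratio:.0f}x more heat per unit volume")  # ~3500x

Liter for liter, water soaks up roughly 3,500 times as much heat as air, which is why a thin loop of circulating water can do the work of a roaring bank of fans.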
As far as backup power systems are concerned, uninterruptible power supplies are becoming much more efficient. The newest version of the widely used APC power supplies is 95% efficient, compared to earlier power technologies that operated at about 60-70% efficiency. Notably, this push for datacenter energy efficiency is directly connected to the growing visibility of the green building movement.
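A rough worked example shows what that efficiency gap means in practice. The 100 kW load below is a hypothetical figure chosen purely for illustration:

    # Waste heat thrown off by a UPS at different efficiencies
    load_watts = 100_000  # hypothetical 100 kW server room

    for efficiency in (0.65, 0.95):
        input_power = load_watts / efficiency  # power drawn from the grid
        waste = input_power - load_watts       # dissipated as heat in the UPS
        print(f"{efficiency:.0%} efficient: {waste / 1000:.1f} kW of waste heat")

At 65% efficiency, the UPS itself throws off more than 50 kW of heat that the air conditioning then has to remove; at 95%, that drops to about 5 kW. The improvement pays twice, once at the meter and once at the chiller.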
But these are just baby steps compared to some of the innovations in the pipeline. Both reversible computing and "probabilistic bits" computing have the potential to significantly reduce the power draw of information technology. Reversible computing is based on the observation, known as Landauer's principle, that heat and energy use ultimately come from the entropy produced by the deletion of information; reversible systems don't delete data, and would use a tiny fraction of the power of current chips. Probabilistic bit (PBIT) computing takes advantage of the "noise" generated by activity at the quarter-micron level to let chips reach desired results without having to calculate with absolute certainty; current estimates suggest a hundredfold reduction in energy use with the PBIT model. Prototype systems have been designed for both approaches, and functional reversible processors have even been built.
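For a sense of the scale involved, Landauer's principle puts the theoretical minimum energy cost of erasing a single bit at kT·ln 2. A quick calculation with standard physical constants, assuming room temperature:

    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K
    T = 300.0           # room temperature, K

    # Minimum energy dissipated when one bit of information is erased
    landauer_limit = k_B * T * math.log(2)
    print(f"Erasing one bit costs at least {landauer_limit:.2e} J")  # ~2.9e-21 J

Today's chips dissipate many orders of magnitude more energy than that per logic operation; reversible designs aim to close the gap by never erasing bits in the first place.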
Both reversible and probabilistic computing will require radical changes in chip architecture and software design, so don't expect to see them at your local computer store any time soon. But we're fast closing in on some fundamental physical limits to conventional computation, and whether the next step is quantum computing, DNA computing, or reversible computing, radical changes will happen.
Such systems will most likely become available over the next five to ten years, putting them squarely in the uptake period for serious energy efficiency technologies. As a result, we should expect to see them embraced quickly and broadly. That they may be faster or able to perform amazing computational feats will be greatly appreciated, but the real driver of their adoption may well be their energy efficiency.