Slashdot says, "Nvidia said this week it got a contract worth up
to $20 million from the Defense Advanced Research Projects Agency
to develop chips for sensor systems that could boost power output
from today's 1 GFLOPS/watt to 75 GFLOPS/watt." [1]

One of the comments points to an article on Koomey's law [2].

"the electrical efficiency of computing has doubled every 1.6 years
since the mid-1940s"

I'd like to see the raw data behind that graph.
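As a rough cross-check (my own arithmetic, not from either article): if efficiency really doubles every 1.6 years, the 75x jump DARPA is asking for works out to about a decade of Koomey's-law progress.

```python
import math

# Koomey's law figure quoted above: efficiency doubles every 1.6 years.
DOUBLING_PERIOD_YEARS = 1.6

# DARPA target from the Slashdot summary: 1 GFLOPS/W -> 75 GFLOPS/W.
improvement = 75 / 1

doublings = math.log2(improvement)          # ~6.23 doublings
years = doublings * DOUBLING_PERIOD_YEARS   # ~10 years

print(f"{doublings:.2f} doublings, about {years:.1f} years at the historical rate")
```

So the contract is essentially asking for ten years of historical efficiency gains, on whatever schedule DARPA has in mind.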

"The Utility Investor newsletter declares that we now use more
electricity for computing than we do in auto manufacturing."

"The concept of computing efficiency, defined in this article as the
number of computations per KWh, is not very useful without some
reference to the speed of the calculations. I'm not sure but one
can probably design computing platforms that work some factor k
more slowly while using more than k times less energy, thus
increasing the computing efficiency arbitrarily high while
accomplishing very little in a given amount of time."

That has been the trend with CPUs: clock rates aren't ramping up much;
instead we get more "cores" and must find ways to use them in
parallel. Presumably the same applies to GPUs: do as much in parallel
as possible, and keep the clock rate down.
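A toy model shows why that trade works (my own illustration, nothing from the article): dynamic power scales roughly as V^2 * f, and supply voltage scales roughly with clock frequency, so per-core power goes as about f^3 while throughput only goes as f * cores.

```python
def gflops_per_watt(freq_ghz: float, cores: int, ops_per_cycle: float = 2.0,
                    power_coeff: float = 1.0) -> float:
    """Efficiency of a hypothetical chip under the toy f^3 power model.

    All parameters are made-up round numbers, chosen only to show the
    shape of the trade-off, not to match any real silicon.
    """
    throughput = freq_ghz * cores * ops_per_cycle   # GFLOPS
    power = power_coeff * cores * freq_ghz ** 3     # watts (toy units)
    return throughput / power

# One fast core vs. eight cores at half the clock: same total power
# in this model, but 4x the throughput and 4x the efficiency.
fast = gflops_per_watt(freq_ghz=2.0, cores=1)
wide = gflops_per_watt(freq_ghz=1.0, cores=8)
print(f"1 core  @ 2 GHz: {fast:.2f} GFLOPS/W")
print(f"8 cores @ 1 GHz: {wide:.2f} GFLOPS/W")
```

Real chips don't scale voltage that cleanly, but the direction of the effect is why "wide and slow" keeps winning on GFLOPS/W.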

Given that this is taxpayer money, the results should be made available
to the public, including full documentation of how to program any
resulting chips. And any patents/copyright should be owned by the public.

[1] http://hardware.slashdot.org/story/12/12/15/0427232/nvidia-wins-20m-in-darpa-money-to-work-on-hyper-efficient-chips
[2] http://www.economist.com/blogs/dailychart/2011/10/computing-power
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)