Yes, indeed.  And since GPUs have higher compute throughput per unit area,
it stands to reason that they have higher dynamic power per unit area.
 Depending on the process technology and other factors, dynamic power for
CPUs is anywhere from 50% to 85% of total power.  (Actually, I think
leakage is lower with Intel's FinFETs.)
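
To put a rough model behind that: the usual first-order split is dynamic
power P_dyn = alpha * C * V^2 * f, with leakage making up the remainder.
Here's a back-of-envelope Python sketch; every number in it is an
illustrative assumption, not a measurement of any real chip:

def dynamic_power(alpha, cap_farads, vdd_volts, freq_hz):
    # First-order CMOS switching power: P_dyn = alpha * C * V^2 * f
    return alpha * cap_farads * vdd_volts ** 2 * freq_hz

# Hypothetical chip: 100 nF of effective switched capacitance, 1.0 V supply,
# 1 GHz clock, 30% average activity factor, and an assumed 10 W of leakage.
p_dyn = dynamic_power(alpha=0.3, cap_farads=100e-9, vdd_volts=1.0, freq_hz=1e9)
p_leak = 10.0
total = p_dyn + p_leak
print("dynamic = %.0f W, leakage = %.0f W, dynamic fraction = %.0f%%"
      % (p_dyn, p_leak, 100.0 * p_dyn / total))

With those made-up numbers you land at about 75% dynamic, which sits inside
the 50-85% range above; push leakage up (older planar processes) or down
(FinFETs) and the fraction moves accordingly.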

I was looking this up.  The "maximum" power for a PCIe card with external
power connectors, according to the PCIe spec, is 300W.  But some graphics
cards push it to 450W with more power connectors.  I'm not sure how many GPU
chips are on the same board, but this is some serious power consumption.  No
wonder GPU cooling solutions look so much more sophisticated.  I wonder what
the proportional cost of all that copper in the heat sink is.
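
For reference, the 300W figure decomposes as 75W from the slot plus 75W from
a 6-pin auxiliary connector plus 150W from an 8-pin one (those per-connector
limits are the commonly cited PCIe CEM figures; treat the specific board
configurations below as my assumptions).  A quick tally:

SLOT_W = 75        # power available through the PCIe slot itself
SIX_PIN_W = 75     # one 6-pin auxiliary connector
EIGHT_PIN_W = 150  # one 8-pin auxiliary connector

def board_budget(six_pins=0, eight_pins=0):
    # Sum of slot power plus all auxiliary connectors on the board.
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(board_budget(six_pins=1, eight_pins=1))   # 300 W -- the nominal spec ceiling
print(board_budget(eight_pins=2))               # 375 W -- e.g. the ATI card quoted below
print(board_budget(six_pins=1, eight_pins=2))   # 450 W -- one way to reach the 450W figure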

If someone were to develop a way to make GPUs more energy efficient per
unit of work, the main result would be that they'd get faster in the same
power envelope.
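
The arithmetic behind that: at a fixed power cap, throughput is roughly
power divided by energy per operation, so every efficiency gain shows up as
speed.  A toy example with made-up numbers:

POWER_BUDGET_W = 250.0      # assumed fixed board power envelope
ENERGY_PER_OP_J = 200e-12   # assumed 200 pJ per operation today

ops_before = POWER_BUDGET_W / ENERGY_PER_OP_J          # 1.25e12 ops/s
ops_after = POWER_BUDGET_W / (ENERGY_PER_OP_J / 2.0)   # halve energy/op -> 2.5e12 ops/s
print("%.2f -> %.2f Tops/s at the same %d W"
      % (ops_before / 1e12, ops_after / 1e12, POWER_BUDGET_W))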

Also, ATI and nVidia are motivated by revenue, which is strongly shaped by
competition.  Consumer pressure is towards more FPS and more realism, which
translates to higher performance.  Although both companies have "mobile"
solutions, their flagships will always be among the most power-hungry
computing devices you can buy.

Intel competes against ARM by having higher peak performance, while ARM
competes against Intel by having higher performance per Watt.  In the GPU
space, PowerVR plays the ARM role, while nVidia and AMD play the Intel role.
Our goal should be to compete (at least in an abstract sense) with PowerVR.

On Tue, Nov 13, 2012 at 7:09 AM, <[email protected]> wrote:

> On 2012-11-13 08:50, Dieter BSD wrote:
>
>  Nvidia and ATI must be reading our list and decided that they couldn't
>> compete with Timothy's efficient "Prius" GPU design, so they are
>> building the Hummers of GPUs.
>>
>> Nvidia: 7.1B transistors
>> ATI: 375 Watts
>>
>>
>> http://hardware.slashdot.org/story/12/11/13/014241/nvidia-and-amd-launch-new-high-end-workstation-virtualization-and-hpc-gpus
>>
>> I wonder how much it costs to run a liquid helium cooling setup?
>> That's about what it would take to cool these monsters.
>>
>
> I'm even more concerned by 2 things:
>
>  * Dark silicon: powering all the gates must be an extreme challenge
>
>  * Memory bandwidth wall: the chip can't have enough I/O to feed all the
> units with data
>      (huge amounts of on-die cache are necessary but can't solve everything)
>



-- 
Timothy Normand Miller, PhD
Assistant Professor of Computer Science, Binghamton University
http://www.cs.binghamton.edu/~millerti/
Open Graphics Project
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
