Christopher Smith wrote:
Tracy R Reed wrote:
Andrew Lentvorski wrote:
Also, interrupts on a normal x86 system pass through a bunch of
chips before actually hitting the processor in order to handle
"legacy" issues. Those are normally quite slow, as well.
I think it was you who told me one night at Denny's a few months
ago that an x86 chip spends a third of its power budget in decode
because of how complicated the instruction set is, largely due to
legacy issues.
I believe that data is old.
Yes, that data is old. It was from the 1995 timeframe. It was in the
context of placing an x86 decode box in front of a RISC microprocessor.
The x86 decode transistor budget has remained fixed, while CPU
transistor budgets have increased.
I'm not sure that really follows. Most of the transistor budget goes to
caches, which don't burn much power unless they're being activated. A
lot of those cache transistors stay pretty quiet most of the time,
while the decode unit is "hot" most of the time.
While the decode power budget has probably come down, it certainly
doesn't scale down in direct proportion to the transistor ratio.
Last time I checked (which was a while ago), it was down to 15%.
In fact, that probably backs up my assertion. In a modern
microprocessor, somewhere around 50% of the power budget goes to simply
powering the clock grid. Your 15% would be about 30% of the remainder.
-a
--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg