From: Tracy R Reed <[EMAIL PROTECTED]>
I think it was you who told me one night at Denny's a few months ago that an x86 chip spends a third of its power budget on decode because of how complicated the instruction set is, largely due to legacy issues.

The decoder also takes the longest to engineer, according to friends at Intel.


I know the MMU is also horrendously complicated, for similar legacy reasons. When I was studying x86 assembly back in the day, we used segment and offset and ran our code under DOS. About a year ago I read "Programming From The Ground Up" (which I highly recommend; it took me a week or so of evenings to read through), and it covers how assembly works on x86 in a Linux environment. The memory model it runs under didn't even exist the last time I coded assembly.

The memory model it uses dates back at least to the 386's 32-bit protected mode. Segment:offset addressing hasn't been the only method of addressing on x86 for a long while. As an aside, it isn't a bad way of doing addressing. Although Windows and Linux don't use it, segmented memory has some advantages over paging.
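For anyone who only knows the flat model, the old real-mode segment:offset scheme is just arithmetic: the linear address is segment * 16 + offset, wrapped to 20 bits. A minimal sketch (the function name is mine, not from any real tool):

```python
def real_mode_linear(segment: int, offset: int) -> int:
    """Real-mode x86: linear address = (segment << 4) + offset, 20-bit wrap."""
    return ((segment << 4) + offset) & 0xFFFFF

# The classic DOS text-mode video segment 0xB800:0x0000 maps to linear 0xB8000.
print(hex(real_mode_linear(0xB800, 0x0000)))  # 0xb8000

# Different segment:offset pairs can alias the same linear address:
print(hex(real_mode_linear(0xB000, 0x8000)))  # 0xb8000
```

The aliasing in the second example is one reason the scheme confused newcomers, but the overlapping 16-byte-granular segments also made position adjustment cheap.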

I really hope either Intel does some housecleaning on x86 and breaks backwards compatibility, or some other cheap and powerful CPU comes along (which doesn't seem likely short of a technological miracle, since the cheapness of x86 comes from mass production).

Never going to happen. The main problem with decode on x86 isn't the number of instructions or the multiple addressing modes; it's the layout of the opcodes. Opcodes are variable-sized and have little to no logical layout. You can't just grab 4 bytes a la MIPS for an instruction: it can be anywhere from 1 to 15 bytes. The encoding is optimized for a time when memory was expensive compared to processing power. For extra fun, the opcodes are in no logical order, so you can't do something sane like saying the Nth bit means it's a memory write; there are few to no such patterns.
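The decode problem can be illustrated with a toy splitter over a handful of real one-byte-opcode instructions. This is a sketch with a hypothetical, drastically simplified length table; a real x86 decoder also has to parse prefixes, ModRM, and SIB bytes before it knows where an instruction ends:

```python
# Toy x86 (32-bit mode) instruction lengths for a few simple opcodes.
# Illustrative only: real decode needs prefix/ModRM/SIB handling.
LENGTHS = {
    0x90: 1,  # NOP
    0xC3: 1,  # RET
    0xB8: 5,  # MOV EAX, imm32 (opcode + 4 immediate bytes)
    0xE9: 5,  # JMP rel32
}

def split_instructions(code: bytes):
    """Yield one instruction's bytes at a time from a raw byte stream."""
    i = 0
    while i < len(code):
        n = LENGTHS[code[i]]  # length is unknown until the opcode is examined
        yield code[i:i + n]
        i += n

stream = bytes([0x90, 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3])  # nop; mov eax,42; ret
print([insn.hex() for insn in split_instructions(stream)])
# ['90', 'b82a000000', 'c3']
```

The point: even finding instruction boundaries is serial work, because each length depends on the bytes before it. A fixed-width ISA like MIPS lets the hardware fetch and decode several instructions in parallel for free; x86 decoders burn transistors speculating about where boundaries might be.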

And you can't just change opcodes. The opcodes are the binary interface: ceasing to support some, or rearranging them, would break every program ever compiled for x86. So until a new architecture arrives, we're stuck. And seeing how x86-64 is kicking IA64's ass, I don't expect that to happen anytime soon (of course, IA64 didn't help itself by relying on VLIW, and VLIW is not ready for prime time).



Perhaps with massively multicore chips coming along we can eventually dedicate a whole core to interrupt processing, which seems like it should really improve things.

Not really. You might be able to put one core on interrupts, but it would end up idle most of the time. And the other cores would likely still need interrupts of their own, at the very least a timer interrupt for process switching. But really, interrupts are a very infrequent task for most PCs; dedicating a core to them is a waste of resources.

Gabe


--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg
