On 4/16/06, Jeff Garzik <[EMAIL PROTECTED]> wrote:

>
> Not really.  This entire thread oversimplifies the differences.
>
> x86 is a vastly different beast from traditional RISC.  Further, modern
> production RISC processors sometimes approach CISC in their complexity.
>   See e.g. the 'sqrt' instruction on any modern RISC processor.
>
> And then there's super-scalar execution differences, out of order
> execution, vastly different TLB behavior, ...

Yes, "Reduced Instruction Set Computer" is a bit of a misnomer these
days.  Perhaps it would be better to call them "Simplified Instruction
Set Computers."  Many aspects of the design (not just instruction
decode) are simplified by having completely uniform instruction
formats.  RISC processors were originally designed around the
pipeline.  That's changed a bit, because the instruction sets are now
a bit more of an abstraction from the hardware, but there are still
distinguishing features between RISC and CISC.

In the late 80's, RISC was seen as the holy grail since it simplified
processor designs and made room for significant improvements in
performance.  With the dominance of superscalar and OOO designs, that
simplification is no longer as much of an advantage.  At the same
time, legacy instruction sets like x86 are even more suboptimal. 
Given the current state of processor designs, can we now design an
instruction set and processor architecture that fits the new model
more directly?

Of course, we may already have those, with names like VLIW and EPIC.

This is kinda off topic, but CPU designs have always fascinated me.
And I wonder what approach we may take to programmable shaders.
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)