On Friday 26 January 2007 19:08, Michael Torrie wrote:
> Sure, but this doesn't matter anymore. Let's face it. They used to say
> RISC was the wave of the future. And they were partially right. But
> now the gap between RISC and CISC has narrowed significantly. Most x86
> chips (AMD or Intel) are really RISC cores with a microcode translator
> that converts the more compact CISC instruction sequences into RISC
> microcode, where it is pipelined, reordered, etc. This gives all the
> advantages of RISC without having to actually force a RISC ISA on the
> compilers and programmers. In effect this means RISC never really
> panned out like everyone thought it would at the higher level. x86_64
> has the advantage of about twice the code density of a 64-bit RISC
> processor with 64-bit instruction words. And even though memory and
> disk space is cheap, this higher density pays off in terms of
> increased cache performance.
Agreed. Today's x86 implementations essentially treat the CISC ISA as a
compressed encoding of the RISC-like microcode that the decode units
translate it into internally. When I hear the term "RISC-based", I think
load-store architecture with a fixed instruction width. I was merely
pointing out that "true RISC-based" (while also a subjective term) still
doesn't apply directly to x86_64 as viewed from the outside.

> On the flip side, the x86 ISA, 64-bit or not, is old, bloated, and full
> of strange anachronisms like memory segmentation and "real mode" garbage.
> It's likely you can still boot MS-DOS on an AMD 64-bit machine. However the
> x86_64 extensions do give us a path forward and perhaps future cpus can drop
> support for older things like real mode, 16 and 32-bit instructions. Who
> knows.

Maybe they'll also deprecate the direct support for BCD numbers... do any
compilers even use it?
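
To make the "CISC as compressed RISC" point concrete, here's a toy C example.
The instruction sequences in the comments are just typical codegen sketched
from memory, not the exact output of any particular compiler:

#include <stdio.h>

/* A simple read-modify-write on memory. */
void add_to_mem(int *p, int x)
{
    /* x86_64 can express this as one variable-length instruction, e.g.
     *     add DWORD PTR [rdi], esi        ; roughly 2 bytes
     * A load-store RISC needs three separate fixed-width instructions:
     *     load r1, [p] ; add r1, r1, x ; store r1, [p]   ; 3 x 4 bytes
     * and the x86 decoder internally cracks its single instruction into
     * roughly those same load/add/store micro-ops anyway.
     */
    *p += x;
}

int main(void)
{
    int v = 40;
    add_to_mem(&v, 2);
    printf("%d\n", v);   /* prints 42 */
    return 0;
}

Same operation either way: one short instruction in memory for x86 versus
three fixed 32-bit words on a classic load-store machine, which is roughly
where the code-density argument above comes from.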
