On 5/24/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
By changing the interface to better suit the implementation (as RISC designers did), you reduce the energy spent to develop something new. By locking the interface, you lock out a good part of your total energy.
That was the 80's. Even now, RISC is somewhat passé in academic circles because of its own inherent drawbacks. For one thing, it has code-size issues that people would like to overcome. But the bigger problem is this: when RISC processors were first developed, the instruction set was tailored to the underlying architecture, which made it very efficient. But some of the design decisions that made sense then (like branch delay slots) became a liability later, when designers tried to extract more performance using superscalar and superpipelining techniques.

Today, if you want to design a fast special-purpose processor, the favored approach seems to be VLIW, although that has problems worse than RISC's. When it comes to general-purpose processors, though, we're seeing that the translation from the ugly x86 ISA is becoming a smaller and smaller part of the logic on the chip. The more compact instruction set requires smaller instruction caches for the same performance, with few of the old drawbacks, because the decoder can be smart, combining and splitting x86 instructions as necessary.

Back in the 60's and 70's, the approach to designing processors was to develop an instruction set and then design hardware around it. The result is things like the VAX, which has completely orthogonal addressing modes. That's a total waste, since only 3 or 4 of them actually see much use. In the 80's there was a "revolution" of thought in that area, where instruction sets were designed around the hardware. Today, with superscalar and other sorts of multi-issue, super-pipelined designs, we're starting to return to a more abstract view of instruction sets.

In my opinion, the holy grail right now would be an instruction set based on profiling of thousands of existing applications: take the top few percent of instructions and fill in the rest for completeness. (Since the apps would be biased by the architectures they were compiled for, this would have to be an iterative process.)
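To make the profiling idea concrete, here is a minimal sketch (not anything from the thread) of the first step: tally instruction mnemonics across disassembled binaries and measure how much of the dynamic mix a handful of opcodes covers. The embedded listing is a made-up stand-in for real `objdump -d` output; in practice you would feed in disassemblies of thousands of applications.

```python
# Illustrative sketch only: count instruction mnemonics in a disassembly
# listing and report how much of the code the most common ones cover.
# SAMPLE_DISASSEMBLY is a fabricated stand-in for `objdump -d` output.
from collections import Counter

SAMPLE_DISASSEMBLY = """\
  401000: mov  eax, dword ptr [rbp-4]
  401003: add  eax, 1
  401006: mov  dword ptr [rbp-4], eax
  401009: cmp  eax, 10
  40100c: jl   401000
  40100e: call 402000
  401013: mov  eax, 0
  401018: ret
"""

def mnemonic_histogram(listing: str) -> Counter:
    """Count how often each mnemonic appears in a disassembly listing."""
    counts = Counter()
    for line in listing.splitlines():
        parts = line.split()
        # Expect "address: mnemonic operands..."; skip anything else.
        if len(parts) >= 2 and parts[0].endswith(":"):
            counts[parts[1]] += 1
    return counts

def coverage(counts: Counter, top_n: int) -> float:
    """Fraction of all instructions accounted for by the top_n mnemonics."""
    total = sum(counts.values())
    top = sum(c for _, c in counts.most_common(top_n))
    return top / total

hist = mnemonic_histogram(SAMPLE_DISASSEMBLY)
print(hist.most_common(3))   # even this tiny sample is dominated by mov
print(f"top-3 coverage: {coverage(hist, 3):.0%}")
```

A real study would weight by execution counts rather than static occurrences, and iterate as the post notes, since the corpus is biased toward whatever ISA it was compiled for.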
Then different underlying architectures could be designed around this more abstract ISA. It would certainly be easier to translate than x86.

Now back to your regularly scheduled OGP discussion. :)

_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
