Timothy Miller wrote:
> Yes, "Reduced Instruction Set Computer" is a bit of a misnomer these
> days.  Perhaps it would be better to call them "Simplified Instruction
> Set Computers."  Many aspects of the design (not just instruction
> decode) are simplified by having completely uniform instruction
> formats.  RISC processors were originally designed around the
> pipeline.  That's changed a bit, because the instruction sets are now
> a bit more of an abstraction from the hardware, but there are still
> distinguishing features between RISC and CISC.

Agreed on all counts.

FWIW, I'm largely trying to combat falsehoods here, not trying to argue for a particular design direction.


> In the late 80's, RISC was seen as the holy grail since it simplified
> processor designs and made room for significant improvements in
> performance.  With the dominance of superscalar and OOO designs, that

I think the "simplified processor design" part is the key for RISC. RISC is far easier for hardware designers to design and validate.


> simplification is no longer as much of an advantage.  At the same
> time, legacy instruction sets like x86 are even more suboptimal.

Justification for the "suboptimal" claim?

IMO, the x86-64 ISA seems to most closely match the operations that a compiler wants to generate. It combines the best of RISC (oodles of registers) with an instruction set that matches the basic operations most programs need.


> Given the current state of processor designs, can we now design an
> instruction set and processor architecture that fits the new model
> more directly?
>
> Of course, we may already have those, with names like VLIW and EPIC.

This always sounds good in theory, but you run into a compiler barrier here. ia64 is a really smart, advanced EPIC architecture, but the compiler technology is still trying to catch up.

If the software isn't capable of fully utilizing the hardware, you've just wasted time and money.


> This is kinda off topic, but CPU designs have always fascinated me. And I wonder what approach we may take to programmable shaders.

IMO, what's ideally needed to answer that question is practical experience (which, again, requires time and money). One needs to work inside a feedback loop:

        1. design the hardware, based on guesses
        2. design the shader JIT, based on initial hardware ISA
        3. profile to see where the hardware spends most of its time,
           based on likely-common usage workloads.
        4. update JIT and hardware to reflect profile data
        5. go to step 3, if you have the time and energy.
        6. get some hardware out to the general public
        7. find out all your workload assumptions were wrong,
           and go back to step 3.  :)

So for OGD, I would recommend the open source way: release early, release often. Design a very simple, just-to-get-going GPU that supports programmable shaders. _Just enough_ to get people working on the software. Ignore everyone's opinions on the mailing list [for now]. Then enter the feedback loop...

        Jeff



_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
