At 07:58 97/08/13, Andreas Doering wrote:
>> Computers (=RISC) are not microprogrammable, because it is slow; instead
>>one relies on better compilers.
>The microprogram is usually stored in a very fast RAM. Now let us have a
>look at a modern RISC architecture: it has a very fast RAM, namely the
>on-chip first-level cache. What is the difference between executing a
>microprogram consisting of simple instructions, one of them at every
>clock cycle, and executing a RISC program from the instruction cache --
>none! Thus we still have microprogrammable machines, but the microprogram
>changes dynamically.
My point was that old-style microprogramming is no longer needed, since it
is handled by the compilers, and calling RISC machines microprogrammable
does not add anything new to this aspect. :-)
>When I heard a course on computer algebra, I learned that a nearly optimal
>hardware structure for solving group-related problems like Coxeter-Todd
>coset enumeration is very similar to a modern RISC microprocessor.
This sounds to me like a very good example of unstructured, imperative
mathematics. :-) When you have settled for such a specific problem, you
probably do not have to teach the computer what it is doing.
>Remember the fifth-generation project. Japanese scientists developed a
>Prolog engine. At the same time a French enterprise developed a Prolog
>compiler for the good old 68000. Guess which system was faster, and
>which system was cheaper.
Again, it is very difficult to say which one is objectively better,
because mass consumption makes it possible to put in a large amount of
development effort, and that can easily outweigh any long-term advantages.
>The versatility of modern architectures allows the application of complex
>and fast algorithms (e.g. for garbage collection). In hardware one
>normally applies simpler algorithms which can be executed faster. It
>turned out that in many cases the better algorithms made it.
Well, it is totally unthinkable to implement a program where each
operation is carried out by a parallel thread, but this is how the real
world works. Today's computers are good at handling untyped information,
and in order to get efficiency, typing has to be thrown away.
>I guess that an intensive investigation of hardware support for
>functionally programmed systems will show that there is very little that
>could be added in hardware. And that this little thing does not yield
>very much.
And the same applies to this: it takes longer to copy a dynamic
structure, which may be spread over many memory locations, than a static
one, which sits in a single memory block, for example.
So, I foresee the opposite evolution, where the insights from parallel,
functional, and dynamic programming successively move into the hardware
structure. Getting a good design for a bytecode that can be used with
both Haskell and Erlang, for example, might help push such a development
forward.
Just an opinion. :-)
Hans Aberg