[This is short, since it's a bit late for me. A full, non-cranky
response'll come tomorrow]

On Mon, 8 Oct 2001, V Vinay wrote:

> It is reasonable to expect these two quantities to be within a factor of 2
> or thereabouts.  This measure directly says something about the quality
> of the instruction dispatcher.

Reasonable on the surface, yes. Empirical evidence, however, points to
something more like a factor of 10 slowdown when running an interpreter.
Knowing how most CPUs work, this doesn't surprise me. Dispatching
indirectly, by any means you choose, generally kills branch prediction on
CPUs, causing pipeline flushes, prefetch misses, and other performance
killers. The only reason things aren't worse is the size of the L1 cache.
If we regularly missed that too, well, we might as well pack it in.
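
For the curious, here's roughly what the classic switch-style inner loop
looks like (a made-up sketch with toy opcodes, not anyone's actual
dispatcher). The switch compiles down to an indirect jump whose target
depends on whatever opcode comes next, and that's precisely the branch
the CPU can't predict:

    /* Toy stack machine. The stream {OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
     * OP_HALT} evaluates to 5. */
    typedef enum { OP_PUSH, OP_ADD, OP_HALT } op_t;

    int run(const int *code)
    {
        int stack[64], sp = 0;

        for (;;) {
            switch (*code++) {    /* indirect jump, mispredicted often */
            case OP_PUSH: stack[sp++] = *code++;            break;
            case OP_ADD:  sp--; stack[sp-1] += stack[sp];   break;
            case OP_HALT: return stack[sp-1];
            }
        }
    }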

Also, don't underestimate the speed hit of uncertainty. When the C version
of a program is compiled, the code generator *knows*, with full
certainty, what the code looks like. That, alas, is not the case when
running interpreted code. Even a little peephole optimization buys you a
lot.
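
(To make that concrete: here's the sort of thing even a trivial peephole
pass over a bytecode stream could do, folding "push constant, push
constant, add" into a single pushed constant. The opcodes are made up
for illustration; the point is that a compiler gets to do this once,
offline, where an interpreter that can't see ahead pays the cost on
every run.)

    enum { OP_NOP, OP_PUSH, OP_ADD, OP_HALT };

    /* Fold OP_PUSH k1, OP_PUSH k2, OP_ADD into OP_PUSH (k1+k2).
     * The leftover slots just get NOPped out here; a real pass would
     * compact the stream. */
    void peephole(int *code, int len)
    {
        int i;

        for (i = 0; i + 4 < len; i++) {
            if (code[i]   == OP_PUSH &&
                code[i+2] == OP_PUSH &&
                code[i+4] == OP_ADD) {
                code[i+1] += code[i+3];     /* fold the constants */
                code[i+2]  = OP_NOP;
                code[i+3]  = OP_NOP;
                code[i+4]  = OP_NOP;
            }
        }
    }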

                                        Dan
