[EMAIL PROTECTED] wrote:

Leo --

... Optable build time is not a function of program size, but rather of optable size

Ok, I see that, but ...


I don't think it remains a problem how to run ops from different oplibs _fast_.

.... The problem is that, as soon as there are dynamic oplibs, their ops can't be run in the CGoto core, which is normally the fastest core whenever execution time depends on opcode dispatch time. JIT is (much) faster for almost-integer-only code, e.g. mops.pasm, but for more complex programs involving PMCs, JIT is currently slower.

... Op lookup is already fast ...

I rewrote find_op to build a lookup hash at runtime, when it's first needed. This is 2-3 times faster than the find_op based on the static lookup table in the core_ops.c file.


... After the preamble, while the program is running, the cost of having a dynamic optable is absolutely *nil*, whether the ops in question were statically or dynamically loaded (if you don't see that, then either I'm very wrong, or I haven't given you the right mental picture of what I'm talking about).

The cost is only almost *nil* if program execution time doesn't depend on opcode dispatch time. E.g. mops.pasm spends ~50% of its execution time in cg_core (i.e. the computed-goto core); running the normal fast_core instead slows it down by ~30%.

This might or might not hold for real-life applications, but I hope that the optimizer will bring average programs close to the above ratios.

Nevertheless I see the need for dynamic oplibs. If e.g. a program pulls in obscure.ops, it can just as well pay the penalty for using these ops.


Regards,

-- Gregor

leo

