that sounds sensible; but let me poke at the "havoc" thing a bit... I have heard this stated informally several times. Is there some source of related measurement information? Given that inline caching was introduced to improve performance (and is still in use), it would be interesting to see some actual benchmark results that nail this down.
My knowledge of CPU (hardware) memory caches comes from Ulrich Drepper's paper on the topic:

        http://people.redhat.com/drepper/cpumemory.pdf

There are probably other papers out there more specific to implementing very late bound languages, but this isn't an area I've looked at much.


Related question: does threaded interpretation still make sense these days, what with all those sophisticated branch prediction units around? Again: are there reliable sources?
Oh, by "multi-threaded" I meant multiple threads of execution running the same machine code (as in POSIX threads), not threaded interpretation (as in one of the ways to implement Forth-like languages). If your inline caches are changing the actual machine code, and multiple threads are executing that same machine code, you can end up with race conditions if you're not very careful.

But you might find the answer to your question in the work Anton Ertl (and others) have done:

     http://www.complang.tuwien.ac.at/projects/forth.html



Your suggestions sound worthwhile, thanks a lot; I will have a look at the places in the source code you mentioned. It seems you have forgotten that "actives" example you announced, though. ;-)

Yes, I forgot, sorry.  I've sent another note about actives.

-gavin...



_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
