Vesa Karvonen:
The estimate that a nearly optimal interpreter might be just 2x-10x slower than the language in which the interpreter is implemented is based on the simple observation that such numbers hold for modern interpreters.

David Abrahams:
Heh, I'm having trouble with that extrapolation.

Vesa Karvonen:
Well, I certainly hope that my extrapolation would not be too inaccurate.

I have now implemented a subset of the language I described in my previous post. It turns out that my extrapolation is not completely bogus: it is definitely possible to get reasonable performance from the interpreter. This is especially true when using the built-in operators of the language. For example, arithmetic is basically just as fast in the interpreter as in the Boost.PP library (using BOOST_PP_[ADD|SUB|MUL]). Of course, it is possible to implement faster arithmetic in the preprocessor than the current Boost.PP implementation does, but even against much faster arithmetic the interpreter's speed on simple arithmetic should still be well within the 2x-10x range.
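For reference, the Boost.PP side of that comparison looks roughly like the following (the expression is just an illustration, not one of my measurements):

  #include <boost/preprocessor/arithmetic/add.hpp>
  #include <boost/preprocessor/arithmetic/sub.hpp>
  #include <boost/preprocessor/arithmetic/mul.hpp>

  /* (3 + 4) * (10 - 4), computed entirely during preprocessing. */
  #define RESULT BOOST_PP_MUL(BOOST_PP_ADD(3, 4), BOOST_PP_SUB(10, 4))

  RESULT /* expands to 42 */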


David Abrahams:
For one thing, the languages in which most interpreters are implemented are specifically chosen for their ability to express the efficient and low-level operations needed to make the interpreters go fast -- a property which is manifestly not applicable to the preprocessor.

[..*relative speed*..]

Actually, it is true that the C preprocessor has been intentionally crippled, but it has the basic characteristics of a nearly ideal language for implementing *embedded* interpreters. The main reason is that such an interpreter can easily manipulate C preprocessor data (including macro identifiers and macro parameter lists). Memory management is fully automatic. It is also fairly easy to express limited pattern matching with the C preprocessor, which makes manipulating algebraic or inductive data types, such as syntax trees, straightforward.
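To show the kind of pattern matching I mean, here is a minimal sketch (the macro names are illustrative only, not taken from the interpreter): a datum is written as TAG(args) and is dispatched to a per-tag handler by token pasting.

  /* Dispatch on the constructor tag of a datum written as TAG(args). */
  #define MATCH(datum)     MATCH_ ## datum
  #define MATCH_LEAF(x)    handle_leaf(x)
  #define MATCH_NODE(l, r) handle_node(l, r)

  MATCH(LEAF(42))   /* expands to handle_leaf(42) */
  MATCH(NODE(a, b)) /* expands to handle_node(a, b) */

The pasting MATCH_ ## datum glues MATCH_ onto the leading tag token, and the argument list that follows is then picked up by the selected handler macro.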


...

An interesting feature of the interpreter is that it implements a wide range of control structures. First of all, the interpreter is essentially tail-recursive, which means that recursive loops do not "blow the stack". Like the previously implemented evaluators for the lambda-calculus (BTW, I now have much more efficient versions of those evaluators than the ones I posted earlier to the list), the interpreter also has lexical binding and anonymous functions.

The interpreter has a special form 'exit(expr)', which immediately stops the interpreter and expands to the evaluated value of 'expr'; 'exit' can be used to produce readable error messages when a program detects an error, e.g. in an assertion. The interpreter also directly supports continuations with the special form 'call_cc(f)' (exactly like call/cc in Scheme), and I'm currently finishing direct support for exception handling (i.e. 'try(expr,handler)' and 'throw(expr)' special forms). Finally, the interpreter has a special form 'eval(expr)' (exactly like eval in Scheme).

It has been very easy to implement these features in the interpreter. For example, the support for continuations requires only a couple of macros, and it imposes no performance hit on programs that do not use continuations. The same holds for exception handling. It would probably be practically impossible to provide similar constructs (e.g. exit, exception handling, and continuations) in full generality in a C preprocessor combinator library like Boost.PP.
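As a rough illustration of why that is plausible (this is a toy in continuation-passing style, not the interpreter's actual machinery, and the macro names are made up): when every step of evaluation already receives "the rest of the computation" as an explicit argument, a call_cc-like form only has to hand that argument to the program.

  #include <boost/preprocessor/arithmetic/add.hpp>
  #include <boost/preprocessor/arithmetic/mul.hpp>

  /* Each operation takes an explicit continuation k and passes its result on. */
  #define ADD_CPS(x, y, k) k(BOOST_PP_ADD(x, y))
  #define MUL_CPS(x, y, k) k(BOOST_PP_MUL(x, y))
  #define HALT(r)          r

  /* (3 + 4) * 2, written in continuation-passing style: */
  #define THEN_DOUBLE(sum) MUL_CPS(sum, 2, HALT)
  ADD_CPS(3, 4, THEN_DOUBLE) /* expands to 14 */

In such a scheme, programs that never capture a continuation pay nothing extra, because the continuation argument is threaded through in exactly the same way whether or not call_cc is ever used.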

-Vesa

