Hi,

I wanted to show off some recent findings. Over the summer I wrote a
macro package that compiles a Scheme-like language to C code, including
simple closures (set! and tail calls are not supported). On top of this
tool I built another macro package that does backtracking using CPS. It
all works and is a good tool for investigating the benefits of JIT and
native compilation.

First, a baseline: running the 10-queens example took 0.5s on the VM. If
I remember correctly, that is about 5 times faster than compiled Kanren
(using Chicken Scheme), which is quite good. But compiled gprolog takes
about 0.14s, so there is room to improve.

So I fired up Guile and used the macro packages to output C with CPS and
tail calls. Now the timing was 0.3s (0.6s in the case when the GC kicked
in). This is better, but calls for improvement.

One can notice that the lambdas can be allocated on the stack instead of
on the heap. With a quick, naive implementation of this, the 10-queens
example took 0.14s, the same as compiled gprolog. (Cool: CPS as fast as
compiled gprolog.)

Now, here is the interesting bit. Being able to mix "Scheme", C, and
Prolog is interesting. One notices that the most-called function can be
coded deterministically, without needing backtracking. Implementing this
in the macro package cut the timing to 0.052s. This figure should be a
lower bound for the timing.

So, in this example, being able to compile code in Guile can save us
5-10x. In many cases we should also be able to be faster than pure
Prolog implementations. (Note, however, that pure Prolog implementations
might be able to hook in deterministic C functions as well, but using
Scheme macros as an improved C preprocessor is very powerful.)

Have fun
/Strefan
