Thanks for the interest.  As Meikel said, the point is to move the
computation to compile time.  But it goes a little deeper than that.
You can imagine any program as a set of operations on data.  Some of
this data is known at compile time and some is only known at run
time.  Partial evaluation tries to produce a simpler, faster program
that operates on just the remaining run-time data.
So (par-eval (+ x (fibb 30))) becomes (+ x 832040).
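A minimal sketch of the idea (using a hypothetical fold-constant
macro, not the real par-eval): when a subform contains no run-time
data, it can simply be evaluated during macro expansion.

```clojure
(defn fibb [n]
  (if (< n 2) n (+ (fibb (- n 1)) (fibb (- n 2)))))

;; Hypothetical toy: assumes the given form is fully known at compile
;; time, so it can be evaluated during macro expansion and replaced
;; with the resulting constant.
(defmacro fold-constant [form]
  (eval form))

;; Expands to (+ x 832040); fibb runs once, during compilation.
(defn plus-fibb [x]
  (+ x (fold-constant (fibb 30))))
```

The real work in a partial evaluator is deciding *when* a form is
fully known; this toy just assumes it.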

Right now par-eval will simplify within functions, but the real power
comes from crossing the function boundary (I'm thinking about how to
do it efficiently).
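To make "crossing the function boundary" concrete, here is the rewrite
such an evaluator would perform, traced by hand (double-it is a made-up
example function):

```clojure
(defn fibb [n]
  (if (< n 2) n (+ (fibb (- n 1)) (fibb (- n 2)))))

(defn double-it [x] (+ x x))

;; The rewrite a cross-function partial evaluator would perform:
;;   (double-it (fibb 30))
;;   => inline the body of double-it:  (+ (fibb 30) (fibb 30))
;;   => fold the known subterms:       (+ 832040 832040)
;;   => residual program:              1664080
(def residual 1664080)

;; Sanity check that the hand-derived residual matches the original:
(assert (= residual (double-it (fibb 30))))
```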

As for Nicolas' concerns about dynamic vars and space, I have a mixed
message.  It is not possible to know about any dynamic bindings
established outside the macro, except those established at the top
level.  Since nearly every macro and function is bound at the top
level, I must rely on dynamic bindings to make any simplifications.
However, my next version should include some processing of dynamic
bindings within the macro.  If the value of the binding cannot be
determined at compile time, then the var will be marked unavailable
for use within that binding.
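A small illustration of that rule, using a hypothetical dynamic var
*scale* (these names are mine, not part of par-eval):

```clojure
(def ^:dynamic *scale* 10)

(defn scaled [x] (* *scale* x))

;; The root binding of *scale* is established at the top level, so a
;; partial evaluator could in principle fold (scaled 3) to 30.  But
;; inside this binding the value of *scale* depends on the argument s,
;; which is only known at run time, so *scale* would be marked
;; unavailable for simplification within the binding's body:
(defn runtime-scaled [s x]
  (binding [*scale* s]
    (scaled x)))
```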

At this point, because I am not crossing the function boundary, using
par-eval won't increase code size.  It is true, though, that creating
a specialized version of a function for each of several values of one
parameter can result in much larger (though hopefully faster) code.
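For example (a hand-written sketch of the kind of specialization meant
here, not actual par-eval output):

```clojure
;; A general function: n is only known at run time.
(defn pow [base n]
  (reduce * 1 (repeat n base)))

;; Specialized versions for known values of n.  Each one avoids the
;; loop entirely and is faster, but every extra specialization adds to
;; the total code size.
(defn pow2 [base] (* base base))
(defn pow3 [base] (* base base base))
```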

I imagine my cross-function partial evaluator will create macros
similar to what you'd get from using the clojure-contrib template
macros.

Please take a look at the code and let me know what successes and
problems you encounter!

Tim

On Jul 20, 10:46 am, Nicolas Oury <[email protected]> wrote:
> Hi,
>
> it sounds interesting.
> There are some non-trivial interactions between lambda-reductions
> and effects/time.
> For example,
> (defn f [x] (+ x x))
>
> (fn [y] (par-eval (f (do increase-a-counter 1))))
>
> will change the semantics of the code.
>
> On a different level,
> if you duplicate a long computation, the code can get much longer
> (sometimes exponentially).
>
> How do you plan to handle these situations?
>
> Best regards,
>
> Nicolas.
>
> On Mon, Jul 20, 2009 at 3:01 PM, Meikel Brandmeyer <[email protected]> wrote:
>
> > Hi,
>
> > On Jul 20, 3:48 pm, Mark Volkmann <[email protected]> wrote:
>
> > > I'm trying to understand what's going on here and whether the example
> > > above demonstrates a real speed improvement. Isn't it the case that
> > > fibb30* essentially computes the result before the timer on the last
> > > line begins?
>
> > That's essentially the idea. It moves computation from runtime to
> > compilation time. E.g., a form like (let [x 5] (+ x 2)) can be
> > reduced at compilation time to 7 and basically be replaced with
> > that constant, since everything you need to know to calculate the
> > form is known at compile time.
>
> > c't (a German computer magazine) once held a competition.
> > The "fastest" program was written in C. It basically printed a
> > constant, because the whole computation was done by the
> > preprocessor at compile time. :p
>
> > Sincerely
> > Meikel

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to [email protected]
Note that posts from new members are moderated - please be patient with your 
first post.
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/clojure?hl=en
-~----------~----~----~----~------~----~------~--~---