Right, so, yes .. I should blog this .. yes, I've set up a blog on Blogger
as an experiment (that's a Google system .. hopefully they can index their
own blogs .. ) .. no, it's not ready for prime time, but the blog name is
"skalrants" :)
[I may add a separate blog focussed on Felix. I don't use the sign-in account
on Google for real email or use Google+, so YMMV]

So, a reduction is something like this:

reduce mapmap (x:list[int], f1:int->int, f2:int->int) :
   map f1 (map f2 x) => map (f1 \circ f2) x
;

where \circ is the new symbol for forward function composition.

This reduction says: whenever you see code like:

var x3 = map (add 1) (map (add 1) x1);

replace it with this:

var x3 = map add2 x1;

where

        fun add2 (x:int) => ((add 1) \circ (add 1)) x;

which just means:

        fun add2 (x:int) => add 1 (add 1 x);

The effect of the reduction is to replace two maps over a list,
each producing a new list, with a single map using the composed
mapping function. This roughly halves the time taken, assuming
memory management of the lists dominates the computation.
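
A quick sanity check of the law, in Haskell rather than Felix
(note Haskell's (.) composes backwards, which doesn't matter here
since both functions are the same):

        main :: IO ()
        main = do
          let x1 = [1, 2, 3] :: [Int]
              add n m = n + m
          -- two passes, building an intermediate list:
          print (map (add 1) (map (add 1) x1))
          -- one pass, mapping the composed function:
          print (map (add 1 . add 1) x1)
          -- both print [3,4,5]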

Let's go back and read the rule out:

        For all lists of ints,
 
        If you see two maps applied to such a list,
        with arguments f1, f2,

        replace the maps with a single map with
        argument the composition of the functions.


The rule as stated is an axiom; however, unlike a normal axiom,
reductions are operational. As noted previously they're expensive
to apply, so I may need to add a switch to the compiler to turn them
on or off. This would also be useful for checking whether they're actually
being applied (by performance measurement), without which testing
is impossible, since reductions are meant to be performance
improvements which don't impact semantics.

Care must be taken that with reductions you don't end up in
an infinite loop: all reduction chains should terminate
in an irreducible pattern. It is NOT possible to enforce
this in the compiler (the problem is undecidable in general).

Haskell also supports reductions, via GHC's rewrite RULES pragmas.
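
For comparison, a minimal GHC sketch (I define a local map so the
rule stays clear of the Prelude's built-in fusion rules):

        module MapMap where

        import Prelude hiding (map)

        -- a local map, so the rule below is ours to control
        map :: (a -> b) -> [a] -> [b]
        map _ []       = []
        map f (x : xs) = f x : map f xs
        {-# NOINLINE map #-}

        -- the analogue of the Felix mapmap reduction, straight
        -- out of the GHC user's guide:
        {-# RULES
        "map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
          #-}

GHC, like Felix, makes no attempt to prove rules terminate: a rule
such as

        {-# RULES "commute"  forall x y.  add x y = add y x #-}

matches its own output, so the simplifier would rewrite forever.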

Reductions don't always "take". As you can see from the limited
syntax, at the moment they also apply only to expressions:
you cannot match, for example, sequences of statements.

Reductions use lazy evaluation, that is, substitution: the matched
subterms are substituted textually into the replacement expression.
However this is easy to "fix" by simply reducing to a function call,
so the arguments are evaluated once at the call site. Note the LHS
of a reduction is called a "redex".
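
In GHC RULES form the same trick is making the right-hand side a
call to a named worker (mapFused is a name I've made up; in lazy
Haskell this changes nothing semantically, it just shows the shape):

        -- the RHS is a call, so the matched arguments are bound
        -- as ordinary parameters rather than substituted textually
        mapFused :: (b -> c) -> (a -> b) -> [a] -> [c]
        mapFused f g xs = map (f . g) xs

        {-# RULES
        "map/map/call"  forall f g xs.  map f (map g xs) = mapFused f g xs
          #-}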

One of the problems with reductions is premature optimisation.
For example suppose:

        fun map[T] (f:T->T) (x:list[T]) => rev (revmap f x);

This says: to do a map, do a reverse map first, then
reverse the result. This is a common implementation because
revmap in eager languages is tail recursive whereas a naive map is not.
The function above is a programmed reduction, that is, a reduction
optimisation which is applied manually: it replaces the non-tail-recursive
operation of a naive map implementation with two tail recursive
operations -- at the cost of creating a temporary list on the heap
instead of frames on the stack.
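
The same structure sketched in Haskell (purely illustrative:
laziness means Haskell doesn't actually need this trick, but the
shape is the one eager languages use):

        -- tail-recursive reverse map: the recursive call to go is
        -- in tail position, accumulating results in reverse order
        revmap :: (a -> b) -> [a] -> [b]
        revmap f = go []
          where
            go acc []       = acc
            go acc (x : xs) = go (f x : acc) xs

        -- map as "reverse map, then reverse": two tail-recursive
        -- passes, at the cost of the intermediate reversed list
        mapViaRev :: (a -> b) -> [a] -> [b]
        mapViaRev f xs = reverse (revmap f xs)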

Now, the problem is that if we inline this function first,
we have eliminated the "map" call which the reduction is
looking for: the reduction only works if it is applied first.

On the other hand this works:

val x2 = map (add 1) x1; 
var x3 = map (add 1) x2; 

but only because the optimiser replaces x2 with its definition
before the reduction scan is done. Actually Felix applies reduction
searches multiple times during optimisation, but it doesn't
do them for *every* change to every term (it's not even clear what
that would mean, since instruction lists are optimised by a functional
method, so you just get inputs and outputs, no "changes" as such).



So now the issue becomes: given a large set of possible reductions, which
always exists for data types like list, matrix, etc, which ones do you code
as reductions (applied automatically), which as axioms (never applied,
but documenting semantics), and which as ordinary functions (applied
by hand)?

For lists, there are lots of well-known reductions.
Of course there's

        x.rev.rev => x
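
which in GHC RULES form would be (glossing over strictness:
reverse (reverse xs) forces the spine of xs while plain xs does not):

        {-# RULES
        "rev/rev"  forall xs.  reverse (reverse xs) = xs
          #-}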

A more famous one is "map-reduce":

        reduce mapreduce[T,U,V]
        (
                x:list[T],      // the list
                init:V,         // initial value of the accumulator
                m:T->U,         // mapping function
                f:V->U->V       // accumulation function
        ) :
        fold_left f init (map m x) =>
                fold_left (fun (acc:V) (elt:T) => f acc (m elt)) init x
        ;

This is the same in principle as our mapmap: just fold with
a composite function built from the original mapping function m
and the original accumulator function f.
        
This eliminates building an extra list. Of course there's a fold_right
too .. things are getting complex now .. :-)
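
The fold_left case checked quickly in Haskell (both lines print
110; the second builds no intermediate list):

        main :: IO ()
        main = do
          let xs = [1 .. 10] :: [Int]
              m = (* 2)               -- mapping function
              f acc u = acc + u       -- accumulation function
          -- unfused: map builds a list which is then folded
          print (foldl f 0 (map m xs))
          -- fused: one pass over the original list
          print (foldl (\acc e -> f acc (m e)) 0 xs)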

A fold_right can be replaced by a fold_left if you can "reverse" the
function. If the function is associative and commutative (such as addition)
you can just replace fold_right with fold_left.
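
For example, in Haskell:

        -- (+) is associative and commutative, so on a finite list
        -- both folds give 5050 regardless of bracketing or order:
        sumR, sumL :: Int
        sumR = foldr (+) 0 [1 .. 100]
        sumL = foldl (+) 0 [1 .. 100]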

At this stage Felix provides no way to tell whether a function is associative
or commutative -- even if its type is in a type-class for which that is the
case!

The workaround here is simple: put the reduction in the type-class.
Since it is then stated in terms of a type parameter, it can only be
applied to instances of the type-class.

--
john skaller
skal...@users.sourceforge.net
http://felix-lang.org



