Just a bit of a tutorial on reductions. Consider this reduction:
    reduce revrev[T] (x:list[T]): rev (rev x) => x;

It's obvious what this rule says: there's no point reversing a list twice. Now, no programmer would actually write that. However, consider this situation: we have a function rev_map which maps a function over a list and returns a reversed list. This is the natural (fast) function, since the mapping starts at the head of the list and proceeds down the tail, so the resulting list comes out backwards.

Now let us implement two new functions:

    fun map_rf ... => rev_map ... (rev x)
    fun map_rl ... => rev (rev_map ... x)

These both produce the same result at the same cost: the first one reverses the argument and then maps it, the second does the mapping first and then reverses the result. Now suppose we write:

    var y = map_rf ... (map_rl ... x);

Weird, but it could happen. And now suppose we INLINE the code, so we get

    rev_map ... (rev (rev (rev_map ... x)))

and then you can see that the reduction, if applied, would lead to

    rev_map ... (rev_map ... x)

In other words, the main use of reductions is to simplify code AFTER inlining (there is a concrete sketch of this chain further down). Note that, to be effective, the reductions have to be applied after EACH inlining step, since the pattern to be recognised may be eliminated by further inlining.

And it is not just inlining which can destroy patterns suitable for reductions: monomorphisation can do that too, and usually will. Only monomorphic reductions such as

    reduce revrev (x:list[int]): rev (rev x) => x

have any chance of working (and even this one won't in the current compiler anyhow).

Testing for possible reductions is expensive in the extreme: it's worse than exponential time. Not only does every term and subterm have to be checked, and checked against every possible reduction, this has to be done every time a new term is synthesised, for example by inlining or specialisation. Apart from being expensive and unreliable, reductions can also lead to an infinite loop unless artificially restricted, if the user doesn't ensure they converge.

Unfortunately I see no easy way to monomorphise polymorphic reductions: one would basically have to try to specialise them for EVERY specialisation which is actually used in the program.

SO .. I am thinking to just get rid of reductions, or at least not bother to apply them, or perhaps better, apply them only in limited circumstances, such as just before monomorphisation. The problem is that with the new compiler design the monomorphisation is done first.

To explain: the existing code first does optimisations, including a convolution of inlining and reductions. By convolution I mean the reductions are applied in many places in the hope they might do something. Then the code is monomorphised and the whole optimisation process is applied again. So everything is optimised twice, once polymorphically, and then again monomorphically.

The main reason for this is that typeclasses and abstract types hide optimisation opportunities (such as inlining and reductions). Type class virtual function calls are eliminated by the optimisation and monomorphisation passes too, but optimisations like inlining may not be performed on the first pass because the replacement may occur too late. So it is all done again.

With the new design, the monomorphisation is done first and all virtual functions are eliminated, so the inlining pass can skip virtual function substitution because the virtuals are already gone. And there is then no need for a second optimisation pass.
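Coming back to the rev/rev chain above: since it is easy to get lost in the "..." placeholders, here is a minimal sketch of the same idea, written in OCaml rather than Felix because the OCaml stdlib's List.rev and List.rev_map behave the same way as rev and rev_map do here. The names map_rf, map_rl, y and y_reduced are just the hypothetical ones from the walkthrough, not part of any library:

    (* reverse the argument, then rev_map it: result is in original order *)
    let map_rf f xs = List.rev_map f (List.rev xs)

    (* rev_map first, then reverse the result: same result, same cost *)
    let map_rl f xs = List.rev (List.rev_map f xs)

    (* the odd composition a user might write *)
    let y f g xs = map_rf f (map_rl g xs)

    (* inlining y gives  List.rev_map f (List.rev (List.rev (List.rev_map g xs)));
       the revrev rule then reduces that to the version below, which avoids
       building and traversing two intermediate lists *)
    let y_reduced f g xs = List.rev_map f (List.rev_map g xs)

    (* quick check that the reduction preserves the result *)
    let () = assert (y succ succ [1; 2; 3] = y_reduced succ succ [1; 2; 3])

The point is exactly the one made above: the rev (rev _) pattern only becomes visible after y has been inlined, and applying the reduction at that moment is what eliminates the two back-to-back reversals.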
A further benefit of the new design: the code, currently polymorphic, could be simplified by eliminating the type specialisation bits, since all the type variables are also gone. The result should be almost as efficient at run time and should make the compiler run a bit faster.

--
john skaller
skal...@users.sourceforge.net
http://felix-lang.org