On Wednesday, 22 October 2014 at 11:10:35 UTC, Ola Fosheim Grøstad wrote:
[snip]

> These kinds of optimizations are very difficult to achieve in a low-level backend, but you really need them in order to do generic programming properly.
>
> A simple start would be not to provide term rewriting as a language feature, but rather to define a vocabulary that is useful for Phobos and hard-code term rewriting for that vocabulary.

I think this is feasible.
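
For concreteness, here is the kind of rule such a vocabulary might hard-code: map fusion over ranges. This is only an illustration; f and g are placeholder functions, and the fused form is what the rewrite would produce.

import std.algorithm.comparison : equal;
import std.algorithm.iteration : map;

int g(int x) { return x + 1; }
int f(int x) { return x * 2; }

void main()
{
    auto r = [1, 2, 3];

    // The two sides such a rule would equate:
    auto chained = r.map!g.map!f;          // two lazy range adapters
    auto fused = r.map!(x => f(g(x)));     // one fused adapter
    assert(equal(chained, fused));
}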

Term rewriting is very interesting, and I believe some work has been done on it in Haskell (GHC's RULES pragma, for example). I don't believe anything has been done with the propositions-and-inference approach you describe. I see a number of problems:

First, annotating these propositions is very hard in the presence of templates. Consider:

auto f(alias g, T)(T x) { return g(x); }

We cannot possibly annotate this function with any of the propositions you described, because we know nothing about g or T. As with purity and nothrow, we'd have to deduce these properties, but most would escape deduction in all but the most trivial cases.
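
For comparison, this is how far today's deduction gets: purity and nothrow are inferred per instantiation of a template (inc here is just an illustrative callee).

import std.traits : FunctionAttribute, functionAttributes;

auto f(alias g, T)(T x) { return g(x); }

int inc(int x) pure nothrow { return x + 1; }

// Each instantiation has its attributes deduced from what g turns out to be:
static assert(functionAttributes!(f!(inc, int)) & FunctionAttribute.pure_);
static assert(functionAttributes!(f!(inc, int)) & FunctionAttribute.nothrow_);

Inference succeeds here only because the body is trivial; a proposition like "g is injective" has no comparable inference path.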

Suppose we could deduce a large subset of useful propositions: how does the programmer know what has been deduced? How can I tell what has been deduced and applied without disassembling the code to see what's actually going on?
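
The nearest thing we have today is dumping the inferred signature by hand, which already doesn't scale (same f and inc as above):

auto f(alias g, T)(T x) { return g(x); }

int inc(int x) pure nothrow { return x + 1; }

// Prints the fully inferred signature at compile time,
// something like: int function(int x) pure nothrow @nogc @safe
pragma(msg, typeof(&f!(inc, int)));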

And even if everything is deduced correctly, and I know what has been deduced, what if the compiler applies a transformation that's undesirable? For example, you changed linear_search to binary_search. What if I knew the element was likely to be at the front and would therefore prefer linear_search?
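
To make that concrete with Phobos's own primitives (find as the linear search, SortedRange.contains via assumeSorted as the binary one):

import std.algorithm.searching : find;
import std.range : assumeSorted;

void main()
{
    auto a = [1, 2, 3, 4, 5, 6, 7, 8];

    // Linear search: O(n) worst case, but O(1) when the needle
    // sits at the front, which is exactly the case I might know about.
    auto hit = find(a, 1);
    assert(hit.length == a.length);  // found at index 0

    // Binary search over the same sorted data: O(log n) always,
    // even in the front-of-array case where linear search wins.
    assert(assumeSorted(a).contains(1));
}

A rewrite that fires unconditionally would take that choice away from me.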

If you have any papers on this kind of work, I'd love to see them.
