On Tue, Mar 13, 2018 at 06:51:01AM +0000, Dmitry Olshansky via Digitalmars-d wrote:
> While writing Pry I soon came to realize that using types to store
> information is a dead end. They are incredibly brittle, especially once
> you start optimizing on them using operations such as “T1 is equivalent
> to T2”, where T1 != T2.
> And slow. Did I mention they are slow? And of course the 64 KB symbol
> names that make no sense anyway.
> A better approach for cases beyond a handful of operators is so-called
> “staging”: a two-stage computation where you first build a blueprint of
> the operation using CTFE. Second, you “instantiate” it and apply it an
> arbitrary number of times. The optimization opportunities between those
> stages are remarkable, and much easier to grasp compared to the “type
> dance”.
> For instance:
> enum Expr!double bluePrint = factor!”a” ^^ factor!”b” % factor!”c”;
> where that Expr is, e.g., a class instance that holds the AST of the
> operation as plain values.
> Now usage:
> alias powMod = bluePrint.instantiate; // here we do optimizations and
> CTFE-based codegen
> powMod(a,b,c); // use as many times as needed
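[A minimal sketch of what such a two-stage setup could look like. The Expr struct, codegen helper, and operand encoding below are hypothetical illustrations, not Pry's actual API:]

```d
// Hedged sketch: a CTFE-built blueprint, instantiated via mixin.
// All names here are hypothetical, not Pry's actual API.
struct Expr
{
    string op;       // operator, e.g. "^^" or "%"
    string lhs, rhs; // operands: names or nested expression strings
}

// Stage 1 (runs at CTFE): turn the blueprint into a code fragment.
string codegen(Expr e)
{
    return "(" ~ e.lhs ~ " " ~ e.op ~ " " ~ e.rhs ~ ")";
}

enum Expr bluePrint = Expr("%", "(a ^^ b)", "c");

// Stage 2: instantiate once, then call as many times as needed.
double powMod(double a, double b, double c)
{
    return mixin(codegen(bluePrint));
}
```

[Optimizations (constant folding, strength reduction, etc.) would run on the Expr value between the two stages, as plain CTFE code rather than template metaprogramming.]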

Wouldn't CTFE-based codegen be pretty slow too?  Until newCTFE is
merged, it would seem to be about as slow as using templates (if not
slower).

> Notation could be improved by using the same expression template idea
> but with polymorphic types at CTFE.
> Another thing is partial specialization:
> alias squareMod = bluePrint.assume({ “b” : 2 }).instantiate;
> Now :
> squareMod(a,c); // should be faster than the elaborate algorithm
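[That partial specialization step could be sketched as a substitution into the blueprint before codegen. The assume helper and blueprint string below are hypothetical, not Pry's actual API:]

```d
// Hedged sketch of partial specialization: fix "b" to 2 in the
// blueprint before generating code. Naive textual substitution is used
// here; a real implementation would substitute into an AST.
string assume(string expr, string name, string value)
{
    import std.array : replace;
    return expr.replace(name, value);
}

enum blueprint = "(a ^^ b) % c";

double squareMod(double a, double c)
{
    return mixin(assume(blueprint, "b", "2")); // (a ^^ 2) % c
}
```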

I think the general idea is a good approach, and it seems that
ultimately we're just reinventing expression DSLs.  Overloading built-in
operators works up to a point, and then you really want to just use a
string DSL, parse that in CTFE, and use mixin to codegen.  That frees
you from the spaghetti template expansions in expression templates, and
also frees you from being limited by built-in operators, precedence, and
associativity.
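[For concreteness, a toy version of that pipeline. compileDsl is a hypothetical stand-in for a real CTFE parser:]

```d
// Hedged sketch: a string DSL compiled at CTFE and mixed in.
// The "parser" here is a trivial pass-through with validation; a real
// one would tokenize, build an AST with its own precedence and
// associativity rules, optimize, and only then emit code.
string compileDsl(string dsl)
{
    assert(dsl.length > 0, "empty DSL expression");
    return dsl; // stand-in for parse + codegen
}

double powMod(double a, double b, double c)
{
    return mixin(compileDsl("(a ^^ b) % c"));
}
```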


Public parking: euphemism for paid parking. -- Flora
