Let me just check my understanding: if a function says it returns a value of type T, it really does return something whose outermost shape is T; however, if that value contains pointers to other things that were stack allocated, those pointers might end up dangling once the stack frame is gone.

@Bearophile: in your example, why is the array heap allocated? Don't you need to use new for arrays?

From the documentation:
"BUGS:
Currently, Algebraic does not allow recursive data types."
... So maybe in the future, I can refactor to that.

It makes sense that union is not type safe. If I have a struct like this

struct F {
  enum Case { case1, case2 }
  Case tag;  // `case` is a keyword in D, so it can't be used as a field name
  int x;
  string y;
  this(int x_in)
  {
    x = x_in;
    tag = Case.case1;
  }
  this(string y_in)
  {
    y = y_in;
    tag = Case.case2;
  }
}

Leaving one of the fields uninitialized seems like bad practice. Is this a sign that I should be using an object-oriented approach, or is there a way to clean this up?
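One way to clean this up without going object oriented is to make the fields private and gate access behind the tag, so a read of the wrong field fails loudly instead of silently returning an uninitialized value. Here is a minimal sketch; the accessor names `x`/`y` and the overlapping `union` storage are my choices, not anything from the thread:

```d
struct F
{
    enum Case { case1, case2 }
    private Case tag;
    private union  // the two payloads overlap; only the tagged one is valid
    {
        int x_;
        string y_;
    }

    this(int x_in)  { x_ = x_in; tag = Case.case1; }
    this(string y_in) { y_ = y_in; tag = Case.case2; }

    // accessors check the tag, so misuse is caught at run time
    int x() const
    {
        assert(tag == Case.case1, "F does not currently hold an int");
        return x_;
    }

    string y() const
    {
        assert(tag == Case.case2, "F does not currently hold a string");
        return y_;
    }
}
```

This keeps the struct approach but recovers most of the safety you'd get from a class hierarchy, at the cost of writing the tag checks yourself.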

I have to admit, I don't understand the mixin/template stuff right now. However, the mixin ADT thing seems pretty sexy, so it might be a good idea to learn enough to understand what is going on there. The problem I have with it is that if it ends up describing a struct in the background, will I have to keep a bunch of conventions straight in my head, or are there lots of utilities for working with this kind of thing (i.e. can I do a case analysis and recurse on subterms)? Are templates considered good practice in D?

Also, would

mixin ADT!q{ Term: Var char | Op char Term[] | Ptr Term*};

be considered valid? If so, it would allow me to create a term t, get its pointer p, and then have
  Op 'g' (Ptr p, Ptr p)
so that in rewriting g(t,t), I only need to rewrite t once.

Suppose a seasoned D programmer were thinking about this problem: would (s)he opt for an object-oriented approach or one using structs? The main point of this data structure is to implement term rewriting. There will probably be a lot of object creation -- especially in building and applying substitution lists. I don't see any real benefit of one over the other for this application.

I tend not to worry too much about performance-critical details, i.e. cutting corners to shave off constants at the expense of losing safety. I prefer a simpler approach as long as I can guarantee that the big-O is the same -- though I try to avoid even logarithmic "blowups" relative to comparable approaches.
