On Thursday, 5 December 2013 at 15:59:08 UTC, Don wrote:
What I said was negligible was:
"The advantage of AST macros is that the compiler doesn't need to re-lex and re-parse the result."

It's a negligible benefit because most of the time is spent in the semantic pass (which can take unbounded time), not in the lexing and parsing steps (which always take time O(n), where n is the length of the source code).

I see. I thought you were saying that the concept of not having to rewrite the D compiler was negligible, which, of course, is false. The speed advantage _is_ negligible, but ultimately that's a minor point.
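To make the quoted point concrete, here's a minimal illustration of my own (the fib function is hypothetical, not something from this thread): the text the compiler has to re-lex and re-parse is tiny and costs O(n) in its length, while the semantic pass has to actually evaluate the call via CTFE, and that cost has nothing to do with how long the source is.

// Illustration only: parsing this snippet is proportional to its
// (tiny) textual length, but semantic analysis must run fib(25)
// through CTFE -- work unrelated to the length of the source.
long fib(long n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }
enum answer = fib(25);           // forced to run during semantic analysis
static assert(answer == 75_025);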

Actually, everything can be done in a library. Especially when we switch to the frontend written in D, the library and the compiler source can be the same.

Hopefully it'll get us to where we need to go.
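And as for "everything can be done in a library": that's essentially what string mixins plus CTFE already provide. A minimal sketch of my own (genProperties and Point are invented names, not anything from this thread) of a "macro" that lives entirely in library code:

// genProperties is an ordinary function; the compiler runs it via CTFE
// and splices its result back in with a string mixin.
string genProperties(string[] names)
{
    string code;
    foreach (name; names)
        code ~= "int " ~ name ~ ";\n";
    return code;
}

struct Point
{
    // The generated string gets re-lexed and re-parsed right here --
    // the (cheap) step that AST macros would skip.
    mixin(genProperties(["x", "y", "z"]));
}

static assert(Point.sizeof == 3 * int.sizeof);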

But I don't see the point of it being identical to D.

It doesn't _need_ to be, strictly speaking. But a program written in C, some C++, D, SQL, and some CompileTimeScripting Language is inherently more difficult to deal with than one limited to C, C++, and mostly D (with the D generating the necessary SQL and covering what the CTS-L would otherwise be needed for). Considering that the CTS-L would be unique to D (and possibly even to your project), that's quite an advantage.
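For the SQL case specifically, the "compile-time scripting language" really is just D. A rough sketch under my own assumptions (insertStatementFor and User are invented for illustration; a real schema mapping would be more involved):

// An ordinary function, evaluated by CTFE, that emits the SQL text
// for a record type -- no separate compile-time language involved.
import std.traits : FieldNameTuple;

string insertStatementFor(T)(string table)
{
    string cols, vals;
    foreach (i, name; FieldNameTuple!T)
    {
        if (i > 0) { cols ~= ", "; vals ~= ", "; }
        cols ~= name;
        vals ~= "?";
    }
    return "INSERT INTO " ~ table ~ " (" ~ cols ~ ") VALUES (" ~ vals ~ ");";
}

struct User { int id; string name; }

// Produced during compilation, by the same language as the rest of
// the program.
enum sql = insertStatementFor!User("users");
static assert(sql == "INSERT INTO users (id, name) VALUES (?, ?);");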

Remember that it would have to be "more powerful" than an arbitrary chunk of source code text. I don't see how that could possibly be true.

Why would it have to be? Don't conflate this with the concept of being "equivalent to a Turing machine". We're not getting it because it would make it possible to change the meaning of 1+1 to 3. If it were more limited ("less powerful"), then it would be something that wouldn't have been rejected, but it would potentially be far less useful.
