On 17.10.2011 01:43, foobar wrote:
foobar Wrote:

Don Wrote:

You're assuming that the compiler can run the code it's generating. This
isn't true in general. Suppose you're on x86, compiling for ARM. You
can't run the ARM code from the compiler.


This is quite possible in Nemerle's model of compilation.
This is the same concept as XSLT - a macro is a high-level transform
from D code to D code.

1. Compile the macro ahead of time into a loadable compiler
module/plugin.
The plugin is compiled for the HOST machine (x86), either by a separate
compiler
or by a cross-compiler that can also compile to its own HOST target.

YES!!! This is the whole point. That model requires TWO backends. One
for the host, one for the target.
That is, it requires an entire backend PURELY FOR CTFE.

Yes, of course it is POSSIBLE, but it is an incredible burden to place
on a compiler vendor.

How does that differ from the current situation? We already have a separate 
implementation of a D interpreter for CTFE.

That's true, but it's quite different from a separate backend.
The CTFE interpreter doesn't have much in common with a backend. It's more like a glue layer: most of what's in there at the moment is doing marshalling and error checking.

Suppose instead that you made calls into a real backend. You'd still need a marshalling layer, first to get the syntax trees into native types for the backend, and second to convert the native types back into syntax trees. The thing is, you can be changing only part of a syntax tree (just one element of an array, for example), so even if you used a native backend, the majority of the code in the CTFE interpreter would remain.

Yes, there is an actual CTFE backend in there, but it's tiny.
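For readers following along, a minimal illustration of what that interpreter evaluates (standard D, nothing beyond the language itself assumed):

```d
// Any function reachable at compile time can be run by the CTFE
// interpreter when its result is required as a constant.
int factorial(int n)
{
    return n <= 1 ? 1 : n * factorial(n - 1);
}

enum f5 = factorial(5);    // 'enum' forces compile-time evaluation
static assert(f5 == 120);  // checked by the compiler, not at run time
```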

I disagree with the second point as well - nothing forces the SAME compiler to 
contain two separate implementations, as is now the case.
In fact, you could compile the macros with a compiler of a different vendor. 
After all, that's the purpose of an ABI, isn't it?
Indeed, it makes the burden on the vendor much smaller, since you remove the 
need for a separate interpreter.

I forgot to mention an additional aspect of this design - it greatly simplifies 
the language, which also reduces the burden on the compiler vendor.
You no longer need to support static constructs like "static if", CTFE, is(), 
pragma, etc. You also gain more capabilities with less effort:
e.g. you could connect to a DB at compile time to validate a SQL query, or use 
regular IO functions from Phobos.
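For reference, the static constructs being proposed for removal look like this in today's D (a plain illustration, not part of the proposal):

```d
// The "static constructs" in question, as D defines them now:
static if (size_t.sizeof == 8)            // compile-time branch
    pragma(msg, "compiling for a 64-bit target");

static assert(is(typeof(1 + 1) == int));  // is() inspects types
enum greeting = "hello"[0 .. 4];          // CTFE: slicing at compile time
```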

Statements of the form "XXX would make the compiler simpler" seem to come up quite often, and I can't remember a single one which was made with much knowledge of where the complexities are! For example, the most complex thing you listed is is(), because is(typeof()) accepts all kinds of things which normally wouldn't compile. This has implications for the whole compiler, and no changes in how CTFE is done would make it any simpler.
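A concrete instance of the is(typeof()) behaviour being referred to, where expressions that would normally be compile errors are silently tolerated:

```d
struct S { void foo() {} }

// is(typeof(expr)) is true iff 'expr' would compile; any errors
// produced while checking it are swallowed rather than reported.
enum hasFoo = is(typeof(S.init.foo())); // true
enum hasBar = is(typeof(S.init.bar())); // false - no error, just false
static assert(hasFoo && !hasBar);
```

This error-swallowing has to be threaded through the whole compiler's semantic analysis, which is why no change to CTFE's implementation would remove that complexity.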

> e.g you could connect to a DB at compile time to validate a SQL query or use regular IO functions from phobos.

You know, I'm yet to be convinced that it's really desirable. The ability to put all your source in a repository and say, "the executable depends on this and nothing else", is IMHO of enormous value. Once you allow it to depend on the result of external function calls, it depends on all kinds of hidden variables, which are ephemeral; it seems to me much better to completely separate that "information gathering" step from the compilation step. Note that since it's a snapshot, you *don't* have a guarantee that your SQL query is still valid by the time the code runs.

BTW there's nothing in the present design which prevents CTFE from being implemented by doing a JIT to native code. I expect most implementations will go that way, but they'll be motivated by speed.

Also I'm confused by this term "macros" that keeps popping up. I don't know why it's being used, and I don't know what it means.
