On 10.11.2011 06:43, foobar wrote:
Don Wrote:
On 09.11.2011 16:30, foobar wrote:
Don Wrote:
On 09.11.2011 09:17, foobar wrote:
Don Wrote:
On 07.11.2011 14:13, Gor Gyolchanyan wrote:
After reading
http://prowiki.org/wiki4d/wiki.cgi?DMDSourceGuide
https://github.com/gor-f-gyolchanyan/dmd/blob/master/src/interpret.c
I had a thought:
Why not compile and run CTFE code in a separate executable, write its
output into a file, read that file, and include its contents in the
object file being compiled?
This would cover 100% of D in CTFE, including external function calls
and classes;
String mixins would simply repeat the process of compiling and running
an extra temporary executable.
This would open up immense opportunities for such things as
library-based build systems and tons of unprecedented stuff that I
can't imagine ATM.
First comment: classes and exceptions now work in dmd git. The remaining
limitations on CTFE are intentional.
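For example, a class can now be constructed and used during compile-time
evaluation. A minimal sketch (the Point class is invented for illustration;
assumes a dmd built from current git):

```d
class Point
{
    int x, y;
    this(int x, int y) { this.x = x; this.y = y; }
    int dot(Point o) { return x * o.x + y * o.y; }
}

int dotAt(int a, int b)
{
    auto p = new Point(a, b);       // class allocation inside CTFE
    return p.dot(new Point(b, a));
}

enum r = dotAt(2, 3);               // enum initializer forces CTFE
static assert(r == 12);             // 2*3 + 3*2
```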
With what you propose:
Cross compilation is a _big_ problem. It is not always true that the
source CPU is the same as the target CPU. The most trivial example,
which applies already for DMD 64bit, is size_t.sizeof. Conditional
compilation can magnify these differences. Different CPUs don't just
need different backend code generation; they may be quite different in
the semantic pass. I'm not sure that this is solvable.
version(ARM)
{
immutable X = armSpecificCode(); // you want to run this on an X86???
}
I think we discussed those issues before.
1. size_t.sizeof:
auto a = mixin("size_t.sizeof"); // HOST CPU
auto a = size_t.sizeof; // TARGET CPU
That doesn't work. mixin happens _before_ CTFE. CTFE never does any
semantics whatsoever.
If I wasn't clear before - the above example is meant to illustrate how
multilevel compilation *should* work.
Sorry, I don't understand what you're suggesting here.
If you want, we can make it even clearer by replacing 'mixin' above with
'macro'.
That doesn't help me. Unless it means that what you're talking about
goes far, far beyond CTFE.
Take std.bigint as an example, and suppose we're generating code for ARM
on an x86 machine. The final executable is going to be using BigInt, and
CTFE uses it as well.
The semantic pass begins. The semantic pass discards all of the
x86-specific code in favour of the ARM-specific stuff. Now CTFE runs.
How can it then run natively on x86? All the x86 code is *gone* by then.
How do you deal with this?
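For concreteness, this is the shape of the code in question (a schematic
sketch only, not actual std.bigint source; addWithCarry is a made-up name):

```d
// Schematic of target-specific code selection; not real std.bigint source.
version (X86)
{
    // x86-specific implementation (real code might use inline asm).
    uint addWithCarry(uint a, uint b) { return a + b; }
}
else version (ARM)
{
    // ARM-specific implementation.
    uint addWithCarry(uint a, uint b) { return a + b; }
}
// When compiling for ARM, only the ARM branch survives semantic
// analysis; the x86 branch is discarded, so CTFE has no x86 code
// left to run natively on the host.
```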
What I'm suggesting is to generalize C++'s two-level compilation into arbitrary
N-level compilation.
The model is actually simpler and requires much less language support.
It works like this:
level n: you write *regular* run-time code as a "macro", using the compiler's
public API to access its data structures. You compile it into object-form
plug-ins loadable by the compiler.
level n+1: you write *regular* run-time code. You provide the compiler with the
relevant plug-ins from level n; the compiler loads them and then compiles the
level n+1 code.
Of course, this gives arbitrarily many levels, since you can nest macros.
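A hypothetical sketch of two adjacent levels (every name here - CompilerApi,
MacroPlugin, SizeofMacro, declare - is invented for illustration; no such
plug-in API exists in dmd):

```d
// Level n: a "macro" compiled into a plug-in the compiler loads.
// CompilerApi and MacroPlugin are hypothetical, not real dmd API.
interface CompilerApi
{
    void declare(string code);  // inject declarations into level n+1
}

interface MacroPlugin
{
    void run(CompilerApi api);
}

class SizeofMacro : MacroPlugin
{
    override void run(CompilerApi api)
    {
        // Runs natively on the HOST while the compiler targets the TARGET.
        api.declare("enum hostWordSize = " ~ size_t.sizeof.stringof ~ ";");
    }
}

// Level n+1: regular code, compiled with the plug-in loaded, would see:
//   enum hostWordSize = 8;   // injected by SizeofMacro on a 64-bit host
```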
So to answer your question, I do want to go far beyond CTFE.
The problem you describe above happens because CTFE is *not* a separate
compilation step. With my model, BigInt would be compiled twice - once
for x86 and once for ARM.
OK, that's fair enough. I agree that it's possible to make a language
which works that way. (Sounds pretty similar to Nemerle).
But it's fundamentally different from D. I'd guess about 25% of the
front-end, and more than 50% of Phobos, is incompatible with that idea.
I really don't see how that concept can be compatible with D.
As far as I understand, this scenario is currently impossible - you can't use
BigInt with CTFE.
Only because I haven't made the necessary changes to std.bigint.
CTFE-compatible code already exists. It will use the D versions of the
low-level functions for CTFE. The magic __ctfe variable is the trick
used to retain a CTFE-safe, platform-independent version of the code
alongside the processor-specific code.
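A minimal sketch of the __ctfe idiom (__ctfe is real D; the function mulHi
and its bodies are illustrative, not taken from std.bigint):

```d
uint mulHi(uint a, uint b)
{
    if (__ctfe)
    {
        // Portable D path: runs in the CTFE interpreter on any host,
        // independent of the target CPU.
        return cast(uint)((cast(ulong)a * b) >> 32);
    }
    else
    {
        // Run-time path: free to use processor-specific code.
        // (Shown here as plain D; real code might use inline asm.)
        return cast(uint)((cast(ulong)a * b) >> 32);
    }
}

enum x = mulHi(0x8000_0000, 4);  // compile time: takes the __ctfe branch
static assert(x == 2);           // (0x8000_0000 * 4) >> 32 == 2
```

Note that __ctfe is an ordinary run-time check, not a compile-time constant:
both branches are compiled, and the interpreter simply takes the first one.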
I chose the BigInt example because it's something which people really
will want to do, and it's a case where it would be a huge advantage to
run native code.