On 09.11.2011 16:30, foobar wrote:
Don Wrote:

On 09.11.2011 09:17, foobar wrote:
Don Wrote:

On 07.11.2011 14:13, Gor Gyolchanyan wrote:
After reading

       http://prowiki.org/wiki4d/wiki.cgi?DMDSourceGuide
       https://github.com/gor-f-gyolchanyan/dmd/blob/master/src/interpret.c

I had a thought:
Why not compile and run CTFE code in a separate executable, write its
output into a file, read that file, and include its contents in the
object file being compiled?
This would cover 100% of D in CTFE, including external function calls
and classes. String mixins would simply repeat the process, compiling
and running an extra temporary executable.

This would open up immense opportunities for such things as
library-based build systems and tons of unprecedented stuff that I
can't imagine ATM.
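
To make the idea concrete, a rough sketch of what such unrestricted CTFE
could look like. This does not compile today, since CTFE deliberately
forbids system calls; the std.process names are used purely for
illustration:

import std.process : executeShell;

// Under the proposal, the compiler would build this initializer into a
// real host executable, run it, and splice the captured output into the
// object file being compiled:
enum gitHash = executeShell("git rev-parse HEAD").output;

pragma(msg, "building revision: " ~ gitHash);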

First comment: classes and exceptions now work in dmd git. The remaining
limitations on CTFE are intentional.

With what you propose:
Cross-compilation is a _big_ problem. It is not always true that the
host CPU is the same as the target CPU. The most trivial example,
which already applies to 64-bit DMD, is size_t.sizeof. Conditional
compilation can magnify these differences. Different CPUs don't just
need different backend code generation; they may differ considerably in
the semantic pass. I'm not sure that this is solvable.

version(ARM)
{
      immutable X = armSpecificCode(); // you want to run this on an X86???
}
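
To make the size_t.sizeof point concrete, a minimal sketch (my
illustration, not code from anywhere): the branch below is selected
during the semantic pass using the TARGET's word size, so the CTFE
result already reflects target semantics before a single expression is
interpreted.

// Chosen at semantic time from the TARGET's word size:
static if (size_t.sizeof == 8)
    enum bitsPerWord = 64;
else
    enum bitsPerWord = 32;

// CTFE of this function therefore reflects target semantics; running it
// natively on a host with a different word size would give a different
// answer.
int wordsFor(int nbits) { return (nbits + bitsPerWord - 1) / bitsPerWord; }

static assert(wordsFor(100) == 100 / bitsPerWord + 1);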


I think we discussed those issues before.
1. size_t.sizeof:
auto a = mixin("size_t.sizeof"); // HOST CPU
auto a = size_t.sizeof; // TARGET CPU

That doesn't work. mixin happens _before_ CTFE. CTFE never does any
semantics whatsoever.
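
To illustrate the ordering (a sketch of mine, not from the thread): the
string is pasted into the AST during semantic analysis, so after
expansion the declaration is ordinary code with TARGET semantics; there
is no separate host stage left for CTFE to hook into.

// After semantic expansion this is exactly:
//     auto a = size_t.sizeof;
// i.e. plain target-semantics code; the "HOST CPU" reading has no
// mechanism to attach to.
auto a = mixin("size_t.sizeof");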


If I wasn't clear before: the above example is meant to illustrate how
multilevel compilation *should* work.

Sorry, I don't understand what you're suggesting here.

If you want, we can make it even clearer by replacing 'mixin' above with 
'macro'.
That doesn't help me, unless it means that what you're talking about goes far, far beyond CTFE.

Take std.bigint as an example, and suppose we're generating code for ARM on an x86 machine. The final executable is going to be using BigInt, and CTFE uses it as well. The semantic pass begins. The semantic pass discards all of the x86-specific code in favour of the ARM-specific stuff. Now CTFE runs. How can it then run natively on x86? All the x86 code is *gone* by then. How do you deal with this?
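
A simplified sketch of the situation (hypothetical function, not actual
std.bigint code):

// Hypothetical shape of version-specific code inside a library:
version (X86)
    uint addCarry(ref uint a, uint b) { a += b; return a < b; } // x86 path
else version (ARM)
    uint addCarry(ref uint a, uint b) { a += b; return a < b; } // ARM path

// Cross-compiling for ARM, the version (X86) branch is discarded in the
// semantic pass. Any CTFE that calls addCarry sees only the ARM body, so
// "run CTFE natively on the x86 host" has nothing left to run.
enum carried = { uint x = uint.max; return addCarry(x, 1); }();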



2. The version (ARM) example: this needs clarification of the semantics. Two
possible options are:
a. immutable X = ... will be evaluated on the TARGET, as is the case today;
require 'mixin' to call it on the HOST instead (sketched below). This should
be backwards compatible, since we keep the current CTFE and add support for
multi-level compilation on top.
b. immutable X = ... is run via the new system, which requires the function
"armSpecificCode" to be compiled ahead of time and provided to the compiler
in object form. No platform incompatibility is possible.
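
To pin down the semantics of option (a), a sketch with entirely invented
marker syntax (__hostRun is hypothetical, not a real DMD feature):

version (ARM)
{
    // Unchanged declarations keep today's meaning: the initializer is
    // interpreted with TARGET (ARM) semantics.
    immutable X = armSpecificCode();

    // A hypothetical explicit marker would instead compile and run the
    // initializer natively on the HOST:
    immutable Y = __hostRun!(armSpecificCode)();
}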

I don't see any problems with cross-compilation. It is a perfectly sound
design (a universal Turing machine) and it has been successfully implemented
several times before: Lisp, Scheme, Nemerle, etc. It just requires being a
bit careful with the semantic definitions.

AFAIK all these languages target a virtual machine.


Nope. See http://www.cons.org/cmucl/ for a native Lisp compiler.

That looks to me as if it compiles to a virtual machine IR, then compiles that. The real question is whether the characteristics of the real machine are allowed to affect front-end semantics. Do any of those languages do that?
