Re: How about a 100% CTFE?

2011-11-10 Thread Don

On 10.11.2011 06:43, foobar wrote:

Don Wrote:


On 09.11.2011 16:30, foobar wrote:

Don Wrote:


On 09.11.2011 09:17, foobar wrote:

Don Wrote:


On 07.11.2011 14:13, Gor Gyolchanyan wrote:

After reading

http://prowiki.org/wiki4d/wiki.cgi?DMDSourceGuide
https://github.com/gor-f-gyolchanyan/dmd/blob/master/src/interpret.c

I had a thought:
Why not compile and run CTFE code in a separate executable, write its
output into a file, read that file and include its contents into the
object file being compiled?
This would cover 100% of D in CTFE, including external function calls
and classes;
String mixins would simply repeat the process of compiling and running
an extra temporary executable.

This would open up immense opportunities for such things as
library-based build systems and tons of unprecedented stuff, that I
can't imagine ATM.


First comment: classes and exceptions now work in dmd git. The remaining
limitations on CTFE are intentional.

With what you propose:
Cross compilation is a _big_ problem. It is not always true that the
source CPU is the same as the target CPU. The most trivial example,
which applies already for DMD 64bit, is size_t.sizeof. Conditional
compilation can magnify these differences. Different CPUs don't just
need different backend code generation; they may be quite different in
the semantic pass. I'm not sure that this is solvable.

version(ARM)
{
   immutable X = armSpecificCode(); // you want to run this on an X86???
}



I think we discussed those issues before.
1. size_t.sizeof:
auto a = mixin(size_t.sizeof); // HOST CPU
auto a = size_t.sizeof; // TARGET CPU


That doesn't work. mixin happens _before_ CTFE. CTFE never does any
semantics whatsoever.



If I wasn't clear before - the above example is meant to illustrate how 
multilevel compilation *should* work.


Sorry, I don't understand what you're suggesting here.


If you want, we can make it even clearer by replacing 'mixin' above with 
'macro'.

That doesn't help me. Unless it means that what you're talking about
goes far, far beyond CTFE.

Take std.bigint as an example, and suppose we're generating code for ARM
on an x86 machine. The final executable is going to be using BigInt, and
CTFE uses it as well.
The semantic pass begins. The semantic pass discards all of the
x86-specific code in favour of the ARM-specific stuff. Now CTFE runs.
How can it then run natively on x86? All the x86 code is *gone* by then.
How do you deal with this?



What I'm suggesting is to generalize C++'s two level compilation into arbitrary 
N-level compilation.
The model is actually simpler and requires much less language support.
It works like this:
level n: you write *regular* run-time code as a macro using the compiler's 
public API to access its data structures. You compile into object form plug-ins loadable 
by the compiler.
level n+1 : you write *regular* run-time code. You provide the compiler the 
relevant plug-ins from level n, the compiler loads them and then compiles level 
n+1 code.
of course, this has arbitrary n levels since you could nest macros.

So to answer your question, I do want to go far beyond CTFE.
The problem you describe above happens due to the fact that CTFE is *not* a 
separate compilation step. With my model bigint would be compiled twice - once 
for X86 and another for ARM.
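
To make the proposal concrete, a rough and purely hypothetical sketch of what such a level-n macro plug-in could look like follows; CompilerApi, Ast and expandMyMacro are invented names for illustration only, and dmd exposes no such API.

// Level n: ordinary run-time D code, compiled natively and then loaded by
// the compiler as a plug-in while it compiles the level n+1 code.
// All names here are hypothetical; no such interface exists in dmd.
interface Ast
{
    string toSource();
}

interface CompilerApi
{
    Ast parse(string source);         // reuse the compiler's own parser
    void insertDeclaration(Ast decl); // splice a declaration into the module
}

void expandMyMacro(CompilerApi api, string argument)
{
    // Regular D code manipulating compiler data structures.
    api.insertDeclaration(api.parse("int generated_" ~ argument ~ ";"));
}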


OK, that's fair enough. I agree that it's possible to make a language 
which works that way. (Sounds pretty similar to Nemerle).
But it's fundamentally different from D. I'd guess about 25% of the 
front-end, and more than 50% of Phobos, is incompatible with that idea.

I really don't see how that concept can be compatible with D.


As far as I understand this scenario is currently impossible - you can't use 
bigint with CTFE.


Only because I haven't made the necessary changes to bigint. 
CTFE-compatible code already exists. It will use the D versions of the 
low-level functions for ctfe. The magic __ctfe variable is the trick 
used to retain a CTFE, platform-independent version of the code, along 
with the processor-specific code.
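
For illustration, a minimal sketch of the __ctfe idiom Don describes; the function is invented and std.bigint's real internals differ.

// One portable D path for the CTFE interpreter, one path that a real
// library would replace with processor-specific code for the target.
uint addDigitsDemo(uint a, uint b)
{
    if (__ctfe)
    {
        // Plain D: the interpreter runs this regardless of host or target CPU.
        return a + b;
    }
    else
    {
        // Run time: this is where version blocks would select optimized,
        // processor-specific code (inline asm, intrinsics, ...).
        return a + b;
    }
}

static assert(addDigitsDemo(2, 3) == 5); // exercised through the CTFE branch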


I chose the BigInt example because it's something which people really 
will want to do, and it's a case where it would be a huge advantage to 
run native code.




Re: How about a 100% CTFE?

2011-11-10 Thread foobar
Don Wrote:

 On 10.11.2011 06:43, foobar wrote:
  Don Wrote:
 
  On 09.11.2011 16:30, foobar wrote:
  Don Wrote:
 
  On 09.11.2011 09:17, foobar wrote:
  Don Wrote:
 
  On 07.11.2011 14:13, Gor Gyolchanyan wrote:
  After reading
 
  http://prowiki.org/wiki4d/wiki.cgi?DMDSourceGuide
  
  https://github.com/gor-f-gyolchanyan/dmd/blob/master/src/interpret.c
 
  I had a thought:
  Why not compile and run CTFE code in a separate executable, write its
  output into a file, read that file and include its contents into the
  object file being compiled?
  This would cover 100% of D in CTFE, including external function calls
  and classes;
  String mixins would simply repeat the process of compiling and running
  an extra temporary executable.
 
  This would open up immense opportunities for such things as
  library-based build systems and tons of unprecedented stuff, that I
  can't imagine ATM.
 
  First comment: classes and exceptions now work in dmd git. The 
  remaining
  limitations on CTFE are intentional.
 
  With what you propose:
  Cross compilation is a _big_ problem. It is not always true that the
  source CPU is the same as the target CPU. The most trivial example,
  which applies already for DMD 64bit, is size_t.sizeof. Conditional
  compilation can magnify these differences. Different CPUs don't just
  need different backend code generation; they may be quite different in
  the semantic pass. I'm not sure that this is solvable.
 
  version(ARM)
  {
 immutable X = armSpecificCode(); // you want to run this on an 
  X86???
  }
 
 
  I think we discussed those issues before.
  1. size_t.sizeof:
  auto a = mixin(size_t.sizeof); // HOST CPU
  auto a = size_t.sizeof; // TARGET CPU
 
  That doesn't work. mixin happens _before_ CTFE. CTFE never does any
  semantics whatsoever.
 
 
  If I wasn't clear before - the above example is meant to illustrate how 
  multilevel compilation *should* work.
 
  Sorry, I don't understand what you're suggesting here.
 
  If you want, we can make it even clearer by replacing 'mixin' above with 
  'macro'.
  That doesn't help me. Unless it means that what you're talking about
  goes far, far beyond CTFE.
 
  Take std.bigint as an example, and suppose we're generating code for ARM
  on an x86 machine. The final executable is going to be using BigInt, and
  CTFE uses it as well.
  The semantic pass begins. The semantic pass discards all of the
  x86-specific code in favour of the ARM-specific stuff. Now CTFE runs.
  How can it then run natively on x86? All the x86 code is *gone* by then.
  How do you deal with this?
 
 
  What I'm suggesting is to generalize C++'s two level compilation into 
  arbitrary N-level compilation.
  The model is actually simpler and requires much less language support.
  It works like this:
  level n: you write *regular* run-time code as a macro using the 
  compiler's public API to access its data structures. You compile into 
  object form plug-ins loadable by the compiler.
  level n+1 : you write *regular* run-time code. You provide the compiler the 
  relevant plug-ins from level n, the compiler loads them and then compiles 
  level n+1 code.
  of course, this has arbitrary n levels since you could nest macros.
 
  So to answer your question, I do want to go far beyond CTFE.
  The problem you describe above happens due to the fact that CTFE is *not* a 
  separate compilation step. With my model bigint would be compiled twice - 
  once for X86 and another for ARM.
 
 OK, that's fair enough. I agree that it's possible to make a language 
 which works that way. (Sounds pretty similar to Nemerle).
 But it's fundamentally different from D. I'd guess about 25% of the 
 front-end, and more than 50% of Phobos, is incompatible with that idea.
 I really don't see how that concept can be compatible with D.

I agree that it's pretty different from D but this is how I envision the 
perfect language. One can dream ... :)
I can see that it may be too late for D to adopt such a model even though I 
think it's far superior. As you said, it entails rewriting a lot of code. It's 
a separate question whether it's worth the effort - I think this model 
simplifies the code greatly, but it is a huge change. 

 
  As far as I understand this scenario is currently impossible - you can't 
  use bigint with CTFE.
 
 Only because I haven't made the necessary changes to bigint. 
 CTFE-compatible code already exists. It will use the D versions of the 
 low-level functions for ctfe. The magic __ctfe variable is the trick 
 used to retain a CTFE, platform-independent version of the code, along 
 with the processor-specific code.
 
 I chose the BigInt example because it's something which people really 
 will want to do, and it's a case where it would be a huge advantage to 
 run native code.
 

With this __ctfe trick we already get much closer to my suggested design - it's 
possible to write compile-time only code as long as the code obeys some 

Re: How about a 100% CTFE?

2011-11-09 Thread foobar
Don Wrote:

 On 07.11.2011 14:13, Gor Gyolchanyan wrote:
  After reading
 
   http://prowiki.org/wiki4d/wiki.cgi?DMDSourceGuide
   https://github.com/gor-f-gyolchanyan/dmd/blob/master/src/interpret.c
 
  I had a thought:
  Why not compile and run CTFE code in a separate executable, write its
  output into a file, read that file and include its contents into the
  object file being compiled?
  This would cover 100% of D in CTFE, including external function calls
  and classes;
  String mixins would simply repeat the process of compiling and running
  an extra temporary executable.
 
  This would open up immense opportunities for such things as
  library-based build systems and tons of unprecedented stuff, that I
  can't imagine ATM.
 
 First comment: classes and exceptions now work in dmd git. The remaining 
 limitations on CTFE are intentional.
 
 With what you propose:
 Cross compilation is a _big_ problem. It is not always true that the 
 source CPU is the same as the target CPU. The most trivial example, 
 which applies already for DMD 64bit, is size_t.sizeof. Conditional 
 compilation can magnify these differences. Different CPUs don't just 
 need different backend code generation; they may be quite different in 
 the semantic pass. I'm not sure that this is solvable.
 
 version(ARM)
 {
 immutable X = armSpecificCode(); // you want to run this on an X86???
 }
 

I think we discussed those issues before. 
1. size_t.sizeof: 
auto a = mixin(size_t.sizeof); // HOST CPU
auto a = size_t.sizeof; // TARGET CPU

2. version (ARM) example - this needs clarification of the semantics. Two 
possible options are:
a. immutable X = ... will be performed on the TARGET, as is the case today; 
require 'mixin' to call it on the HOST. This should be backwards compatible since 
we keep the current CTFE and add support for multi-level compilation.
b. immutable X = ... is run via the new system, which requires the function 
armSpecificCode to be compiled ahead of time and provided to the compiler in 
object form. No platform incompatibility is possible. 

I don't see any problems with cross-compilation. It is a perfectly sound design 
(Universal Turing machine) and it was successfully implemented several times 
before: Lisp, Scheme, Nemerle, etc. It just requires being a bit careful with 
the semantic definitions. 


Re: How about a 100% CTFE?

2011-11-09 Thread Gor Gyolchanyan
I knew I wasn't crazy :-)
I knew there would be at least somebody who would see what I meant :-)

On Wed, Nov 9, 2011 at 12:17 PM, foobar f...@bar.com wrote:
 Don Wrote:

 On 07.11.2011 14:13, Gor Gyolchanyan wrote:
  After reading
 
       http://prowiki.org/wiki4d/wiki.cgi?DMDSourceGuide
       https://github.com/gor-f-gyolchanyan/dmd/blob/master/src/interpret.c
 
  I had a thought:
  Why not compile and run CTFE code in a separate executable, write its
  output into a file, read that file and include its contents into the
  object file being compiled?
  This would cover 100% of D in CTFE, including external function calls
  and classes;
  String mixins would simply repeat the process of compiling and running
  an extra temporary executable.
 
  This would open up immense opportunities for such things as
  library-based build systems and tons of unprecedented stuff, that I
  can't imagine ATM.

 First comment: classes and exceptions now work in dmd git. The remaining
 limitations on CTFE are intentional.

 With what you propose:
 Cross compilation is a _big_ problem. It is not always true that the
 source CPU is the same as the target CPU. The most trivial example,
 which applies already for DMD 64bit, is size_t.sizeof. Conditional
 compilation can magnify these differences. Different CPUs don't just
 need different backend code generation; they may be quite different in
 the semantic pass. I'm not sure that this is solvable.

 version(ARM)
 {
     immutable X = armSpecificCode(); // you want to run this on an X86???
 }


 I think we discussed those issues before.
 1. size_t.sizeof:
 auto a = mixin(size_t.sizeof); // HOST CPU
 auto a = size_t.sizeof; // TARGET CPU

 2. version (ARM) example - this needs clarification of the semantics. Two 
 possible options are:
 a. immutable X = ... will be performed on TARGET as is the case today. 
 require 'mixin' to call it on HOST. This should be backwards compatible since 
 we keep the current CTFE and add support for multi-level compilation.
 b. immutable x = ... is run via the new system which requires the function 
 armSpecificCode to be compiled ahead of time and provided to the compiler 
 in an object form. No Platform incompatibility is possible.

 I don't see any problems with cross-compilation. It is a perfectly sound 
 design (Universal Turing machine) and it was successfully implemented several 
 times before: Lisp, scheme, Nemerle, etc.. It just requires to be a bit 
 careful with the semantic definitions.



Re: How about a 100% CTFE?

2011-11-09 Thread Don

On 09.11.2011 09:17, foobar wrote:

Don Wrote:


On 07.11.2011 14:13, Gor Gyolchanyan wrote:

After reading

  http://prowiki.org/wiki4d/wiki.cgi?DMDSourceGuide
  https://github.com/gor-f-gyolchanyan/dmd/blob/master/src/interpret.c

I had a thought:
Why not compile and run CTFE code in a separate executable, write its
output into a file, read that file and include its contents into the
object file being compiled?
This would cover 100% of D in CTFE, including external function calls
and classes;
String mixins would simply repeat the process of compiling and running
an extra temporary executable.

This would open up immense opportunities for such things as
library-based build systems and tons of unprecedented stuff, that I
can't imagine ATM.


First comment: classes and exceptions now work in dmd git. The remaining
limitations on CTFE are intentional.

With what you propose:
Cross compilation is a _big_ problem. It is not always true that the
source CPU is the same as the target CPU. The most trivial example,
which applies already for DMD 64bit, is size_t.sizeof. Conditional
compilation can magnify these differences. Different CPUs don't just
need different backend code generation; they may be quite different in
the semantic pass. I'm not sure that this is solvable.

version(ARM)
{
 immutable X = armSpecificCode(); // you want to run this on an X86???
}



I think we discussed those issues before.
1. size_t.sizeof:
auto a = mixin(size_t.sizeof); // HOST CPU
auto a = size_t.sizeof; // TARGET CPU


That doesn't work. mixin happens _before_ CTFE. CTFE never does any 
semantics whatsoever.
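
A small example of that ordering in today's D (the names are made up): the string handed to mixin is fixed during the semantic pass, and only afterwards can CTFE evaluate anything built from it.

enum generated = "int square(int x) { return x * x; }";
mixin(generated);        // expanded while the semantic pass runs

enum nine = square(3);   // CTFE evaluates this later, against target semantics
static assert(nine == 9);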




2. version (ARM) example - this needs clarification of the semantics. Two 
possible options are:
a. immutable X = ... will be performed on TARGET as is the case today. require 
'mixin' to call it on HOST. This should be backwards compatible since we keep 
the current CTFE and add support for multi-level compilation.
b. immutable x = ... is run via the new system which requires the function 
armSpecificCode to be compiled ahead of time and provided to the compiler in 
an object form. No Platform incompatibility is possible.

I don't see any problems with cross-compilation. It is a perfectly sound design 
(Universal Turing machine) and it was successfully implemented several times 
before: Lisp, scheme, Nemerle, etc.. It just requires to be a bit careful with 
the semantic definitions.


AFAIK all these languages target a virtual machine.





Re: How about a 100% CTFE?

2011-11-09 Thread foobar
Don Wrote:

 On 09.11.2011 09:17, foobar wrote:
  Don Wrote:
 
  On 07.11.2011 14:13, Gor Gyolchanyan wrote:
  After reading
 
http://prowiki.org/wiki4d/wiki.cgi?DMDSourceGuide
https://github.com/gor-f-gyolchanyan/dmd/blob/master/src/interpret.c
 
  I had a thought:
  Why not compile and run CTFE code in a separate executable, write its
  output into a file, read that file and include its contents into the
  object file being compiled?
  This would cover 100% of D in CTFE, including external function calls
  and classes;
  String mixins would simply repeat the process of compiling and running
  an extra temporary executable.
 
  This would open up immense opportunities for such things as
  library-based build systems and tons of unprecedented stuff, that I
  can't imagine ATM.
 
  First comment: classes and exceptions now work in dmd git. The remaining
  limitations on CTFE are intentional.
 
  With what you propose:
  Cross compilation is a _big_ problem. It is not always true that the
  source CPU is the same as the target CPU. The most trivial example,
  which applies already for DMD 64bit, is size_t.sizeof. Conditional
  compilation can magnify these differences. Different CPUs don't just
  need different backend code generation; they may be quite different in
  the semantic pass. I'm not sure that this is solvable.
 
  version(ARM)
  {
   immutable X = armSpecificCode(); // you want to run this on an X86???
  }
 
 
  I think we discussed those issues before.
  1. size_t.sizeof:
  auto a = mixin(size_t.sizeof); // HOST CPU
  auto a = size_t.sizeof; // TARGET CPU
 
 That doesn't work. mixin happens _before_ CTFE. CTFE never does any 
 semantics whatsoever.
 

If I wasn't clear before - the above example is meant to illustrate how 
multilevel compilation *should* work. 
If you want, we can make it even clearer by replacing 'mixin' above with 
'macro'.

 
  2. version (ARM) example - this needs clarification of the semantics. Two 
  possible options are:
  a. immutable X = ... will be performed on TARGET as is the case today. 
  require 'mixin' to call it on HOST. This should be backwards compatible 
  since we keep the current CTFE and add support for multi-level compilation.
  b. immutable x = ... is run via the new system which requires the function 
  armSpecificCode to be compiled ahead of time and provided to the compiler 
  in an object form. No Platform incompatibility is possible.
 
  I don't see any problems with cross-compilation. It is a perfectly sound 
  design (Universal Turing machine) and it was successfully implemented 
  several times before: Lisp, scheme, Nemerle, etc.. It just requires to be a 
  bit careful with the semantic definitions.
 
 AFAIK all these languages target a virtual machine.
 

Nope. See http://www.cons.org/cmucl/ for a native lisp compiler. 



Re: How about a 100% CTFE?

2011-11-09 Thread Don

On 09.11.2011 16:30, foobar wrote:

Don Wrote:


On 09.11.2011 09:17, foobar wrote:

Don Wrote:


On 07.11.2011 14:13, Gor Gyolchanyan wrote:

After reading

   http://prowiki.org/wiki4d/wiki.cgi?DMDSourceGuide
   https://github.com/gor-f-gyolchanyan/dmd/blob/master/src/interpret.c

I had a thought:
Why not compile and run CTFE code in a separate executable, write its
output into a file, read that file and include its contents into the
object file being compiled?
This would cover 100% of D in CTFE, including external function calls
and classes;
String mixins would simply repeat the process of compiling and running
an extra temporary executable.

This would open up immense opportunities for such things as
library-based build systems and tons of unprecedented stuff, that I
can't imagine ATM.


First comment: classes and exceptions now work in dmd git. The remaining
limitations on CTFE are intentional.

With what you propose:
Cross compilation is a _big_ problem. It is not always true that the
source CPU is the same as the target CPU. The most trivial example,
which applies already for DMD 64bit, is size_t.sizeof. Conditional
compilation can magnify these differences. Different CPUs don't just
need different backend code generation; they may be quite different in
the semantic pass. I'm not sure that this is solvable.

version(ARM)
{
  immutable X = armSpecificCode(); // you want to run this on an X86???
}



I think we discussed those issues before.
1. size_t.sizeof:
auto a = mixin(size_t.sizeof); // HOST CPU
auto a = size_t.sizeof; // TARGET CPU


That doesn't work. mixin happens _before_ CTFE. CTFE never does any
semantics whatsoever.



If I wasn't clear before - the above example is meant to illustrate how 
multilevel compilation *should* work.


Sorry, I don't understand what you're suggesting here.


If you want, we can make it even clearer by replacing 'mixin' above with 
'macro'.
That doesn't help me. Unless it means that what you're talking about 
goes far, far beyond CTFE.


Take std.bigint as an example, and suppose we're generating code for ARM 
on an x86 machine. The final executable is going to be using BigInt, and 
CTFE uses it as well.
The semantic pass begins. The semantic pass discards all of the 
x86-specific code in favour of the ARM-specific stuff. Now CTFE runs. 
How can it then run natively on x86? All the x86 code is *gone* by then. 
How do you deal with this?





2. version (ARM) example - this needs clarification of the semantics. Two 
possible options are:
a. immutable X = ... will be performed on TARGET as is the case today. require 
'mixin' to call it on HOST. This should be backwards compatible since we keep 
the current CTFE and add support for multi-level compilation.
b. immutable x = ... is run via the new system which requires the function 
armSpecificCode to be compiled ahead of time and provided to the compiler in 
an object form. No Platform incompatibility is possible.

I don't see any problems with cross-compilation. It is a perfectly sound design 
(Universal Turing machine) and it was successfully implemented several times 
before: Lisp, scheme, Nemerle, etc.. It just requires to be a bit careful with 
the semantic definitions.


AFAIK all these languages target a virtual machine.



Nope. See http://www.cons.org/cmucl/ for a native lisp compiler.


That looks to me as if it compiles to a virtual machine IR, then 
compiles that. The real question is whether the characteristics of the 
real machine are allowed to affect front-end semantics. Do any of those 
languages do that?




Re: How about a 100% CTFE?

2011-11-09 Thread Don

On 07.11.2011 17:00, Gor Gyolchanyan wrote:

Well and somefunction? It does modify the value of a too. Is it executed 
before? After? What is the value at the end of all that?


Obviously it will be incremented first.
The order is dependent of the rules by which the conditions are
evaluated at compile-time. For example, the compiler will build a
depth-first list of the import tree and evaluate code sequentially in
each module.


This is not what it does now. At present, the order of compile-time 
evaluation is not defined; DMD currently does it vaguely in lexical 
order but that is planned to change in the near future. 'static if' and 
'mixin' will be evaluated in lexical order, before anything else is 
done. Afterwards, everything else will be evaluated on-demand.


Apart from the static if/mixin pass, compilation can proceed in 
parallel (though the current implementation doesn't yet do this) which 
means there's no ordering (multiple items may complete compilation 
simultaneously).


Allowing globals to be modified at compile time would destroy this.
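
A small illustrative example of why that restriction keeps the order irrelevant: since CTFE cannot read or write mutable globals, each condition below yields the same result no matter when (or on which thread) the compiler evaluates it.

int twice(int x) pure { return 2 * x; }

static if (twice(2) == 4) { enum flagA = true; }
static if (twice(3) == 6) { enum flagB = true; }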


Re: How about a 100% CTFE?

2011-11-09 Thread Ary Manzana

On 11/8/11 10:44 AM, deadalnix wrote:

On 08/11/2011 14:31, Gor Gyolchanyan wrote:

Well except that module can modify a and not be in the tree.


A single compilation includes only two parts: compiling D source code
and linking in object files or static libraries. object files and
static libraries are by no means involved in compile-time activity,
which leaves us with with compiling D source code. A single
compilation of D source code can be viewed as a list of import trees,
originating from the enlisted modules to be compiled. After
eliminating duplicates we have a list of small trees, which can be
rooted together to form a single tree, which in turn can be processed
as described above. Optionally, in order to allow separate
compilations, a cache can be maintained to hold the results of
previous compile-time computations to be used in the next ones inside
a project (to be defined).



module a;

int a = 0;

---

module b;

import a;

int somefunction() {
return ++a;
}

static assert(somefunction() == 1);

---

module c;

import a;

int somefunction() {
return ++a;
}

static assert(somefunction() == 1);


The answer here is: who cares?

Is your point to prove that you can make some code that is useless? Why 
not make something useful with CTFE?


It's sad to see people come here proposing great ideas (I think CTFE 
beyond what D is capable of is great, and I'm implementing that in my own 
language) and other people come with nonsense useless examples to ask 
"What happens here?"


Use the tool to make magic, not to show how you can get undefined 
behaviour...


Re: How about a 100% CTFE?

2011-11-09 Thread foobar
Don Wrote:

 On 09.11.2011 16:30, foobar wrote:
  Don Wrote:
 
  On 09.11.2011 09:17, foobar wrote:
  Don Wrote:
 
  On 07.11.2011 14:13, Gor Gyolchanyan wrote:
  After reading
 
 http://prowiki.org/wiki4d/wiki.cgi?DMDSourceGuide
 
  https://github.com/gor-f-gyolchanyan/dmd/blob/master/src/interpret.c
 
  I had a thought:
  Why not compile and run CTFE code in a separate executable, write its
  output into a file, read that file and include its contents into the
  object file being compiled?
  This would cover 100% of D in CTFE, including external function calls
  and classes;
  String mixins would simply repeat the process of compiling and running
  an extra temporary executable.
 
  This would open up immense opportunities for such things as
  library-based build systems and tons of unprecedented stuff, that I
  can't imagine ATM.
 
  First comment: classes and exceptions now work in dmd git. The remaining
  limitations on CTFE are intentional.
 
  With what you propose:
  Cross compilation is a _big_ problem. It is not always true that the
  source CPU is the same as the target CPU. The most trivial example,
  which applies already for DMD 64bit, is size_t.sizeof. Conditional
  compilation can magnify these differences. Different CPUs don't just
  need different backend code generation; they may be quite different in
  the semantic pass. I'm not sure that this is solvable.
 
  version(ARM)
  {
immutable X = armSpecificCode(); // you want to run this on an 
  X86???
  }
 
 
  I think we discussed those issues before.
  1. size_t.sizeof:
  auto a = mixin(size_t.sizeof); // HOST CPU
  auto a = size_t.sizeof; // TARGET CPU
 
  That doesn't work. mixin happens _before_ CTFE. CTFE never does any
  semantics whatsoever.
 
 
  If I wasn't clear before - the above example is meant to illustrate how 
  multilevel compilation *should* work.
 
 Sorry, I don't understand what you're suggesting here.
 
  If you want, we can make it even clearer by replacing 'mixin' above with 
  'macro'.
 That doesn't help me. Unless it means that what you're talking about 
 goes far, far beyond CTFE.
 
 Take std.bigint as an example, and suppose we're generating code for ARM 
 on an x86 machine. The final executable is going to be using BigInt, and 
 CTFE uses it as well.
 The semantic pass begins. The semantic pass discards all of the 
 x86-specific code in favour of the ARM-specific stuff. Now CTFE runs. 
 How can it then run natively on x86? All the x86 code is *gone* by then. 
 How do you deal with this?
 

What I'm suggesting is to generalize C++'s two-level compilation into arbitrary 
N-level compilation. 
The model is actually simpler and requires much less language support. 
It works like this: 
level n: you write *regular* run-time code as a macro, using the compiler's 
public API to access its data structures. You compile it into object-form plug-ins 
loadable by the compiler.
level n+1: you write *regular* run-time code. You provide the compiler the 
relevant plug-ins from level n; the compiler loads them and then compiles the level 
n+1 code.
Of course, this allows arbitrarily many levels, since you could nest macros. 

So to answer your question, I do want to go far beyond CTFE. 
The problem you describe above happens due to the fact that CTFE is *not* a 
separate compilation step. With my model, bigint would be compiled twice - once 
for x86 and once for ARM. 
As far as I understand, this scenario is currently impossible - you can't use 
bigint with CTFE. 

 
 
  2. version (ARM) example - this needs clarification of the semantics. Two 
  possible options are:
  a. immutable X = ... will be performed on TARGET as is the case today. 
  require 'mixin' to call it on HOST. This should be backwards compatible 
  since we keep the current CTFE and add support for multi-level 
  compilation.
  b. immutable x = ... is run via the new system which requires the 
  function armSpecificCode to be compiled ahead of time and provided to 
  the compiler in an object form. No Platform incompatibility is possible.
 
  I don't see any problems with cross-compilation. It is a perfectly sound 
  design (Universal Turing machine) and it was successfully implemented 
  several times before: Lisp, scheme, Nemerle, etc.. It just requires to be 
  a bit careful with the semantic definitions.
 
  AFAIK all these languages target a virtual machine.
 
 
  Nope. See http://www.cons.org/cmucl/ for a native lisp compiler.
 
 That looks to me, as if it compiles to a virtual machine IR, then 
 compiles that. The real question is whether the characteristics of the 
 real machine are allowed to affect front-end semantics. Do any of those 
 languages do that?
 



Re: How about a 100% CTFE?

2011-11-08 Thread Don

On 07.11.2011 21:36, Martin Nowak wrote:

On Mon, 07 Nov 2011 20:42:17 +0100, Trass3r u...@known.com wrote:


version(ARM)
{
immutable X = armSpecificCode(); // you want to run this on an X86???
}


I've always thought that it would be worthwhile to experiment with
LLVM's JIT engine here.
But as has been said quite some care will be necessary for cross
compilation.
Allowing arbitrary non-pure functions would cause lots troubles.


Yeah, I think JIT for CTFE would be *very* interesting. But mainly
for reasons of speed rather than functionality.


How would JIT work in the above case?


You would need to do JIT a target agnostic IR.


Yeah, it's only the glue layer which needs to change. The main 
complexity is dealing with pointers.


Re: How about a 100% CTFE?

2011-11-08 Thread Kagamin
Martin Nowak Wrote:

  How would JIT work in the above case?
 
 You would need to do JIT a target agnostic IR.

AFAIK, target-agnostic LLVM IR is impossible. It's in FAQ.


Re: How about a 100% CTFE?

2011-11-08 Thread deadalnix

On 07/11/2011 17:00, Gor Gyolchanyan wrote:

Well and somefunction? It does modify the value of a too. Is it executed 
before? After? What is the value at the end of all that?


Obviously it will be incremented first.
The order is dependent of the rules by which the conditions are
evaluated at compile-time. For example, the compiler will build a
depth-first list of the import tree and evaluate code sequentially in
each module. As i already said, it works just like at run-time.
Is it so hard to imagine taking the compile-time code, run it during
compilation separately with the exact same rules as it would during
compilation?



Well except that module can modify a and not be in the tree.


Well, if you don't see any problem, you should probably just stop trying to 
provide a solution.


Ok, this is just ridiculous. Are you serious about this question or
are you trolling to say the least? In either way, I don't see any
problem in _having mutable compile-time values_.



I'm not trolling, I'm dead serious! If you have a hard time figuring out 
what the problems are, it is unlikely that you will come up with a satisfying 
solution, except by being lucky.


Re: How about a 100% CTFE?

2011-11-08 Thread Gor Gyolchanyan
 Well except that module can modify a and not be in the tree.

A single compilation includes only two parts: compiling D source code
and linking in object files or static libraries. Object files and
static libraries are by no means involved in compile-time activity,
which leaves us with compiling D source code. A single
compilation of D source code can be viewed as a list of import trees,
originating from the enlisted modules to be compiled. After
eliminating duplicates we have a list of small trees, which can be
rooted together to form a single tree, which in turn can be processed
as described above. Optionally, in order to allow separate
compilations, a cache can be maintained to hold the results of
previous compile-time computations to be used in the next ones inside
a project (to be defined).

 If you have hard time to figure out what the problems are, it is unlikely 
 that you come up with a satisfying solution, except by being lucky.

When I say I don't see problems, that means that you didn't present
me with a problem which I could not resolve. You point to a problem, I
resolve it, and vice versa, until one of us fails. This is called a
discussion.

On Tue, Nov 8, 2011 at 4:56 PM, deadalnix deadal...@gmail.com wrote:
 On 07/11/2011 17:00, Gor Gyolchanyan wrote:

 Well and somefunction? It does modify the value of a too. Is it executed
 before? After? What is the value at the end of all that?

 Obviously it will be incremented first.
 The order is dependent of the rules by which the conditions are
 evaluated at compile-time. For example, the compiler will build a
 depth-first list of the import tree and evaluate code sequentially in
 each module. As i already said, it works just like at run-time.
 Is it so hard to imagine taking the compile-time code, run it during
 compilation separately with the exact same rules as it would during
 compilation?


 Well except that module can modify a and not be in the tree.

 Well, if you don't see any problem, you should probably just stop trying
 to provide a solution.

 Ok, this is just ridiculous. Are you serious about this question or
 are you trolling to say the least? In either way, I don't see any
 problem in _having mutable compile-time values_.


 I'm not trolling, I'm dead serious ! If you have hard time to figure out
 what the problems are, it is unlikely that you come up with a satisfying
 solution, except by being lucky.



Re: How about a 100% CTFE?

2011-11-08 Thread deadalnix

On 08/11/2011 14:31, Gor Gyolchanyan wrote:

Well except that module can modify a and not be in the tree.


A single compilation includes only two parts: compiling D source code
and linking in object files or static libraries. object files and
static libraries are by no means involved in compile-time activity,
which leaves us with with compiling D source code. A single
compilation of D source code can be viewed as a list of import trees,
originating from the enlisted modules to be compiled. After
eliminating duplicates we have a list of small trees, which can be
rooted together to form a single tree, which in turn can be processed
as described above. Optionally, in order to allow separate
compilations, a cache can be maintained to hold the results of
previous compile-time computations to be used in the next ones inside
a project (to be defined).



module a;

int a = 0;

---

module b;

import a;

int somefunction() {
return ++a;
}

static assert(somefunction() == 1);

---

module c;

import a;

int somefunction() {
return ++a;
}

static assert(somefunction() == 1);

Which assert will fail? Will one fail? Note that two different 
instances of the compiler will compile b and c, and they are excluded 
from one another's import tree, so each instance isn't aware of the other.



If you have hard time to figure out what the problems are, it is unlikely that 
you come up with a satisfying solution, except by being lucky.


When i say, i don't see problems, that means, that you didn't present
me a problem, which i could not resolve. You point to a problem, i
resolve it and vice verse until one of us fails. This is called a
discussion.



Well, that's my whole point. You are not aware of the problems of the topic 
you are trying to solve. You are not qualified for the job. I have no 
responsibility for educating you.


YOU are coming with a solution, YOU have to explain how it solves every 
problem. Everything else is flawed logic.


Re: How about a 100% CTFE?

2011-11-08 Thread Gor Gyolchanyan
 Which assert will fail ? Does one will fail ? Note that two different 
 instances of the compiler will compile b and c, and they are exscluded of one 
 another import tree so each instance isn't aware of the other.

Both will succeed and in both cases a will be 1, because both run
independently. It's similar to asking whether two different runs of the
same program that increment a global variable will see each other's
results. The answer is: no.

 Well that my whole point. You are not aware of the problem on the topic you 
 try to solve. You are not qualified for the job. I have no responsability on 
 educating you.

I'm well aware of what I'm dealing with. What I'm not aware of is the
list of things you don't understand in my proposals (which I'm
currently resolving). You don't understand how _this_ feature would
work with _that_ feature without conflicting, and I explain how
(proposing one or more ways to do it). If your misunderstandings are
not over yet, it's not reasonable to assume that my explanations
won't follow them. If, however, I fail to explain something, I'll
withdraw my proposal and restore it if and when I find an answer.

 YOU are comming with a solution, YOU have to explain how it solves every 
 problems. Everything else is flawed logic.

Nothing ever solves every problem. That's a very unwise thing to say
for someone who claims to be logical. Problems come and go depending
on how the features are used. Each problem (or bunch of problems)
must be dealt with explicitly (using inductive logic to account for
other possible problems). As I said before, if you have problems, I'll
help resolve them. If I don't - I'll withdraw my proposal.
On Tue, Nov 8, 2011 at 5:44 PM, deadalnix deadal...@gmail.com wrote:
 On 08/11/2011 14:31, Gor Gyolchanyan wrote:

 Well except that module can modify a and not be in the tree.

 A single compilation includes only two parts: compiling D source code
 and linking in object files or static libraries. object files and
 static libraries are by no means involved in compile-time activity,
 which leaves us with with compiling D source code. A single
 compilation of D source code can be viewed as a list of import trees,
 originating from the enlisted modules to be compiled. After
 eliminating duplicates we have a list of small trees, which can be
 rooted together to form a single tree, which in turn can be processed
 as described above. Optionally, in order to allow separate
 compilations, a cache can be maintained to hold the results of
 previous compile-time computations to be used in the next ones inside
 a project (to be defined).


 module a;

 int a = 0;

 ---

 module b;

 import a;

 int somefunction() {
        return ++a;
 }

 static assert(somefunction() == 1);

 ---

 module c;

 import a;

 int somefunction() {
        return ++a;
 }

 static assert(somefunction() == 1);

 Which assert will fail ? Does one will fail ? Note that two different
 instances of the compiler will compile b and c, and they are exscluded of
 one another import tree so each instance isn't aware of the other.

 If you have hard time to figure out what the problems are, it is unlikely
 that you come up with a satisfying solution, except by being lucky.

 When i say, i don't see problems, that means, that you didn't present
 me a problem, which i could not resolve. You point to a problem, i
 resolve it and vice verse until one of us fails. This is called a
 discussion.


 Well that my whole point. You are not aware of the problem on the topic you
 try to solve. You are not qualified for the job. I have no responsability on
 educating you.

 YOU are comming with a solution, YOU have to explain how it solves every
 problems. Everything else is flawed logic.



Re: How about a 100% CTFE?

2011-11-08 Thread deadalnix

On 08/11/2011 14:54, Gor Gyolchanyan wrote:

Which assert will fail ? Does one will fail ? Note that two different instances 
of the compiler will compile b and c, and they are exscluded of one another 
import tree so each instance isn't aware of the other.


Both will succeed and on both cases a will be 1 because both run
independently. It's similar to asking if two different runs of the
same program, which increment a global variable will see each others'
results. The answer is: no.



OK, I see your point. The problem is that a is a global variable and is 
incremented 2 times. But at the end, it ends up having been 
incremented only one time. Maybe 0.


I think this behaviour should be avoided because of the inconsistency 
between what is happening at compile time and what you get as a result at run 
time. Some limitation should be added to the solution so that we do not end up 
with these kinds of behaviour, IMO.



YOU are comming with a solution, YOU have to explain how it solves every 
problems. Everything else is flawed logic.


Nothing ever solves every problem. That's a very unwise thing to say
for someone, who claims to be logical. problems come and go depending
on how the features are used. Each problem (or bunches of problems)
must be dealt with explicitly (using inductive logic to account for
other possible problems). As i said before, if you have problems, i'll
help resolve them. If i don't - i'll withdraw my proposal.


Sorry, my mistake, I didn't express myself the right way. I should have 
said how it solves or does not solve each problem. Obviously, the 
solution to every problem usually does not exist.


The point is that every problem related to the solution must be 
considered, to know whether the solution really solves something useful 
and what its drawbacks are (problems that are not solved, or that are created 
by the given solution).


Re: How about a 100% CTFE?

2011-11-08 Thread Martin Nowak

On Tue, 08 Nov 2011 13:51:28 +0100, Kagamin s...@here.lot wrote:


Martin Nowak Wrote:


 How would JIT work in the above case?

You would need to do JIT a target agnostic IR.


AFAIK, target-agnostic LLVM IR is impossible. It's in FAQ.


http://www.llvm.org/docs/FAQ.html#platformindependent ?
Transforming a source file to platform independent IR is not needed.
You need to interpret the IR that is emitted after semantic reduction, not 
the one before.

martin


Re: How about a 100% CTFE?

2011-11-08 Thread Gor Gyolchanyan
Agreed. The drawbacks must be studied completely. That's why I
appreciate all your criticism :-)
About the inconsistency: true, this is a bit confusing from one
perspective. But I think we can define a straightforward way of doing
this.
Here's what I'm thinking about. We currently have two stages of compilation:
* translate to object files
* link into executable files

I suggest separating compile-time preparation into a third stage, so that
the pipeline would look like this:
* translate into run-time code
* translate into object files
* link into executable files

With this scenario, the compiler will run the first stage on ALL files
first, which will reflect the changes in all files. Then, the run-time
code could be arbitrarily compiled and linked as it's done ATM.
The key feature is that when you compile your code (even if you do it
with separate compile passes), you always run the CTFE for all files
before you do anything else.
And the exact logic of CTFE can be defined in a single consistent
manner. If the import tree implies certain dependencies, then the
order of evaluation will honor them, otherwise it will rearrange at
will.
After all, if you increment a compile-time value and you don't specify
any relationship with other files, you must either make it private
(don't provide public incrementing) or be ready to get an arbitrarily
incremented value.

On Tue, Nov 8, 2011 at 6:56 PM, deadalnix deadal...@gmail.com wrote:
 On 08/11/2011 14:54, Gor Gyolchanyan wrote:

 Which assert will fail ? Does one will fail ? Note that two different
 instances of the compiler will compile b and c, and they are exscluded of
 one another import tree so each instance isn't aware of the other.

 Both will succeed and on both cases a will be 1 because both run
 independently. It's similar to asking if two different runs of the
 same program, which increment a global variable will see each others'
 results. The answer is: no.


 Ok I seen your point. The problem is that a is a global variable and is
 incremented 2 times. But at the end, it will ends up that it is incremented
 only one time. Maybe 0.

 I think this behaviour should be avoided because of the oinconsistency of
 what is happening at compile time and what you get as a result at run time.
 Some limitation should be added to the solution to not end up in thoses kind
 of behaviour IMO.

 YOU are comming with a solution, YOU have to explain how it solves every
 problems. Everything else is flawed logic.

 Nothing ever solves every problem. That's a very unwise thing to say
 for someone, who claims to be logical. problems come and go depending
 on how the features are used. Each problem (or bunches of problems)
 must be dealt with explicitly (using inductive logic to account for
 other possible problems). As i said before, if you have problems, i'll
 help resolve them. If i don't - i'll withdraw my proposal.

 Sorry my mistake, I didn't express myself the right way. I should have saud
 how it solve or do not solve every problems. Obviously, the solution to
 every problem usually do not exists.

 The point is that every problems related to the solution must be considered
 to know if the solution is really solving something usefull and if its
 drawback (problem that are not solved or created by the given solution).



Re: How about a 100% CTFE?

2011-11-08 Thread Martin Nowak

On Tue, 08 Nov 2011 09:08:49 +0100, Don nos...@nospam.com wrote:


On 07.11.2011 21:36, Martin Nowak wrote:

On Mon, 07 Nov 2011 20:42:17 +0100, Trass3r u...@known.com wrote:


version(ARM)
{
immutable X = armSpecificCode(); // you want to run this on an  
X86???

}


I've always thought that it would be worthwhile to experiment with
LLVM's JIT engine here.
But as has been said quite some care will be necessary for cross
compilation.
Allowing arbitrary non-pure functions would cause lots troubles.


Yeah, I think JIT for CTFE would be *very* interesting. But mainly
for reasons of speed rather than functionality.


How would JIT work in the above case?


You would need to do JIT a target agnostic IR.


Yeah, it's only the glue layer which needs to change. The main  
complexity is dealing with pointers.


Currently it only seems viable for strongly pure functions.
One would still need to be very careful about target dependencies through 
libc calls (sinl et al.),
marshalling the arguments and the result is necessary, and dmd doesn't have 
a virtual machine, does it?
One could try to fake the latter by compiling to executables, but it would  
require a target specific runtime/stdlib.


Overall I don't have too many performance issues with CTFE.

I have more issues with badly scaling compiler algorithms now that they
are used with magnitudes bigger input, e.g. unrolled loops or huge array 
literals.
Especially considering that the performance impact mostly stems from 
codegen,
and sometimes the generated code is only used for compile-time 
initialization.


A lot of garbage is created during CTFE as in-place changes don't always  
work.

For example I haven't yet succeeded in writing an in-place CTFE quicksort,
so I ended up with 'sort(left) ~ pivot ~ sort(right)'.

martin
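
A sketch of the concatenating formulation mentioned above (details assumed; the actual code may differ):

// CTFE-friendly quicksort that avoids in-place mutation by building new
// arrays, at the cost of extra garbage while the interpreter runs.
int[] ctfeSort(int[] a)
{
    if (a.length <= 1)
        return a;
    immutable pivot = a[0];
    int[] left, right;
    foreach (x; a[1 .. $])
    {
        if (x < pivot) left ~= x;
        else           right ~= x;
    }
    return ctfeSort(left) ~ pivot ~ ctfeSort(right);
}

static assert(ctfeSort([3, 1, 2]) == [1, 2, 3]);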


Re: How about a 100% CTFE?

2011-11-08 Thread Timon Gehr

On 11/08/2011 07:35 PM, Martin Nowak wrote:

On Tue, 08 Nov 2011 09:08:49 +0100, Don nos...@nospam.com wrote:


On 07.11.2011 21:36, Martin Nowak wrote:

On Mon, 07 Nov 2011 20:42:17 +0100, Trass3r u...@known.com wrote:


version(ARM)
{
immutable X = armSpecificCode(); // you want to run this on an
X86???
}


I've always thought that it would be worthwhile to experiment with
LLVM's JIT engine here.
But as has been said quite some care will be necessary for cross
compilation.
Allowing arbitrary non-pure functions would cause lots troubles.


Yeah, I think JIT for CTFE would be *very* interesting. But mainly
for reasons of speed rather than functionality.


How would JIT work in the above case?


You would need to do JIT a target agnostic IR.


Yeah, it's only the glue layer which needs to change. The main
complexity is dealing with pointers.


Currently it only seems viable for strongly pure functions.


The JIT could just translate impure and unsafe constructs to 
exceptions/errors being thrown (or similar). It would then work the same 
way as it does now.




One would still need to be very careful about target dependencies
through libc calls (sinl et.al.),
marshalling the arguments and the result is necessary and dmd doesn't
have a virtual machine, does it?
One could try to fake the latter by compiling to executables, but it
would require a target specific runtime/stdlib.

Overall I don't have too much performance issues with CTFE.

I've more issues with badly scaling compiler algorithms now that they
are used with magnitudes bigger input, i.e. unrolled loops or huge array
literals.
Especially considering that they performance impact mostly stems from
codegen
and sometime the generated code is only used for compile time
initialization.

A lot of garbage is created during CTFE as in-place changes don't always
work.
For example I haven't yet succeeded in writing an in-place CTFE quicksort,
so I ended up with 'sort(left) ~ pivot ~ sort(right)'.

martin




Re: How about a 100% CTFE?

2011-11-07 Thread deadalnix

This doesn't make any sense.

Functions can modify the state of the program in a non-repeatable way, 
thus compile-time function execution isn't possible for those.


CTFE must be limited to functions with no side effects, or with side 
effects that are known and manageable.


On 07/11/2011 14:13, Gor Gyolchanyan wrote:

After reading

 http://prowiki.org/wiki4d/wiki.cgi?DMDSourceGuide
 https://github.com/gor-f-gyolchanyan/dmd/blob/master/src/interpret.c

I had a thought:
Why not compile and run CTFE code in a separate executable, write its
output into a file, read that file and include its contents into the
object file being compiled?
This would cover 100% of D in CTFE, including external function calls
and classes;
String mixins would simply repeat the process of compiling and running
an extra temporary executable.

This would open up immense opportunities for such things as
library-based build systems and tons of unprecedented stuff, that I
can't imagine ATM.




Re: How about a 100% CTFE?

2011-11-07 Thread Gor Gyolchanyan
Can you show an example of where CTFE modifies the state of the program
and where it's impossible to perform at run-time, provided that it's a
separate, compiler-aware run-time?

 CTFE must be limited to functions with no side effect, or with side effect 
 that are known and manageable.

Why? There are tons of applications for CTFE other than initializing variables.

On Mon, Nov 7, 2011 at 5:25 PM, deadalnix deadal...@gmail.com wrote:
 This doesn't make any sens.

 Function can modify the state of the program in a non repeatable way, thus
 compile time function execution isn't possible for thoses.

 CTFE must be limited to functions with no side effect, or with side effect
 that are known and manageable.

 On 07/11/2011 14:13, Gor Gyolchanyan wrote:

 After reading

     http://prowiki.org/wiki4d/wiki.cgi?DMDSourceGuide
     https://github.com/gor-f-gyolchanyan/dmd/blob/master/src/interpret.c

 I had a thought:
  Why not compile and run CTFE code in a separate executable, write its
  output into a file, read that file and include its contents into the
 object file being compiled?
 This would cover 100% of D in CTFE, including external function calls
 and classes;
 String mixins would simply repeat the process of compiling and running
 an extra temporary executable.

 This would open up immense opportunities for such things as
 library-based build systems and tons of unprecedented stuff, that I
 can't imagine ATM.




Re: How about a 100% CTFE?

2011-11-07 Thread Gor Gyolchanyan
 CTFE must be limited to functions with no side effect, or with side effect 
 that are known and manageable.

A D front-end can be library-based and completely CTFE-able. This
brings a huge set of things you can do with such a front-end.
You can create a full-fledged library-based D compiler if necessary.
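
A toy illustration of library code that works unchanged under CTFE (a made-up mini-lexer, nothing like a real front-end):

// Splits a source string on spaces; ordinary D, so it runs both in the
// CTFE interpreter and in the compiled program.
string[] lexWords(string src)
{
    string[] words;
    string current;
    foreach (c; src)
    {
        if (c == ' ')
        {
            if (current.length) { words ~= current; current = null; }
        }
        else
        {
            current ~= c;
        }
    }
    if (current.length) words ~= current;
    return words;
}

static assert(lexWords("int x = 1;") == ["int", "x", "=", "1;"]);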

On Mon, Nov 7, 2011 at 5:35 PM, Gor Gyolchanyan
gor.f.gyolchan...@gmail.com wrote:
 Can you show an example of when CTFE modifies the state of the program
 and when it's impossible to perform at run-time, provided, that it's a
 separate compiler-aware run-time.

 CTFE must be limited to functions with no side effect, or with side effect 
 that are known and manageable.

 Why? There are tons of applications for CTFE other, then initializing 
 variables.

 On Mon, Nov 7, 2011 at 5:25 PM, deadalnix deadal...@gmail.com wrote:
 This doesn't make any sens.

 Function can modify the state of the program in a non repeatable way, thus
 compile time function execution isn't possible for thoses.

 CTFE must be limited to functions with no side effect, or with side effect
 that are known and manageable.

 On 07/11/2011 14:13, Gor Gyolchanyan wrote:

 After reading

     http://prowiki.org/wiki4d/wiki.cgi?DMDSourceGuide
     https://github.com/gor-f-gyolchanyan/dmd/blob/master/src/interpret.c

 I had a thought:
  Why not compile and run CTFE code in a separate executable, write its
  output into a file, read that file and include its contents into the
 object file being compiled?
 This would cover 100% of D in CTFE, including external function calls
 and classes;
 String mixins would simply repeat the process of compiling and running
 an extra temporary executable.

 This would open up immense opportunities for such things as
 library-based build systems and tons of unprecedented stuff, that I
 can't imagine ATM.





Re: How about a 100% CTFE?

2011-11-07 Thread deadalnix

module a;

int a = 0;

int somefunction() {
a++;
return a;
}

static if(somefunction() == 1) {
// Some code
}

Now, what is the value of a? At run time? At compile time? If some 
other CTFE uses a? If a CTFE from another module uses a or, worse, 
updates it?


example:

module b;

import a;

int someotherfunction() {
a--;
return a;
}

static if(someotherfunction() < 10) {
// Some code

import std.stdio;

immutable int b = somefunction();
static this() {
writeln(b);
}
}

What the hell this program is supposed to print at run time ?

On 07/11/2011 14:35, Gor Gyolchanyan wrote:

Can you show an example of when CTFE modifies the state of the program
and when it's impossible to perform at run-time, provided, that it's a
separate compiler-aware run-time.


CTFE must be limited to functions with no side effect, or with side effect that 
are known and manageable.


Why? There are tons of applications for CTFE other, then initializing variables.

On Mon, Nov 7, 2011 at 5:25 PM, deadalnix deadal...@gmail.com wrote:

This doesn't make any sens.

Function can modify the state of the program in a non repeatable way, thus
compile time function execution isn't possible for thoses.

CTFE must be limited to functions with no side effect, or with side effect
that are known and manageable.

Le 07/11/2011 14:13, Gor Gyolchanyan a écrit :


After reading

 http://prowiki.org/wiki4d/wiki.cgi?DMDSourceGuide
 https://github.com/gor-f-gyolchanyan/dmd/blob/master/src/interpret.c

I had a thought:
Why not compile and run CTFE code in a separate executable, write it's
output into a file, read that file and include it's contents into the
object file being compiled?
This would cover 100% of D in CTFE, including external function calls
and classes;
String mixins would simply repeat the process of compiling and running
an extra temporary executable.

This would open up immense opportunities for such things as
library-based build systems and tons of unprecedented stuff, that I
can't imagine ATM.







Re: How about a 100% CTFE?

2011-11-07 Thread Gor Gyolchanyan
 now what is the value of a? At run time? At compile time? If some other
 CTFE uses a? If a CTFE from another module uses a or, worse, updates it?

Run time: whatever _a_ ends up with by the time CTFE is done. There's
a very handy __COUNTER__ macro in the Visual C++ compiler that uses this
very feature internally.
Compile time: the same value it would have if run at run time:
sequentially updated after the compile-time value (naturally).

 What the hell is this program supposed to print at run time?

Assuming that somefunction was never called, this would print -1.
That's because _a_ ends up with a compile-time value of -1, due to
having been decremented once by someotherfunction() inside the static
if, and b is initialized to -1.

I don't see any problems in either of these cases.
The whole point of a mutable compile-time value is to have essentially
two values: one computed at compile time, and the other statically
initialized from the first.
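
For what it's worth, the "two values" idea can already be expressed in today's D
with a pure helper instead of a mutable global; a minimal sketch (my own
illustration):

int next(int x) pure { return x + 1; }  // usable both during CTFE and at run time

enum compileTimeValue = next(0);        // computed by CTFE: 1
int runTimeValue = compileTimeValue;    // statically initialized from the CTFE result

void main() {
    runTimeValue = next(runTimeValue);  // the run-time copy then evolves on its own: 2
}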






Re: How about a 100% CTFE?

2011-11-07 Thread deadalnix

On 07/11/2011 15:30, Gor Gyolchanyan wrote:

now what is the value of a? At run time? At compile time? If some other CTFE
uses a? If a CTFE from another module uses a or, worse, updates it?


Run time: whatever _a_ ends up with by the time CTFE is done. There's
a very handy __COUNTER__ macro in the Visual C++ compiler that uses this
very feature internally.
Compile time: the same value it would have if run at run time:
sequentially updated after the compile-time value (naturally).


What the hell is this program supposed to print at run time?


Assuming that somefunction was never called, this would print -1.
That's because _a_ ends up with a compile-time value of -1, due to
having been decremented once by someotherfunction() inside the static
if, and b is initialized to -1.



Well, and somefunction? It modifies the value of a too. Is it
executed before? After? What is the value at the end of all that?



I don't see any problems in either of these cases.
The whole point of a mutable compile-time value is to have essentially
two values: one computed at compile time, and the other statically
initialized from the first.



Well, if you don't see any problem, you should probably just stop trying 
to provide a solution.


Re: How about a 100% CTFE?

2011-11-07 Thread Gor Gyolchanyan
 Well, and somefunction? It modifies the value of a too. Is it executed
 before? After? What is the value at the end of all that?

Obviously it will be incremented first.
The order depends on the rules by which the conditions are
evaluated at compile time. For example, the compiler will build a
depth-first list of the import tree and evaluate code sequentially in
each module. As I already said, it works just like at run time.
Is it so hard to imagine taking the compile-time code and running it
separately during compilation, under the exact same rules it would
follow at run time?

 Well, if you don't see any problem, you should probably just stop trying to 
 provide a solution.

OK, this is just ridiculous. Are you serious about this question, or
are you trolling, to say the least? Either way, I don't see any
problem in _having mutable compile-time values_.




Re: How about a 100% CTFE?

2011-11-07 Thread Don

On 07.11.2011 14:13, Gor Gyolchanyan wrote:

After reading

 http://prowiki.org/wiki4d/wiki.cgi?DMDSourceGuide
 https://github.com/gor-f-gyolchanyan/dmd/blob/master/src/interpret.c

I had a thought:
Why not compile and run CTFE code in a separate executable, write its
output into a file, read that file, and include its contents into the
object file being compiled?
This would cover 100% of D in CTFE, including external function calls
and classes;
String mixins would simply repeat the process of compiling and running
an extra temporary executable.

This would open up immense opportunities for such things as
library-based build systems and tons of unprecedented stuff, that I
can't imagine ATM.


First comment: classes and exceptions now work in dmd git. The remaining 
limitations on CTFE are intentional.


With what you propose:
Cross compilation is a _big_ problem. It is not always true that the 
source CPU is the same as the target CPU. The most trivial example, 
which applies already for DMD 64bit, is size_t.sizeof. Conditional 
compilation can magnify these differences. Different CPUs don't just 
need different backend code generation; they may be quite different in 
the semantic pass. I'm not sure that this is solvable.


version(ARM)
{
   immutable X = armSpecificCode(); // you want to run this on an X86???
}
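
To make the problem concrete, a minimal sketch (my own, with hypothetical
constants) of a CTFE helper whose result is fixed by the *target* the semantic
pass selects, not by the machine that happens to run the interpreter:

version (ARM)
    enum cacheLineBytes = 32;        // hypothetical ARM-specific value
else
    enum cacheLineBytes = 64;        // hypothetical x86 value

enum wordBytes = size_t.sizeof;      // 4 when targeting 32 bit, 8 when targeting 64 bit

// CTFE must evaluate this with the target's constants, even though the
// compiler itself runs on the host.
size_t slotsPerCacheLine() { return cacheLineBytes / wordBytes; }

enum slots = slotsPerCacheLine();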



Re: How about a 100% CTFE?

2011-11-07 Thread Gor Gyolchanyan
 classes and exceptions now work in dmd git

AWESOME!

 Cross compilation is a _big_ problem

Actually, this is a very good point. I need to give it some thought. I'm
not saying it's necessarily a good idea, but it might turn out to be one
if it proves reliable. In return, it would give the language a very
powerful tool set.





Re: How about a 100% CTFE?

2011-11-07 Thread Dejan Lekic
Don wrote:

 With what you propose:
 Cross compilation is a _big_ problem. It is not always true that the
 source CPU is the same as the target CPU. The most trivial example,
 which applies already for DMD 64bit, is size_t.sizeof. Conditional
 compilation can magnify these differences. Different CPUs don't just
 need different backend code generation; they may be quite different in
 the semantic pass. I'm not sure that this is solvable.
 
 version(ARM)
 {
 immutable X = armSpecificCode(); // you want to run this on an X86???
 }

A very good point, Don. I cross-compile D code quite often.
(Host: x86_64-pc-linux-gnu; targets: i686-pc-linux-gnu and i686-pc-mingw32)

A second point: if code uses lots of CTFE (many of those functions may call each
other recursively), then what Gor proposes may *significantly* slow down
compilation.


Re: How about a 100% CTFE?

2011-11-07 Thread Martin Nowak

On Mon, 07 Nov 2011 17:49:30 +0100, Don nos...@nospam.com wrote:




First comment: classes and exceptions now work in dmd git. The remaining  
limitations on CTFE are intentional.


With what you propose:
Cross compilation is a _big_ problem. It is not always true that the  
source CPU is the same as the target CPU. The most trivial example,  
which applies already for DMD 64bit, is size_t.sizeof. Conditional  
compilation can magnify these differences. Different CPUs don't just  
need different backend code generation; they may be quite different in  
the semantic pass. I'm not sure that this is solvable.


version(ARM)
{
immutable X = armSpecificCode(); // you want to run this on an X86???
}



I've always thought that it would be worthwhile to experiment with LLVM's
JIT engine here.
But, as has been said, quite some care will be necessary for cross
compilation.

Allowing arbitrary non-pure functions would cause lots of trouble.


Re: How about a 100% CTFE?

2011-11-07 Thread Don

On 07.11.2011 19:59, Martin Nowak wrote:

On Mon, 07 Nov 2011 17:49:30 +0100, Don nos...@nospam.com wrote:




First comment: classes and exceptions now work in dmd git. The
remaining limitations on CTFE are intentional.

With what you propose:
Cross compilation is a _big_ problem. It is not always true that the
source CPU is the same as the target CPU. The most trivial example,
which applies already for DMD 64bit, is size_t.sizeof. Conditional
compilation can magnify these differences. Different CPUs don't just
need different backend code generation; they may be quite different in
the semantic pass. I'm not sure that this is solvable.

version(ARM)
{
immutable X = armSpecificCode(); // you want to run this on an X86???
}



I've always thought that it would be worthwhile to experiment with
LLVM's JIT engine here.
But, as has been said, quite some care will be necessary for cross
compilation.
Allowing arbitrary non-pure functions would cause lots of trouble.


Yeah, I think JIT for CTFE would be *very* interesting. But mainly for 
reasons of speed rather than functionality.
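
A trivial example (my own) of the kind of CTFE workload where a JIT would buy
speed but no new functionality:

int fib(int n) pure { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

enum atCompileTime = fib(27);   // interpreted today; a JIT would only make this faster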




Re: How about a 100% CTFE?

2011-11-07 Thread Trass3r

version(ARM)
{
immutable X = armSpecificCode(); // you want to run this on an X86???
}


I've always thought that it would be worthwhile to experiment with
LLVM's JIT engine here.
But, as has been said, quite some care will be necessary for cross
compilation.
Allowing arbitrary non-pure functions would cause lots of trouble.


Yeah, I think JIT for CTFE would be *very* interesting. But mainly for  
reasons of speed rather than functionality.


How would JIT work in the above case?


Re: How about a 100% CTFE?

2011-11-07 Thread Martin Nowak

On Mon, 07 Nov 2011 20:42:17 +0100, Trass3r u...@known.com wrote:


version(ARM)
{
immutable X = armSpecificCode(); // you want to run this on an X86???
}


I've always thought that it would be worthwhile to experiment with
LLVM's JIT engine here.
But, as has been said, quite some care will be necessary for cross
compilation.
Allowing arbitrary non-pure functions would cause lots of trouble.


Yeah, I think JIT for CTFE would be *very* interesting. But mainly for  
reasons of speed rather than functionality.


How would JIT work in the above case?


You would need to JIT a target-agnostic IR.


Re: How about a 100% CTFE?

2011-11-07 Thread Timon Gehr

On 11/07/2011 08:42 PM, Trass3r wrote:

version(ARM)
{
immutable X = armSpecificCode(); // you want to run this on an X86???
}


I've always thought that it would be worthwhile to experiment with
LLVM's JIT engine here.
But, as has been said, quite some care will be necessary for cross
compilation.
Allowing arbitrary non-pure functions would cause lots of trouble.


Yeah, I think JIT for CTFE would be *very* interesting. But mainly for
reasons of speed rather than functionality.


How would JIT work in the above case?


IIRC there are attempts at converting x86 assembly to LLVM IR.
Probably that works for ARM too. ;)