On Monday, 9 May 2016 at 16:57:39 UTC, Stefan Koch wrote:
> I have been looking into DMD now to see what I can do about
> CTFE. Unfortunately, it is a pretty big mess to untangle.
> Code responsible for CTFE is in at least 3 files:
> [dinterpret.d, ctfeexpr.d, constfold.d]
> I was shocked to discover that the PowExpression actually
> depends on phobos! (depending on the exact code path it may or
> may not compile...)
Yes. This is because of lowering. Walter said in his DConf talk
that lowering was a success; actually, it's a quick-and-dirty
hack that inevitably leads to a disaster.
Lowering always needs to be reverted.
> which led to me prematurely stating that it worked at ctfe.
> My plan is as follows:
> Add a new file for my ctfe-interpreter and update it gradually
> to take more and more of the cases the code in the files
> mentioned above was used for.
> Do dataflow analysis on the code that is to be ctfe'd, so we can
> tell beforehand if we need to store state in the ctfe stack or not.
You don't need dataflow analysis. The CtfeCompile pass over the
semantic tree was intended to determine how many variables are
required by each function.
> Or barring proper data-flow analysis: RefCounting the variables
> on the ctfe-stack could also be a solution.
> I will post more details as soon as I dive deeper into the code.
> The current implementation stores persistent state for every
> ctfe invocation, while caching nothing. Not even the compiled
> form of a function body. Because it cannot relax purity.
No. Purity is not why it doesn't save the state. It's because of
history.
I think I need to explain the history of CTFE.
Originally, we had constant-folding. Then constant-folding was
extended to do things like slicing a string at compile time.
Constant folding leaks memory like the Exxon Valdez leaks oil,
but that's OK because it only ever happens once.
Then, the constant folding was extended to include function
calls, for loops, etc. All using the existing constant-folding
code. Now the crappy memory usage is a problem. But it's OK
because the CTFE code was kind of proof-of-concept thing anyway.
Now, everyone asks, why doesn't it use some kind of byte-code
interpreter or something?
Well, the reason is, it just wasn't possible. There was actually
no single CTFE entry point. Instead, it was a complete mess. For
example, with template arguments, the compiler would first try to
run CTFE on the argument, with error messages suppressed. If that
succeeded, it was a template value argument. If it generated
errors, it would then see if it was a type. If that failed as well,
it assumed it was a template alias argument.
The other big problem was that CTFE was also often called on a
function which had semantic errors.
So, here is what I did with CTFE:
(1) Implement all the functionality, so that CTFE code can be
developed. The valuable legacy of this, which I am immensely
proud of, is the file "interpret3.d" in the test suite. It is
very comprehensive. If a CTFE implementation passes the test
suite, it's good to go.
The CTFE implementation itself is practically worthless. Its
value was to get the test suite developed.
(2) Created a single entry point for CTFE. This involved working
out rules for every place that CTFE is actually required,
removing the horrid speculative execution of CTFE.
It made sure that functions had actually been semantically
analyzed before they were executed (there were really horrific
cases where the function had its semantic tree modified while it
was being executed!!)
Getting to this point involved about 60 pull requests and six
months of nearly full-time work. Finally it was possible to
consider a byte-code interpreter or JITer.
We reached this point around Nov 2012.
(3) Added a 'ctfeCompile' step which runs over the semantic tree
the first time the function is executed at compile time. Right
now it does nothing much except that check that the semantic tree
is valid. This detected many errors in the rest of the compiler.
We reached this point around March 2013.
My intention was to extend the ctfeCompile step to a byte-code
generator. But then I had to stop working on it and concentrate
on looking after my family.
Looking at the code without knowing the history, you'll think,
the obvious way to do this would be with a byte-code generator or
JITer, and wonder why the implementation is so horrible. But for
most of the history, that kind of implementation was just not possible.
People come up with all these elaborate schemes to speed up CTFE.
It's totally not necessary. All that's needed is a very simple
byte-code interpreter.