On Wednesday, 29 December 2021 at 16:51:47 UTC, rempas wrote:
On Wednesday, 29 December 2021 at 16:27:22 UTC, max haughton wrote:
Inlining + constant propagation. Fancier iterations on those exist too, but 90% of the speedup will come from those two, since for the fancier ones to matter, the basic ones would likely have been applied in the first place.

Sounds like black magic? So if I write this:

```
int add(int num1, int num2) { return num1 + num2; }

void main() {
  int number = add(10, 20);
}
```

The parameters are literals, so will D translate this to:

```
int add(int num1, int num2) { return num1 + num2; } // Normal one
int add_temp_func() { return 30; } // Created for the function call in main. No `add` instruction

void main() {
  int number = add(10, 20); // Will actually create and call `add_temp_func`
}
```

Or even better, this:

```
int add(int num1, int num2) { return num1 + num2; }
void main() {
  int number = add(10, 20); // What we will type...
  // ...and it gets replaced with `int number = 30;`, so the result is
  // calculated at compile time and there isn't even a function call.
}
```

Is this what D can do? This is what I'm talking about when I say being able to use values at compile time.

This is handled by the compiler backend. The simplest way it can do this kind of optimization is by "inlining" the function.

This is done by transplanting the function body into the place where it's used. At that point the compiler simply sees `= 10 + 20`, which it can trivially turn into `= 30` through something called constant folding.
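
As a rough sketch of what that means for the example above (the exact result depends on the backend and optimization level, so this is only an illustration):

```
int add(int num1, int num2) { return num1 + num2; }

void main() {
  int number = add(10, 20); // what you write

  // After inlining, the optimizer effectively sees:
  //   int number = 10 + 20;
  // and constant folding reduces that to:
  //   int number = 30;
  // so an optimized build emits no call to `add` at all.
}
```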

The compiler can create new function bodies (like the temporary one you introduce above), but this is a much more niche optimization; compilers favour inlining much more aggressively.
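
For illustration only, such a specialized body for the call site `add(10, 20)` might look like the hypothetical `add_10_20` below; in practice the compiler rarely bothers when it can simply inline:

```
int add(int num1, int num2) { return num1 + num2; }

// Hypothetical specialized clone for the call site `add(10, 20)`.
// There is still a call, but no argument passing and no addition at run time.
int add_10_20() { return 30; }

void main() {
  int number = add_10_20();
}
```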

I'm tempted to do a YouTube video of a D program being compiled all the way down to machine code, to show what the compiler does for you.
