One use case I can think of for specializing functions based on whether or not their arguments are compile-time evaluable:

// Big container that can't be accessed in constant time:
immutable cachedResults = init();

double getResult(<args>)
    if (!areCompileTimeConstants!(<args>))
{
    return cachedResults.at(<args>);
}

double getResult(<args>)
    if (areCompileTimeConstants!(<args>))
{
    // Computing the result takes a long time
    ...
    return computedResult;
}

Point being that A) cachedResults takes so much memory that we don't want to evaluate it at compile time and bloat the executable, and B) accessing cachedResults takes non-trivial time, so we don't want to do it at runtime when the result could have been computed at compile time. Don't know how common this kind of thing would be, though.
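
For comparison, today's D can get partway there with the magic __ctfe variable, which is true while a function is being evaluated by CTFE. It branches on whether the call itself runs at compile time rather than on whether the arguments are constants, so the caller has to force CTFE (e.g. with enum). A minimal sketch; computeSlowly and the table contents are made-up stand-ins:

// cachedResults stands in for the big table; filling it in a module
// constructor keeps it out of the binary entirely.
immutable double[] cachedResults;

shared static this()
{
    cachedResults = [0.0, 2.0, 4.0, 6.0];
}

double computeSlowly(int x)
{
    // Stand-in for the long computation.
    return x * 2.0;
}

double getResult(int x)
{
    if (__ctfe)
        return computeSlowly(x);   // the table doesn't exist during CTFE
    else
        return cachedResults[x];   // cheap table lookup at run time
}

enum atCompileTime = getResult(3); // forced through CTFE -> 6.0
// auto atRunTime = getResult(3);  // ordinary call: table lookup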


But, that made me think...
In a perfect world, I think, the compiler would evaluate every function call it can at compile time, provided that doing so produces a smaller (or equal-size) executable than leaving the call to run time would. For example (assuming the following initialization functions are compile-time evaluable):

// The following wouldn't be evaluated at compile time,
// because the function call (probably) takes less space
// in the executable than a million ints would:

int[1_000_000] bigArray = initBigArray();

// The following would always be evaluated at compile time,
// because a single int value takes less space in the
// executable than the function call:

int myValue = initMyValue();

Although, to speed up test compilations, we'd need a compiler flag to disable this "aggressive" CTFE behaviour.
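
Today that choice is manual: an enum initializer is guaranteed to be evaluated by CTFE, so only the resulting constant is baked into the binary, while moving the call into a module constructor guarantees it runs at program startup instead. A sketch, with made-up stand-in initializers:

int initMyValue() { return 42; }   // stand-in initializer

int[] initBigArray()
{
    auto a = new int[](1_000_000);
    foreach (i, ref e; a)
        e = cast(int) i;
    return a;
}

// Guaranteed compile time: enum forces CTFE, and only the single
// resulting constant ends up in the code.
enum int myValue = initMyValue();

// Guaranteed run time: declared empty (a dynamic array here, for
// simplicity) and filled at startup, so the million ints never
// appear in the executable.
int[] bigArray;

static this()
{
    bigArray = initBigArray();
}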
