The following snippet is interesting:

<<<
__gshared int step = 0;
__gshared int[] globalArray;

ref int[] getBase()
{
    assert(step == 0);
    ++step;
    return globalArray;
}

int getLowerBound(size_t dollar)
{
    assert(step == 1);
    ++step;
    assert(dollar == 0);
    globalArray = [ 666 ];
    return 1;
}

int getUpperBound(size_t dollar)
{
    assert(step == 2);
    ++step;
    assert(dollar == 1);
    globalArray = [ 1, 2, 3 ];
    return 3;
}

// LDC issue #1433
void main()
{
    auto r = getBase()[getLowerBound($) .. getUpperBound($)];
    assert(r == [ 2, 3 ]);
}
>>>


Firstly, it fails with DMD 2.071 because $ in the upper bound expression is 0, i.e., it doesn't reflect the updated length (1) after evaluating the lower bound expression; LDC does reflect it. Secondly, DMD 2.071 throws a RangeError, most likely because it uses the initial length (0) for the bounds checks as well.

Most interesting, IMO, is the question of when the slicee's pointer is to be loaded. This is only relevant if the base is an lvalue and may therefore be modified while evaluating the bound expressions. Should the returned slice be based on the slicee's buffer before or after the bound expressions are evaluated? This question was triggered by https://github.com/ldc-developers/ldc/issues/1433, as LDC loads the pointer before evaluating the bounds.
