I have the following code:

    double[string] foo;
    foo["a"] += 1;

How is opOpAssign defined for AAs? For a missing key with a primitive value type, is it defined to simply store the right-hand side of the opOpAssign, or does it apply the operation to T.init?
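
To make the two readings concrete (this is just how I would paraphrase them, not anything taken from the spec):

    double[string] foo;
    foo["a"] += 1;

    // reading 1: a missing key simply stores the right-hand side
    //            -> foo["a"] would end up as 1
    // reading 2: the value starts out as double.init (NaN) and the +=
    //            is applied to it -> foo["a"] would end up as NaN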

Doing

    foo["b"]++;

gives me 1, so the increment appears to start from an arbitrary 0. However, when I do

    double v = foo["c"]++;

I wanted to find out what initial value the increment is based on. On dmd this triggered a segfault, and on ldc it gave me 0. Where does the 0 come from? double.init should be NaN.
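
Here is the reduced test I'm using, split up so the stored value is read back in a separate expression rather than through the result of the ++ itself; maybe that helps narrow down where the 0 comes from:

    import std.stdio;

    void main()
    {
        double[string] foo;
        foo["c"]++;             // insert the key through ++ without reading its result
        writeln(foo["c"]);      // read the stored value back separately
        writeln(double.init);   // nan, for comparison
    }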

When I have a custom user-defined type like

    import std.stdio;

    struct Foo
    {
        int x = 4;

        ref Foo opOpAssign(string op : "+")(int v)
        {
            x += v;
            return this;
        }

        Foo opBinary(string op : "+")(int v)
        {
            return Foo(x + v);
        }
    }

    void main()
    {
        Foo[string] foo;
        foo["a"] += 2;
        writeln(foo);
    }

it gives me a range violation at runtime instead of initializing the value for me.
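
My working assumption is that for a user-defined type the += is rewritten into a plain index followed by the opOpAssign call, so the missing key is looked up before anything could initialize it. A manual guard along these lines is the obvious workaround, but it is exactly the verbosity I'd like to avoid:

    Foo[string] foo;

    // presumably this is rewritten to foo["a"].opOpAssign!"+"(2),
    // and indexing the missing key is what throws the RangeError:
    //foo["a"] += 2;

    if ("a" !in foo)
        foo["a"] = Foo.init;   // insert the key by hand first
    foo["a"] += 2;             // now the opOpAssign has something to work on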

There is `aa.require("a", Foo.init) += 4;` now, which solves this, but I would prefer the small, simple syntax to be well defined for all types instead of only for primitives. Also, I don't see anywhere in the specification that `require` must actually return a ref value, so I can't rely on this either.
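
For completeness, here is the require-based workaround spelled out (using the Foo struct from above), plus `update` as another option. I'm relying on the current druntime signatures here (require returning a ref, update taking a create and an update callable), which is exactly the part I can't find guaranteed in the spec:

    Foo[string] foo;

    // require: in the current implementation this returns a ref to the
    // inserted-or-existing value, so += can be applied to it
    foo.require("a", Foo.init) += 4;

    // update: create the value when the key is missing, mutate it otherwise
    foo.update("b",
        () { auto f = Foo.init; f += 4; return f; },
        (ref Foo f) { f += 4; });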
