On Saturday, 27 June 2015 at 11:10:49 UTC, Marc Schütz wrote:
On Saturday, 27 June 2015 at 01:18:19 UTC, Jonathan M Davis wrote:
That's a completely orthogonal issue to ref or auto ref. That's an @system operation (since taking the address of a local variable is @system), and it's up to you to not screw it up. scope might cover that if it were fully ironed out, since that does involve escaping, but it also might not, since it's an @system operation and thus up to you not to screw it up.

The point is that with scope, it _doesn't_ need to be @system anymore.

Regardless, it has nothing to do with whether the function accepts lvalues and rvalues with the same parameter. You have a safety issue any time you take a pointer to an argument (or to some portion of the argument), since there's no guarantee that the pointer's lifetime is shorter than the argument's.

No, not with scope. [...] the problems you mention will no longer exist with scope.

Exactly, that's the whole point.
The relation to lvalue/rvalue reference parameters is that the compiler should be able to statically detect and refuse escaping-rvalue bugs. Escaping lvalues, with their potential for dangling pointers, should be @system-only, but preventing those bugs is a whole different beast.
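A minimal sketch of the bug class I mean (names made up; the dangerous call is shown as a comment because it doesn't compile today):

```d
struct S { int x; }

S* escaped;

// stores the address of its ref argument somewhere with a longer lifetime
void keepAddress(ref S s) @system
{
    escaped = &s;
}

void main()
{
    S lv = S(2);
    keepAddress(lv);      // escaping an lvalue: dangerous, but @system-only territory
    // keepAddress(S(1)); // if rvalues could bind to `ref`, the temporary would die
                          // right after the call and `escaped` would dangle --
                          // exactly the kind of bug the compiler should refuse statically
}
```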

Still, to make (a) taking the address of ref params, locals & globals and (b) using pointer params @safe, the compiler needs to know whether the parameter escapes or not. In the future, I'd rather let the compiler figure that out than have to annotate ref and pointer params with `scope` all over the place.
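Roughly what I mean by (a) and (b), assuming the compiler actually enforced `scope` for these checks (it doesn't yet, so treat this as a sketch):

```d
// a pointer parameter that provably doesn't escape could be used from @safe code
@safe int readThrough(scope int* p)
{
    return *p;                  // fine: p never leaves this function
}

@safe int useLocalAddress()
{
    int local = 42;
    return readThrough(&local); // fine: the compiler knows the address doesn't escape
}

// whereas a parameter that does escape couldn't be marked `scope`:
int* leaked;
void leak(int* p) { leaked = p; } // would have to be rejected if p were `scope`
```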

We introduced the return attribute (currently only with -dip25) to fix this problem. So, with that, we eliminate the safety problem with ref itself, and we can safely have auto ref accept rvalues by having it assign them to a temporary variable first.

Yes, this is now possible, but why in the world do you want the keyword that enables it to be `auto ref`?

Exactly. For a function returning a ref parameter, it doesn't matter whether the argument is an rvalue or an lvalue. The important thing is to propagate that distinction (automatically, by the compiler) so as to disallow binding a returned rvalue reference to an escaping ref parameter (i.e., no `escape(forward(S(1)))`). Where the distinction between lvalues and rvalues does matter for a function returning a ref parameter, as for any other function with ref params, is whether the parameter escapes (i.e., by storing a pointer somewhere). If it does, it should be `return ref`, otherwise `return scope ref` (either explicitly or inferred automatically by the compiler).
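Spelled out in code (my guesses at what `forward` and `escape` stand for; `return ref` needs -dip25 at the moment):

```d
struct S { int x; }

S* global;

// returns its ref parameter, so the lvalue/rvalue distinction must be propagated
ref S forward(return ref S s) { return s; }

// escapes its ref parameter by storing a pointer somewhere
void escape(ref S s) { global = &s; }

void main()
{
    S lv = S(1);
    escape(forward(lv));      // ok: the stored pointer refers to lv, which lives on
    // escape(forward(S(1))); // must be refused: the temporary dies right after the
                              // full expression, leaving `global` dangling
}
```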

Any issues with taking pointers to arguments are firmly in @system territory and are thus completely orthogonal. Even if scope were implemented in a way that prevented them, it would still have nothing to do with whether the function accepted both rvalues and lvalues with the same parameter.

Of course it would if one actually made use of a scope system (again, either explicit or as a result of compiler escape analysis) to (a) rule out all escaping-rvalue bugs and (b) possibly warn about or highlight escaping lvalue params in @system code. The main aim of the scope system would be to enhance the flexibility of @safe code, though.

In the meantime, what about making all `ref` params accept rvalues

Please, please, no. ref indicates that your intention is to mutate the argument. Having it accept rvalues completely destroys that and makes it that much harder to understand what a function is supposed to do.

If that were the only reason for not accepting rvalues as `ref` params, then rvalues should have been allowed for `const ref` right from the beginning, as there's surely no intention to mutate a `const ref` argument.
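For reference, the status quo in question (my own example):

```d
struct BigStruct { int[16] data; }

// no intention to mutate whatsoever, yet rvalues are still rejected
int sum(const ref BigStruct s)
{
    int result;
    foreach (x; s.data) result += x;
    return result;
}

void main()
{
    BigStruct b;
    auto a = sum(b);              // ok: lvalue
    // auto c = sum(BigStruct()); // error today: rvalues don't bind to (const) ref
}
```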

I can accept that argument, although I see cases where it's still justified: for example, when the function's author decided that it mutates the argument, but you (the caller) aren't interested in the new value. But still, explicitly using a temporary named `dummy` is preferable, because it even documents that you're going to ignore it.
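I.e. the `dummy` workaround looks like this (my own tiny example):

```d
struct S { int x; }

void bump(ref S s) { ++s.x; }

void main()
{
    // the callee mutates its argument, but this caller doesn't care about the
    // new value; the named temporary documents that it's deliberately ignored
    S dummy = S(0);
    bump(dummy);
}
```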

I find letting the callee decide whether the caller needs to provide an lvalue (`ref`) or not (non-templated `auto ref`), based solely on making sure the mutations are directly visible to the caller after the call, too much. I think it should be left to the caller to decide which mutations it's interested in.
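For context, that's the flexibility the existing (templated) `auto ref` already gives; what's discussed here is a non-templated variant with the same behaviour:

```d
struct S { int x; }

// today `auto ref` only works on templates
void consume()(auto ref S s)
{
    // binds directly to an lvalue argument, or to a compiler-generated
    // temporary when the argument is an rvalue
}

void main()
{
    S lv;
    consume(lv);   // lvalue: passed by reference
    consume(S(1)); // rvalue: also fine with templated auto ref
}
```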

But now I see why some would oppose calling that `scope ref`: it really has nothing to do with scope or safety; it would just be an arbitrary way of classifying ref params into 'essential output' (mutable lvalue `ref`s whose mutations need to be visible to the caller) and some other category (lvalue/rvalue `auto ref`s, not just for efficiency purposes). That really is an orthogonal and incompatible semantic compared to what `scope ref` is all about, and it doesn't interest me at all. I find flexible @safe code and detecting escaping bugs a kazillion times more important than making sure a caller sees all mutations the callee deems essential after the call.
