On Saturday, 19 April 2014 at 17:41:58 UTC, Walter Bright wrote:
On 4/19/2014 6:14 AM, Dicebot wrote:
On Thursday, 17 April 2014 at 22:04:17 UTC, Walter Bright wrote:
On 4/17/2014 1:03 PM, John Colvin wrote:
E.g. you can implement some complicated function foo that writes to a user-provided output range and guarantee that all GC usage is in the control of
the caller and his output range.
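The pattern Colvin describes can be sketched roughly like this (the names `formatInto` and `Sink` are illustrative, not actual Phobos API):

```d
import std.array : appender;
import std.range.primitives : put;

// Sketch: formatInto writes through a caller-supplied output range,
// so any GC allocation is confined to the caller's choice of sink.
void formatInto(Sink)(ref Sink sink, int value)
{
    put(sink, "value: ");
    // ... whatever the algorithm produces goes through `sink` only
}

void main()
{
    auto buf = appender!string();  // caller opts into a GC-backed buffer
    formatInto(buf, 42);
}
```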

As mentioned elsewhere here, it's easy enough to do a unit test for this.


Erm, no? You can possibly track GC calls by using a custom druntime fork, but you can't track the origins of such calls in the source tree without compiler help.

@nogc is there to help.


The advantage of having this as language instead of documentation is the turtles-all-the-way-down principle: if some function deep inside the call chain under foo decides to use a GC buffer, then it's a compile-time error.

And that's how @nogc works.
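A minimal sketch of the transitive checking being described: @nogc applies all the way down the call chain, so a GC allocation anywhere under an @nogc function is rejected at compile time. The function names here are made up for illustration:

```d
int allocates()
{
    auto buf = new int[16];   // GC allocation, fine in ordinary code
    return cast(int) buf.length;
}

@nogc int leaf() { return 3; }

@nogc int chain()
{
    return leaf();            // OK: callee is @nogc as well
    // return allocates();    // error: @nogc function 'chain' cannot call
    //                        // non-@nogc function 'allocates'
}
```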

And it is not good enough for practical reasons, i.e. we won't be able to use @nogc for most of Phobos.

The first step is to identify the parts of Phobos that unnecessarily use the GC. @nogc will help a lot with this.

I feel like the origin of the discussion has been completely lost here and we are not speaking the same language right now. The point I made initially is that @nogc, as defined in your DIP, is too restrictive to be used effectively in Phobos.

In a lot of standard library functions you may actually need to allocate as part of the algorithm, so strict @nogc is not applicable there. However, it is still extremely useful to guarantee that no _hidden_ allocations happen outside of a well-defined user API, and this is something a less restrictive version of @nogc could help with.
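The weaker guarantee might look roughly like this: the algorithm does allocate, but only through a policy the caller passes in, so no allocation is hidden from the user-facing API. This is a sketch of the idea only; `GCBuffers` and `process` are hypothetical names, not an actual proposal:

```d
// Caller-visible allocation policy; choosing this one opts into the GC.
struct GCBuffers
{
    ubyte[] allocate(size_t n) { return new ubyte[n]; }
}

size_t process(Buffers)(ref Buffers buffers, size_t n)
{
    auto tmp = buffers.allocate(n);  // the only allocation, visible at the API
    // ... algorithm uses `tmp` as scratch space ...
    return tmp.length;
}

void main()
{
    auto gc = GCBuffers();
    auto len = process(gc, 64);
    assert(len == 64);
}
```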

The fact that you propose using unit tests to verify the same guarantees hints that I have completely failed to explain my proposal, but I can't rephrase it any better without some help from your side to identify the point of confusion.
