**TL;DR**: my take-aways so far are as follows:
1. `owned` is essentially `sink` for the new RC'ed `ref`, applied to smart
pointers instead of to objects (although in fact a smart pointer can be
implemented as an object).
2. We actually need both `ref`, which is an RC'ed smart pointer on the
non-shared heap, and `shared`, which is a smart pointer for the shared heap.
For those who know what they are doing, these can be cast to each other once
created, so that one can access the shared pointer without the atomic locks
when one provides one's own means of avoiding contention (external locks, an
algorithm ensuring there is no simultaneous access by multiple threads, etc.);
this is useful when one wants to share data between threads but still access
it quickly.
3. Thus, `owned` is a qualifier that can be put on either the RC'ed `ref` or
on `shared` to give sink semantics to either, although the underlying
structure is still RC'ed.
4. If it were desired to avoid RC altogether for those cases, `owned` could be
defined as its own kind of smart pointer without RC; but although that would
save resources slightly, it wouldn't be any faster, and this third class of
smart pointer would then have to be prohibited from being cast back and forth
to either the RC'ed `ref` or `shared` - not likely worth it.
5. The smart pointer for an escaped closure's environment can normally be an
RC'ed `ref` when non-threaded, but should likely be an RC'ed `shared` when
threading is on, so that closures can be passed between threads. Another
approach would be to add a pragma `{.closureShared.}` that is the default when
threads are on but can be overridden back to normal `ref` closures by those
who know what they are doing. I actually favour this solution, as it gives the
most flexibility for performance without adding complexity for the casual
user.
6. `{.gcSafe.}` becomes `{.threadSafe.}` to override the restriction on
passing (safe) globals and RC'ed `ref`s, or objects that use RC'ed `ref`s,
across threads when the programmer deems them safe.
7. We won't need protect/dispose at all, thank goodness! Everything can be
handled by overriding `=destroy` and `=` as well as the `=sink` procs.
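To make point 7 concrete, here is a minimal sketch of how a hand-rolled RC'ed smart pointer could be expressed entirely through the existing destructor hooks; `RcRef`, `Payload` and `newRcRef` are my own illustrative names, not part of any proposal:

```nim
# Sketch only: an RC'ed smart pointer via the destructor hooks.
type
  Payload[T] = object
    refcnt: int
    value: T
  RcRef*[T] = object
    p: ptr Payload[T]

proc `=destroy`[T](r: var RcRef[T]) =
  if r.p != nil:
    dec r.p.refcnt              # "destroy" for an RC'ed ref is just a dec
    if r.p.refcnt == 0:
      `=destroy`(r.p.value)
      dealloc(r.p)
    r.p = nil

proc `=sink`[T](dst: var RcRef[T]; src: RcRef[T]) =
  `=destroy`(dst)               # ownership moves: no refcount traffic at all
  dst.p = src.p

proc `=`[T](dst: var RcRef[T]; src: RcRef[T]) =  # the copy hook
  if dst.p != src.p:
    `=destroy`(dst)
    dst.p = src.p
    if dst.p != nil: inc dst.p.refcnt   # a copy is just an inc

proc newRcRef*[T](v: sink T): RcRef[T] =
  result.p = create(Payload[T])
  result.p.refcnt = 1
  result.p.value = v
```

The atomic `shared` variant would be the same shape with atomic inc/dec on `refcnt`.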
> I hope that by C++'s approach(unique_ptrs) you meant Rust's approach(smart
> unique pointers) because the former's way is neither safe(allows dangling
> pointers) nor nice(useless and verbose wrapper) nor efficient(allows leaks,
> requires extra steps to work with the data).
@trtt, sorry, I meant the **idea** of `unique_ptr`, not its application, and
without the complexity of Rust's owned pointers. I'm starting to come around
to @Araq's proposal, especially since reading his answer to a comment in the
RFC that "owned T is not a parameter passing mode and if the code has no clean
ownership I doubt it'll ever work reliably on multicore."; that says exactly
what I am saying here: `owned` can't work for parameter passing (without
adding all the complexity of Rust's lifetimes, which they still haven't made
completely work for escaped closures), and therefore also can't be used for
multi-threading. **EDIT ADD: however, with the concept of `owned` as a `sink`
for RC'ed pointers, I'm starting to wonder if one could pass an `owned shared`
inter-thread...**
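To make that EDIT concrete, here is the kind of thing I have in mind; every bit of this syntax (`owned shared`, `newShared`, a `Channel` carrying owned values) is hypothetical:

```nim
# Entirely hypothetical syntax: nothing here compiles today.
proc worker(data: owned shared ref seq[int]) {.thread.} =
  echo data[].len          # the worker is now the sole owner

var chan: Channel[owned shared ref seq[int]]
var payload: owned shared ref seq[int] = newShared(@[1, 2, 3])
chan.send(move payload)    # a move: no refcount races, no deep copy
# `payload` is disarmed here; the receiving thread takes ownership
```

The appeal is that the sender provably held the only reference, so the transfer needs no atomics and no copying.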
> Having a single-alias reference (with a borrow checker), a counted reference
> and an atomically counted reference would be easier to implement than a good
> multi-threaded GC anyway and it would enable safe multi-threaded programming
> where I don't need to deal with the thread-local semantics.
I don't see anywhere that Araq wants to add the complexity of a borrow checker
and the implied ability to track lifetimes programmatically as in Rust;
however, he does seem to feel that static analysis and debug checks can serve
the same purposes, as per the last part of your post. I like this approach:
the reason I am here in Nim and not much seen in Rust these days is Rust's
complexity in that regard (although it is good for safety), and especially its
continued unresolved way forward on how to handle closures as required for
pure functional forms of algorithms. The hoops one has to go through in order
to implement a simple memoizing lazy linked list in Rust! Those hoops are
currently almost as bad in Nim when one wants the generated lazy list to work
across threads, but this proposal at least may offer a simple way forward as
an alternative to a multi-threaded GC (which would also work, but at a cost in
complexity and execution speed, as Araq brought up in his original blog post
for this idea).
What hasn't been mentioned anywhere is that getting rid of the GC gets rid of
the protect/dispose procs, which would be a very good thing, as these are
extremely tedious to use yet necessary for using single-threaded GC heap
allocated structures across threads.
> On the other hand, this would be only useful if Araq wants nim to be used for
> low-level programs(making it a Rust competitor where it'll most likely fail
> due to marketing reasons) because this approach is unnecessarily complex and
> doesn't bring enough value to web/application developers.
I wouldn't presume to assume Araq's intentions for the language. I am here
rather than in Rust because I see that Nim offers a good mix of efficient
low-level capability without getting overly complex, except that
multi-threading got overly complex due to the limitations of the current GC
(and I also like Nim's syntax :) ). I am beginning to see that if `owned` refs
work out, they won't necessarily add too much complexity to general use of the
language, and their use is optional, serving to gain a little more performance
by reducing or eliminating reference counting in some situations. The point of
my original post is not that the `owned` idea is useless, but rather that it
is only a special use case: RC'ed "unowned" refs need to work simply for the
more general case, and therefore that is what needs the most immediate
attention.
One could simply use unowned refs (which I understand are the default if
`owned` isn't used), in which case one presumably gets the non-atomic
reference-counted pointers that I've shown in my last post to be faster than
the current GC'ed refs. The atomic unowned RC'ed refs need only be used for
shared heap access, in which case they would be used just like the non-atomic
version, only slower. To permit their use this way, we may need one more type,
say `shared`, with the default `ref` being "unowned and unshared" (but still
RC'ed) and the default `shared` still unowned. The current `pointer` and
`ptr T` are just the usual weak pointers that have nothing to do with memory
persistence, but they could be checked, or a `weak T` version could be
checked, to see if they are still valid. Now, Araq has raised the problems
with RC'ed pointers, but I see that this also needs the application of Swift's
Automatic Reference Counting, with the caveat that we must foresee and avoid
its downfall when a huge number of references go out of scope at once. I also
think some consideration needs to be given to whether yet another
not-very-smart pointer type, `weak`, needs to exist for when reference cycles
need to be broken.
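Summarising the taxonomy I'm describing, as hypothetical declarations (only plain `ref` and `ptr` exist in Nim today; every other qualifier here is proposed or speculative):

```nim
# Hypothetical declarations summarising the pointer kinds above.
var a: ref Node          # default: unowned, unshared, non-atomic RC
var b: shared ref Node   # unowned, atomic RC, on the shared heap
var c: owned ref Node    # sink semantics layered on the RC'ed ref
var d: ptr Node          # raw pointer; no effect on persistence
var e: weak Node         # checkable non-owning pointer, breaks cycles
```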
The relationship and conversion rules between the different kinds of refs need
to be ironed out, and I think that may be helped by combining this with the
application of `sink`/`lent`/`=destroy`, etc., as already available to
objects. The comments in the RFC tell me that owned pointers are a kind of
super-application of these already-implemented ideas, and it would seem to
have merit to somehow combine them. Let's look at those re-writing rules and
consider how they apply to RC'ed refs (and therefore to non-RC'ed `owned` and
atomically RC'ed `shared`):
> Rule: Pattern ⟶ Transformed into
>
> 1.1 `var x: T; stmts` ⟶ `var x: T; try stmts finally: =destroy(x)`
For the above, any assignment just increments the refcount; there is no
assignment until the variable is actually used, even if it is to a temporary
(or implicitly done in place).
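A concrete reading of rule 1.1 for an RC'ed ref might look like this (the lowering in the comments is my own sketch; `newNode` and `use` are placeholder names):

```nim
# Illustrative lowering of rule 1.1 for an RC'ed ref.
var x: ref Node       # starts nil; no refcount is touched yet
try:
  x = newNode()       # first real assignment: refcnt becomes 1
  use(x)
finally:
  `=destroy`(x)       # for an RC'ed ref: if x != nil: dec refcnt
```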
> 1.2 `var x: sink T; stmts` ⟶ `var x: sink T; stmts; ensureEmpty(x)`
As per the comments in the RFC, `owned`, being `sink` for refs, has its own
rule for `ensureEmpty`, which is that there must be no other dangling pointers
to it.
> 2 `x = f()` ⟶ `=sink(x, f())`
Here, again, `sink` is `owned`, and it just means that the result of a
function generating a new unowned ref with no other references can be
converted to an `owned`.
> 3 `x = lastReadOf z` ⟶ `=sink(x, z); reset(z)`
For this one we have two cases for `z`: if it is `owned`, then ownership is
transferred and the original location is disarmed instead of reset; if it is
unowned or `shared`, then it must have no other references (refcnt == 1), else
a runtime error.
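Rule 3's two cases could be sketched like this (hypothetical semantics, not compilable today; `newNode` is a placeholder):

```nim
# Hypothetical illustration of rule 3 for the two cases above.
var z = newNode()     # unowned, refcnt == 1
var x: ref Node
x = z                 # z is statically its last read, so this lowers to
                      #   `=sink`(x, z); reset(z)
# owned z:            ownership transfers; z is disarmed, not reset
# unowned/shared z:   refcnt must be 1 here, else a runtime error,
#                     since a surviving alias would be left dangling
```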
> 4.1 `y = sinkParam` ⟶ `=sink(y, sinkParam)`
This one doesn't apply, as `owned` can't be passed; if it is one of the other
two, then the rules above apply.
> 4.2 `x = y` ⟶ `=(x, y) # a copy`
Easy: `owned` has no copy; the others just increment the refcount and refer to
the same thing.
> 5.1 `f_sink(g())` ⟶ `f_sink(g())`
>
> 5.2 `f_sink(y)` ⟶ `f_sink(copy y) # copy unless it's the last read`
>
> 5.3 `f_sink(move y)` ⟶ `f_sink(y); reset(y) # explicit move empties 'y'`
The above three don't apply here, as there is no calling with an `owned`
parameter.
> 5.4 `f_noSink(g())` ⟶ `var tmp = bitwiseCopy(g()); f(tmp); =destroy(tmp)`
If `var x` is a normal, non-sink ref of any of the three types, it gets
destroyed as per the standard definition of "destroy" for RC'ed pointers: a
refcount decrement. If other refs are declared, they also get destroyed the
same way, with my only qualification being that nested try/finally blocks
should be analysed so they don't build up stack, to avoid Swift's problem.
I see that we could have Swift's simplicity of automatic reference counting
yet avoid the "gotcha" of excessive stack use by being careful about how the
dispose list is handled, so that it is chased with a loop rather than a stack
structure. If the compiler can statically determine the "lastReadOf" any given
value, then the "dispose" (a refcount decrement) could happen then and there
without any complications. So no complexities, just use them!
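For the loop-versus-stack point, a linked-list disposal along these lines shows the idea (my own sketch, assuming RC'ed `ref` semantics; `disposeChain` is an illustrative name):

```nim
# Sketch: tearing down a long RC'ed list iteratively, so that a
# million-node chain can't exhaust the stack the way naive
# recursive destruction can (Swift's "gotcha").
type Node = object
  next: ref Node

proc disposeChain(head: var ref Node) =
  var cur = head
  head = nil           # drop the external reference
  while cur != nil:
    let nxt = cur.next
    cur.next = nil     # unlink, so only `cur` itself dies in this step
    cur = nxt          # old node's refcnt hits 0; freed with no recursion
```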
To test this, I'm using the ideas above to build shared and unshared memoizing
lazy lists, and will test to be sure that they can be consumed and don't
suffer Swift's problems. It's looking to be very simple once the
infrastructure and libraries are in place. I'll post it somewhere when
complete if anyone is interested.
Obviously, this is a Work In Progress and ideas evolve as one tries to use
them, but I'm starting to love the concept.