> As can be seen in this Wandbox implementation of the above, the execution 
> speed of this naive RC'd solution is actually faster than the GC'ed version 
> for non-threaded/non-atomic implementations, but over twice as slow when 
> atomic reference counting is added

This is misleading. The main cost of naive reference counting comes from 
assigning pointers to local variables (including argument passing). Your 
microbenchmark performs reference-count operations only when assigning to 
heap locations, so it does not represent a typical workload. With naive 
reference counting, operations such as `for p in someSeqOfRefs` become much 
more expensive.

> Why not use the C++ approach, "unique" ref's where they can be used 
> efficiently and "shared" RC'ed ref's when it doesn't really work, with the 
> emphasis on getting the RC'ed version out until the "owned" version can be 
> tested and tweaked for some extra efficiency where warranted?

Without proper type-system support (substructural types), unique pointers are 
the worst of both worlds. They are acceptable in C++ because C++ is not 
memory-safe to begin with, so the fact that unique pointers are also not 
memory-safe does not make the situation much worse. Unique pointers exist to 
facilitate RAII; they do not per se help with memory safety. To make them 
memory-safe, you need type-system support and/or have to accept additional 
overhead.

Likewise, C++ shared pointers are a compromise solution born of the inherent 
limitations of a weakly typed low-level language. They incur considerable 
overhead in common code, even in the non-atomic version, and, to add insult to 
injury, the atomic version is not actually thread-safe without additional 
precautions.

There is no good reason to adopt the C++ approach for a language that is not 
similarly restricted in its design.
