On 15/05/2011 7:34 AM, john skaller wrote:
> I'm intrigued by the 3 level memory system (stack, ref count, gc) and wonder how
> that works out in practice. Does it ever get in the way?

Not sure. We don't have enough experience with it yet. We'll be better able to answer this in a few months.

> I'm also a bit curious about ref counting. As a memory management technique it
> is known to be slower than a good GC, partly because of the overhead of managing
> the count, and partly because doing so can destroy cache coherence by touching
> memory that otherwise doesn't need to be touched.

This statement is highly debatable. There are several kinds of GC and several kinds of RC, and they have different cache behavior (GC thrashes the whole cache regularly and doubles heap pressure; RC hurts cache coherence *if* you're doing shared-memory multiprocessing, but otherwise tends to cost in code overhead...)

Lots of GCs hybridize and RC is nowhere near as dead as its detractors make it out to be.

> Also, at least in C++ the primary use of destructors is to release memory,
> which is not necessary with a GC, and ordered and timely release of memory
> is not really useful except perhaps in applications with hard real time
> constraints.
> On the other hand, ordered synchronous release of some resources (file handles,
> locks, etc) is essential, but many languages without destructors (such as Ocaml)
> don't seem to have a huge problem with this.

This is pretty subjective. Destructors mean your cleanup routines can do predictable work, in a way that's much more difficult and ambiguous in finalizers. I do think timely release of resources -- including memory -- is more important than you say here. But it's hard to quantify.

> Also, destructors have a very serious design
> fault: failures inside destructors are notoriously hard to handle.

We have a well-defined (not flawless, but I think reasonable) strategy for double-faulting (failing inside a dtor running due to failure). Failure being idempotent and unrecoverable helps.

> So I'm curious about the decision to use ref counting and deterministic
> destructors.
> [I'm just curious, I think the design is very interesting!]

Ref counting recycles memory sooner, more deterministically, touches less of the heap (blows out the cache less), has lower heap overhead, lower average latency, etc. It's not without merit. Deterministic destructors are ... a unique feature that there's no way to emulate without them. Finalizers and finally blocks do different things, and each only does a part of dtors. From a parsimony perspective, dtors cover both and do both better, so why wouldn't you use them?

FWIW, I also had this 3 level system implemented in my language Felix,
but without the explicit static control Rust provides. What I found was that
the performance overheads, especially in the presence of shared memory
concurrency that Felix supported, were very high.

We do not support shared memory concurrency. RC is non-atomic. And the GC doesn't have to touch the RC subgraphs.

> Does Rust ensure ref counted objects can't contain cycles?

Yes. They are (or will be, when this part is actually finished) a separate type kind.

-Graydon
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev