What you say is entirely correct, albeit incomplete, so let me fill in the 
missing parts:

A simple mark&sweep GC has the very nice property of not adding any overhead to 
pointer assignments. It is also fundamentally compatible with manual memory 
management: you can free individual pieces on your own, reducing memory 
pressure so that the GC runs less often. And if you manage everything 
"manually", you can disable the GC entirely. It's a memory management "hybrid" 
and can be seen as "gradual" memory management.
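The hybrid idea can be sketched with the standard `alloc0`/`dealloc` procs; the `Buffer` type below is invented purely for illustration:

```nim
# Sketch: mixing manual allocation with the GC. `alloc0`/`dealloc` are
# standard `system` procs; the `Buffer` type is invented for illustration.
type Buffer = object
  len: int
  data: ptr UncheckedArray[byte]

proc newBuffer(len: int): Buffer =
  # This memory is invisible to the GC: never scanned, never collected.
  Buffer(len: len, data: cast[ptr UncheckedArray[byte]](alloc0(len)))

proc free(b: var Buffer) =
  dealloc(b.data)
  b.data = nil

var b = newBuffer(1024)
b.data[0] = 42
free(b)   # freed on our own: less memory pressure, the GC runs less often
# With the refc/mark&sweep runtime you could additionally call
# GC_disable()/GC_enable() to switch collections off entirely.
```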

For this reason, araqsgc offers `dispose` and `deepDispose` operations (these 
are currently being backported to the other GCs). The idea is that the stdlib 
uses `deepDispose` in strategic places (async's event loop comes to mind), and 
it can also be put into a custom `=destroy`.
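A sketch of the `=destroy` idea, assuming nothing about the real API: `deepDispose` here is a counting stand-in for the araqsgc operation, and `JsonLike`/`Doc` are made-up types:

```nim
# Sketch: a destructor that bulk-frees an owned graph. `deepDispose` is a
# counting stand-in for the araqsgc operation; `JsonLike`/`Doc` are made up.
type
  JsonLike = ref object
    kids: seq[JsonLike]
  Doc = object
    root: JsonLike

var deepDisposed = 0
proc deepDispose(n: JsonLike) = inc deepDisposed   # stand-in

proc `=destroy`(d: var Doc) =
  if d.root != nil:
    deepDispose(d.root)   # free the whole graph when the owner goes away

proc use() =
  var d = Doc(root: JsonLike())
  # `=destroy(d)` runs automatically at the end of this scope

use()
```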

For example, consider a "node"-based data structure (`json.nim`, `lists.nim`, 
`ropes`...): you can free individual nodes (risky, but often you do know enough 
about your program to do that safely) or you can bulk-free every node in it.
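Bulk-freeing a node-based structure could look roughly like this; `dispose` is a counting stand-in for the araqsgc operation, and the `Node` type is ad hoc:

```nim
# Sketch: bulk-freeing every node of a list. `dispose` is a counting
# stand-in for araqsgc's per-node free; the `Node` type is ad hoc.
type Node = ref object
  value: int
  next: Node

var freed = 0
proc dispose(n: Node) = inc freed   # the real one returns the memory at once

proc freeAll(head: Node) =
  var it = head
  while it != nil:
    let nxt = it.next   # grab the link before the node is gone
    dispose(it)
    it = nxt

var head: Node
for i in 1 .. 3:
  head = Node(value: i, next: head)
freeAll(head)
```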

To do that (mostly) safely and easily, you can encapsulate your data structure 
in a refcounted wrapper like `Refcount[JsonNode]`. This uses refcounting for a 
complete JSON graph, not for individual objects, which seems to be key for 
performance: refcounting at the granularity of individual objects is almost 
never what you want.
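A minimal sketch of such a wrapper; `Refcount`, `wrap`, `incRef`, `decRef` and the counting `deepDispose` stand-in are all invented for illustration, not an actual API:

```nim
# Sketch of a coarse-grained refcounted wrapper: one count per complete
# graph, not per object. All names here are illustrative.
type Refcount[T] = object
  count: ptr int   # shared by all handles to the same graph
  payload: T

var disposed = 0
proc deepDispose[T](x: T) = inc disposed   # stand-in for the real bulk free

proc wrap[T](payload: T): Refcount[T] =
  result.count = create(int)
  result.count[] = 1
  result.payload = payload

proc incRef[T](r: Refcount[T]): Refcount[T] =
  inc r.count[]
  r

proc decRef[T](r: Refcount[T]) =
  dec r.count[]
  if r.count[] == 0:
    deepDispose(r.payload)   # the whole graph goes at once
    dealloc(r.count)

let a = wrap(@[1, 2, 3])
let b = incRef(a)
decRef(b)
decRef(a)   # count hits 0: graph disposed exactly once
```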

This still leaves us with the inherently hard use-after-free problem. Under 
this scheme it is mitigated, but not solved. However, B/D shows that detecting 
use-after-free bugs can be reframed from "detect dangerous pointer _read_ 
operations" to "detect potentially dangling refs via refcounting", and maybe 
that's good enough. The overhead seems to be high enough that you want to 
disable it in a production setting, though; for exploit prevention you can 
then use type-based node allocation.
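The shifted check could look like this in spirit: refcount potential danglers and verify the count at free time, instead of instrumenting every pointer read. All names are illustrative:

```nim
# Sketch of the B/D-style shift: count outstanding refs and check at
# dispose time rather than at every pointer read. All names are made up.
type Cell = ref object
  danglingCount: int   # refs that still point at this cell
  value: int

proc trackRef(c: Cell): Cell =
  inc c.danglingCount
  c

proc untrackRef(c: Cell) =
  dec c.danglingCount

proc checkedDispose(c: Cell) =
  # Freeing while refs remain is a use-after-free in the making;
  # here it becomes a deterministic error at dispose time.
  doAssert c.danglingCount == 0, "potentially dangling refs detected"

var c = Cell(value: 1)
let alias = trackRef(c)
untrackRef(alias)
checkedDispose(c)   # fine: no outstanding refs
```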

Having said that, classic B/D with `owned` still looks more elegant... ;-)
