Well, I can only describe how arc works, but I can assure you that Rust/Lobster
work very similarly, and none of these languages have "compile-time reference
counting", strictly speaking.
Nim's integers, floats, enums and bools, as well as objects and arrays of these,
are _always_ "value"-based types. And they always have been. They are embedded
into their host container. They are not necessarily allocated on the stack
(though usually they are), for example:
type
  O = object
    a: array[2, int]

proc main =
  var x = (ref O)(a: [1, 2]) # aha! we have an array here! and it's not allocated on the stack!
`a` is directly _embedded into_ the `O` and we put it onto the heap. The `x`
itself is stored on the stack and points into the heap. Now the question is:
when is the block inside the heap freed? Under --gc:arc/orc it is always at the
end of `main`. Under the other GCs it's "you don't know". The same
deterministic behavior holds for C++'s shared_ptr and unique_ptr, Rust's
equivalents and whatever Lobster's name for these things is.
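
You can watch this determinism directly. Here is a minimal sketch of my own
(not part of the original example), assuming you compile with --gc:arc: attach
a `=destroy` hook to `O` that announces itself:

type
  O = object
    a: array[2, int]

proc `=destroy`(o: var O) =
  # runs when the heap cell's reference count drops to zero
  echo "O freed"

proc main =
  var x = (ref O)(a: [1, 2])
  echo "still inside main"

main()
# prints:
#   still inside main
#   O freed          <- deterministically, at the end of `main`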
Why is that? Well, there is only one pointer to the `ref O`; it's a _uniquely_
referenced memory cell. Now let's make this program more complex:
proc main =
  var x = (ref O)(a: [1, 2])
  var y = x
Now we have 2 references to the `ref O`. When is it freed? Still at the end of
`main`. Why? Because that's how reference counting works. What's the point of
move semantics, then? It makes the assignment `var y = x` cheaper by exploiting
the fact that `x` isn't used afterwards. Does this affect "deterministic"
memory management? No.
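
Here is a rough way to observe that optimization, again a sketch of my own;
`Traced` is a made-up type and the program assumes --gc:arc with a Nim that
supports `=copy` hooks:

type
  Traced = object
    id: int

proc `=destroy`(t: var Traced) =
  if t.id != 0: echo "destroy ", t.id

proc `=copy`(dst: var Traced; src: Traced) =
  echo "copy ", src.id
  dst.id = src.id

proc main =
  var x = Traced(id: 1)
  var y = x   # `x` is not used below, so this compiles to a move: no "copy" line
  echo "y is ", y.id

main()
# prints:
#   y is 1
#   destroy 1
# Read `x` after the assignment and you get "copy 1" plus a second destroy instead.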
Ok, so about this example:
proc construct(p: int): ref O =
  var x = (ref O)(a: [1, 2])
  if haltingProblem(p):
    result = x
  else:
    result = nil

proc main =
  let x = construct(12)
When is the `ref O` object really freed? Well, it depends on the halting
problem: either it's freed at the end of `construct` or at the end of `main`.
Why is that? Because unique pointers are really a 1-bit reference counting
system. Note how even uniqueness doesn't help all that much with
"deterministic" memory management, because uniqueness means "0 or 1" and not
"always 1". However, in practice, if you do a minimal amount of testing or
reasoning about your code, the runtime profile of your code remains analysable.
That's true both for classic reference counting like C++'s shared_ptr and for
Rust's 1-bit reference counting, as well as for the various schemes in between
that optimize away more and more RC operations ("compile-time reference
counting").
So why is this "better" for "hard realtime" systems than classical tracing GC
algorithms, copying GCs or Jamaica's hard realtime GC? It's better in the sense
that it attaches a simpler _cost model_ to a program, one where some modularity
is preserved: your subsystem allocates N objects on the heap? Then the costs
are N deallocations when the subsystem is done, regardless of the other
subsystems in your program.
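
As a sketch of that cost model in code (the `Node` and `runSubsystem` names are
made up for illustration):

type
  Node = ref object
    payload: int

proc runSubsystem(n: int) =
  # the subsystem owns n heap cells
  var owned = newSeq[Node](n)
  for i in 0 ..< n:
    owned[i] = Node(payload: i)
  # ... do the actual work ...
  # scope exit under --gc:arc: exactly n deallocations, a cost that is
  # O(n) and attributable to this subsystem alone

runSubsystem(1000)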