> If you mean that objects may not actually be in use any longer even though they are reachable, manual memory management doesn't fix that, either.
That is what I meant, and yes, it doesn't "fix" it, but it mitigates the problem, simply because the programmer is more aware. There is also a much clearer error state available: you leak memory iff the number of deallocations is not equal to the number of allocations. (For this you need to ignore some allocations made at program startup and never freed by design, but that is still easy to handle.) Contrast that with the GC solution: you have to watch your program's memory usage and, at some arbitrary point, decide that it seems to have stabilized.

> Yes, but the underlying problems don't go away just because you jettison the GC; a GC for multi-threaded programs with a shared heap solves hard problems (which is why it is hard to write), and those problems don't become easier if you don't have the GC.

The problems do become easier, because a GC solves a very general problem and so must make compromises that home-grown solutions do not have to make. "Hard realtime" memory management is not hard; a "hard realtime" GC is. That is the secret of C++'s ongoing success: you create an "inferior" memory management scheme, but it's _yours_. It's not a black box, and you're guaranteed you can tweak it until it fits your problem domain. (Except for when you use `shared_ptr` everywhere...)

> Also, languages without a GC will almost invariably end up building an ad-hoc reference counting scheme for whenever an ownership-based approach is not expressive enough, getting you a subpar GC anyway. It happened with C++, D with @nogc, and with Rust.

D with @nogc is actually going in the other direction, trying to avoid the GC. Rust is harder to judge: `Arc<T>` is easier to use than its ownership system, so people end up using it in lots of places where it's not required. But maybe not; I don't know the Rust ecosystem well enough to judge, really.
