dsimcha wrote:
Agreed.  Also, whether or not GC is the default in a language deeply affects the de facto standard way of doing things.  For example, in C++, you don't have nice slice syntax in the STL because the STL is designed for use with manual memory management.

Also, from what I've seen and heard, you tend to lose a lot of the theoretical efficiency of manual memory management when you replace it with ref counting (read: a poor man's GC) or with lots of copying to make every object have a clear owner.  I translated a small but computationally intensive machine learning program from C++ to D a few months ago, and the D version was drastically faster because it avoided lots of excess copying.  Granted, the C++ version *could* have been optimized further, but in the real world, people don't have forever to optimize all this stuff away.

Modern C++ practices (as best as I can tell) make extensive use of ref-counting memory management, i.e. shared_ptr<>. This requires TWO allocations for each object, not just one: the second allocation holds the reference-count bookkeeping (the control block).
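A minimal sketch of that two-allocation pattern (Node is just an illustrative type, not anything from an actual program). std::make_shared can fold the two allocations into one where it's available, but the plain construction below is the form being described:

    #include <memory>

    struct Node { int value; };

    int main() {
        // One allocation here for the Node object itself...
        // ...and a second, separate allocation made by shared_ptr for
        // the reference-count control block.
        std::shared_ptr<Node> p(new Node{42});

        return 0;   // count drops to zero; both blocks are freed
    }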

Next, every time you copy the pointer (pass it to a function, return it, etc.) you have to increment, then later decrement, the reference count. If multithreading is enabled, each of those updates needs a mutex or atomic memory-synchronization primitives, which can be on the order of 100x slower than a regular memory access.
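A rough sketch of that copy cost (by_value and by_ref are made-up names for illustration): passing a shared_ptr by value copies it and touches the count twice per call, while passing by const reference avoids the traffic at the price of the callee not sharing ownership:

    #include <memory>

    struct Widget { int id; };

    // Pass by value: copying the shared_ptr atomically increments the
    // reference count on the way in and decrements it when 'w' is
    // destroyed -- two synchronized operations per call.
    int by_value(std::shared_ptr<Widget> w) { return w->id; }

    // Pass by const reference: no copy, so no reference-count traffic,
    // but the callee does not share ownership.
    int by_ref(const std::shared_ptr<Widget>& w) { return w->id; }

    int main() {
        std::shared_ptr<Widget> w(new Widget{7});
        by_value(w);   // count: 1 -> 2 -> 1
        by_ref(w);     // count stays at 1
        return 0;
    }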

You can escape the inc/dec overhead by converting the shared_ptr to a raw pointer, but then you lose all the safety guarantees.
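A small illustration of that trade-off, with hypothetical names (Buffer, use_raw): calling .get() hands out a raw pointer with no inc/dec cost, but nothing stops it from dangling once the last shared_ptr releases the object:

    #include <memory>

    struct Buffer { int data[16]; };

    void use_raw(Buffer* b) { b->data[0] = 1; }   // no ref-count traffic

    int main() {
        std::shared_ptr<Buffer> owner(new Buffer());

        // Escaping to a raw pointer skips the inc/dec overhead...
        Buffer* raw = owner.get();
        use_raw(raw);

        owner.reset();     // last owner gone: the Buffer is freed here
        // use_raw(raw);   // ...but 'raw' is now dangling; using it
                           //    would be undefined behavior
        return 0;
    }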
