On 18.08.2014 20:56, b wrote:
A good reason is the ability to write lock-free algorithms, which are
very hard to implement without GC support. This is the main reason why
C++11 has a GC API and Herb Sutter will be discussing GC in C++
at CppCon.

  *Some* lock-free algorithms benefit from GC, but there is still plenty you
can do without GC; just look at TBB.

Sure, but you need to be a real expert to pull them off.
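To give an idea of why reclamation is the hard part, here is a rough Treiber-stack sketch (illustration only, the names are made up). Pushing is easy; the trouble is that pop() cannot simply delete the node it unlinks, because another thread may still be about to read it. A GC makes that problem disappear; without one you need hazard pointers, epochs/RCU or something similar.

    #include <atomic>

    // Minimal Treiber-stack sketch, sequentially consistent atomics for
    // simplicity. Not production code.
    struct Node {
        int value;
        Node* next;
    };

    std::atomic<Node*> head{nullptr};

    void push(int v) {
        Node* n = new Node{v, head.load()};
        // Retry until head is swung from n->next to n.
        while (!head.compare_exchange_weak(n->next, n)) {
        }
    }

    bool pop(int& out) {
        Node* n = head.load();
        while (n && !head.compare_exchange_weak(n, n->next)) {
        }
        if (!n) return false;
        out = n->value;
        // The hard part: another thread inside its own pop() may still be
        // reading n->next, so "delete n;" here could be a use-after-free,
        // and reusing freed nodes reintroduces the ABA problem. A GC frees
        // n only once no thread can reach it; without one you need hazard
        // pointers, epochs/RCU or a similar reclamation scheme.
        // delete n;   // unsafe as written
        return true;
    }

    int main() {
        push(1);
        push(2);
        int v;
        while (pop(v)) { /* drain */ }
    }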


Reference counting is only a win over GC with compiler support for
reducing increment/decrement operations via dataflow analysis.

C++ programs with heavy use of unique_ptr/shared_ptr/weak_ptr can be
slower than programs in languages with GC support, because those classes are
plain library types without compiler support. Of course, compiler
vendors can have blessed library types, but the standard does not
require it.
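To make the missing elision concrete, a small sketch (hypothetical Widget type): every shared_ptr copy is an atomic increment plus a matching atomic decrement, and since shared_ptr is just a library type the compiler will in practice not remove the pair even when the callee never stores the pointer. The programmer has to do the elision by hand.

    #include <cstdio>
    #include <memory>

    struct Widget { int id = 1; };

    // Pass by value: copying the shared_ptr costs an atomic ref-count
    // increment on entry and an atomic decrement on exit, even though the
    // callee never shares or stores the pointer.
    long use_by_value(std::shared_ptr<Widget> w) { return w->id; }

    // Pass by const reference (or a raw pointer/reference): no ref-count
    // traffic. This is the elision an RC-aware compiler could derive from
    // dataflow analysis instead of relying on the programmer.
    long use_by_ref(const std::shared_ptr<Widget>& w) { return w->id; }

    int main() {
        auto w = std::make_shared<Widget>();
        long sum = 0;
        for (int i = 0; i < 1000000; ++i) {
            sum += use_by_value(w);  // ~two million atomic ops from ref counting
            sum += use_by_ref(w);    // none
        }
        std::printf("%ld\n", sum);
    }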

  Not really accurate. First of all, don't include unique_ptr as if it
had the same overhead as the other two; it doesn't.

Yes it does, when you do cascading destruction of large data structures.
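A sketch of the kind of case I mean (hypothetical Node type): destroying the head of a long unique_ptr-owned list runs one destructor per node, so tearing the structure down costs time proportional to its size and, since the destructors recurse, can even blow the stack. The individual pointer is free; the cascade is not.

    #include <memory>

    // Hypothetical singly linked list owned through unique_ptr.
    struct Node {
        int value = 0;
        std::unique_ptr<Node> next;
    };

    int main() {
        auto head = std::make_unique<Node>();
        Node* tail = head.get();
        for (int i = 0; i < 1000000; ++i) {
            tail->next = std::make_unique<Node>();
            tail = tail->next.get();
        }
        // Resetting (or letting "head" go out of scope) destroys the whole
        // chain: one recursive ~Node per element. Deterministic, but not
        // free, and the recursion depth here is large enough to risk a
        // stack overflow unless the list is unlinked iteratively first.
        head.reset();
    }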


  With RC you pay a price during creation/deletion/sharing, but not
while it is alive.
  With GC you pay almost no cost during allocation/deletion, but a
constant cost while it is alive. Allocate enough objects and the sum
cost isn't so small.

  Besides that, in C++ it works like this.
  90% of objects: value types, on stack or embedded into other objects
  9% of objects: unique types, use unique_ptr, no overhead
  ~1% of objects: shared, use shared_ptr/weak_ptr etc.

It is more than 1%, I would say, because in many cases where you have a unique_ptr you might need a shared_ptr instead, or go unsafe and hand out direct access to the underlying pointer.

For example, parameters and temporaries: during the call you can be sure no one else is using the pointer, but once the destructor runs the data is gone.
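Roughly like this (hypothetical names): an observer can take a raw pointer obtained from get(), which is exactly the unsafe escape hatch, while anything that may keep the object alive past the call forces the whole ownership scheme up to shared_ptr.

    #include <memory>
    #include <vector>

    struct Config { int verbosity = 0; };

    // Observer only: a non-owning raw pointer. Fine as long as the caller's
    // unique_ptr outlives the call; nothing checks it if it doesn't.
    void apply(const Config* cfg) {
        (void)cfg;  // ... read *cfg, take no ownership ...
    }

    // Retains the object beyond the call, so ownership must be shared.
    std::vector<std::shared_ptr<const Config>> registry;
    void remember(std::shared_ptr<const Config> cfg) {
        registry.push_back(std::move(cfg));
    }

    int main() {
        auto cfg = std::make_unique<Config>();
        apply(cfg.get());    // no retention, a raw pointer is enough
        auto shared = std::shared_ptr<Config>(std::move(cfg));
        remember(shared);    // retention forces the upgrade to shared_ptr
        apply(shared.get());
    }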


  With GC you give up deterministic behavior, which is *absolutely* not
worth giving up for 1% of objects.

Being a GC-enabled systems programming language does not forbid also offering deterministic memory management, for the use cases that really need it.



  I think most people simply haven't worked in an environment that
supports unique/linear types. So everyone assumes that you need a GC.
Rust is showing that this is nonsense, as C++ has already done for
people using C++11.


I know C++ pretty well (I have been using it since 1993) and like it a lot, but I also think we can do better than it.

Especially since I was lucky enough to get to know GC-enabled systems programming languages like Modula-3 and Oberon(-2). The Oberon OS had quite
a few nice concepts that go back to Mesa/Cedar at Xerox PARC.

Rust is also showing how complex a type system needs to be to handle all memory management cases. Not sure how many developers will jump into it.

For example, currently you can only concatenate strings if they are both heap allocated.

Some issues with operations that mix lifetimes are still being
sorted out.


--
Paulo
