On Mon, Jul 15, 2013 at 3:09 PM, David Jeske <[email protected]> wrote:
> In my idealistic heart, I want the mythical Apache 2.0 licensed
> cross-platform CLR + Azul/Zing no-pause GC to be the solution that ends
> C-development. However, my engineer mind knows that even in that fantastic
> system, which I do think would be a much more capable C/C++ competitor
> today, GC tracing work is proportional to pointer-count and
> program-duration. There are certain programs for which that model can not
> equal C performance. And then there is the fact that the mythical system
> does not exist.

That just isn't clear. The problem is that that tracing work is roughly
identical to the alternative allocation and deallocation work in C/C++.

One thing I don't remember from the C4 papers is what overall percentage of
CPU time is used by concurrent collection. I know they have a lot of ability
to schedule the work in otherwise idle CPU cycles. But the battery piper
eventually needs to be paid, so it would be interesting to know the overhead,
both in CPU cycles and in cache misses. It would be even better to have an
apples-to-apples comparison with manual approaches, but the differences in
programming idioms are too great for that to be realistically possible.
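For what it's worth, here is a back-of-envelope sketch in C of the two cost
models being argued about: manual management whose work scales with the
number of alloc/free pairs, versus tracing whose work scales with live
pointer count times the number of collection cycles. Every number in it is a
made-up assumption for illustration, not a measurement from C4 or any real
collector.

    /*
     * Back-of-envelope cost model, purely illustrative.
     * All constants below are assumptions chosen for the example,
     * not measurements from C4 or any real collector.
     */
    #include <stdio.h>

    int main(void) {
        /* Assumed workload parameters (hypothetical). */
        double allocations   = 1e9;  /* objects allocated over the program run */
        double live_pointers = 1e7;  /* pointers reachable at any given time   */
        double gc_cycles     = 500;  /* tracing cycles over the program run    */

        /* Assumed per-operation costs in arbitrary "work units". */
        double cost_malloc_free   = 1.0;  /* one explicit alloc + free pair     */
        double cost_trace_pointer = 0.2;  /* visiting one pointer during a trace */

        /* Manual management: work scales with allocation count. */
        double manual_work = allocations * cost_malloc_free;

        /* Tracing GC: work scales with live pointers times the number of
         * cycles, i.e. roughly pointer-count times program-duration. */
        double gc_work = live_pointers * gc_cycles * cost_trace_pointer;

        printf("manual work units:  %.3g\n", manual_work);
        printf("tracing work units: %.3g\n", gc_work);
        return 0;
    }

Which side comes out ahead obviously depends entirely on the assumed ratios,
which is exactly why real CPU-time and cache-miss numbers for the concurrent
collector would be so interesting.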
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
