On Mon, Jul 15, 2013 at 7:11 PM, David Jeske <[email protected]> wrote:
> On Mon, Jul 15, 2013 at 7:03 PM, Jon Zeppieri <[email protected]> wrote:
>
>> On Mon, Jul 15, 2013 at 9:39 PM, David Jeske <[email protected]> wrote:
>> > As dataset size grows, the GC version consumes increasing amounts of CPU and
>> > memory bandwidth walking the huge increasing dataset size trying to find the
>> > discards.
>>
>> I may be missing your argument, but garbage collectors don't look at
>> the entire heap. They only look at live data in the heap.
>
> See point #3... live-data size approaches infinity.
>
> The point is, GC looks at the entire heap to detect discards. How much
> does ARC/manual-C look at? None of it.

David has contrived a pathological case. He doesn't specify a bunch of
relevant things, like the ratio of discarded data to live data (which, if
low enough, would justify turning GC off entirely). He's also using the
wrong metric of performance. With some refinement, he can certainly
generate a non-representative example in which manual storage management
*might* out-perform a non-concurrent GC. But even there, the operative
word is *might*. The overhead of deallocation in the scenario he describes
is staggeringly higher than David seems to believe.

Jonathan
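
To make the deallocation cost Jonathan is pointing at concrete, here is a
minimal C sketch (illustrative only, not code from the thread): manually
releasing a large linked structure means calling free() on, and pointer-
chasing through, every discarded node, so the discards get walked either
way. A tracing collector walks the live set instead, and which walk is
cheaper depends on the discard-to-live ratio that goes unspecified above.

    /* Illustrative sketch, not from the thread: the "dataset" is a plain
     * singly linked list, and manual deallocation is one free() call --
     * plus one pointer chase and its cache miss -- per discarded node.
     * That per-discard cost does not vanish just because there is no GC. */
    #include <stdlib.h>

    struct node {
        struct node *next;
        char payload[48];      /* filler so each node is roughly cache-line sized */
    };

    /* Build a list of n nodes: the dataset being discarded. */
    static struct node *build_list(size_t n)
    {
        struct node *head = NULL;
        for (size_t i = 0; i < n; i++) {
            struct node *p = malloc(sizeof *p);
            if (!p) abort();
            p->next = head;
            head = p;
        }
        return head;
    }

    /* Manual deallocation: walks all of the discards, touching each one. */
    static void free_list(struct node *head)
    {
        while (head) {
            struct node *next = head->next;
            free(head);
            head = next;
        }
    }

    int main(void)
    {
        struct node *data = build_list(10 * 1000 * 1000);
        free_list(data);       /* 10M free() calls and 10M pointer chases */
        return 0;
    }

Any C compiler will build this; timing free_list against a tracing
collector scanning an equally large live set would be the apples-to-apples
comparison the contrived example skips.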
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
