On Saturday, 7 October 2017 at 03:15:41 UTC, Laeeth Isharc wrote:
> On Saturday, 7 October 2017 at 01:00:41 UTC, Jon Degenhardt wrote:
>> Have there been studies quantifying the performance of D's GC relative to other GC implementations? My anecdotal experience is that D's GC can have undesirable latency behavior (long pauses), but throughput appears good. Of course, quantified metrics would be far preferable to anecdotal observations.
>> --Jon
>
> Have you tried running the GC instrumentation on your tsv utilities? That might make for a very interesting blog post.
Well, I have for the tsv utilities and some other programs.
That's what's behind my observations. While interesting, I don't
think I have enough definitive data to draw conclusions for a
blog post. Two specifics:
(1) GC profile data shows long max pause times in several benchmarks. However, where they occur, the long pauses are clearly associated with very large AAs, which may not be representative of more common use cases. (There is more quantification I could do here, though.)
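As an aside, for anyone who wants to reproduce this kind of measurement from within a program: druntime (2.087 and later) exposes pause and collection times via `core.memory.GC.profileStats`. A minimal sketch, with an arbitrary AA size rather than anything from my benchmarks:

```d
// Minimal sketch: read GC pause statistics after building a large AA.
// Requires DMD/druntime 2.087 or later for GC.profileStats.
import core.memory : GC;
import std.conv : to;
import std.stdio : writeln;

void main()
{
    // A large associative array -- the kind of structure associated
    // with long max pause times in the benchmarks discussed above.
    int[string] counts;
    foreach (i; 0 .. 1_000_000)
        counts[i.to!string] = i;

    auto stats = GC.profileStats();
    writeln("collections:   ", stats.numCollections);
    writeln("max pause:     ", stats.maxPauseTime);
    writeln("total GC time: ", stats.totalCollectionTime);
}
```

`maxPauseTime` is the number to watch for the latency behavior described above; `totalCollectionTime` speaks more to throughput.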
(2) The benchmarks I've run are all throughput-oriented tasks. On these, the D programs have compared well to other natively compiled programs, most of which use manual memory management. I think this suggests that choosing good algorithms and memory-use strategies is usually more important than the choice between GC and manual memory management. It is also consistent with a good throughput story for D's GC, though it is hardly a direct comparison.
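For reference, the instrumentation itself needs no code changes: any program linked against druntime accepts the runtime option below and prints a GC summary (collection count, pause times, heap use) at exit. The program and input names here are placeholders, not my actual tools:

```shell
# Enable druntime's built-in GC profile for a single run; a summary is
# printed when the program exits. Program and input names are
# hypothetical placeholders.
./my-tsv-tool --DRT-gcopt=profile:1 < input.tsv > /dev/null
```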