The code

import std.stdio;

void main(string[] args)
{
    import std.datetime.stopwatch : benchmark;
    import core.time : Duration;
    import core.memory : GC;

    immutable benchmarkCount = 1;

    foreach (const i; 0 .. 10)
    {
        const byteCount = i * 100_000_000; // 0, 100 MB, 200 MB, ... 900 MB
        const array = new byte[byteCount];
        const Duration[1] results = benchmark!(GC.collect)(benchmarkCount);
        writefln("%s bytes: Calling GC.collect() took %s nsecs after %s",
                 byteCount,
                 cast(double) results[0].total!"nsecs" / benchmarkCount,
                 array.ptr);
    }
}

prints the following when compiled in release mode with LDC:

0 bytes: Calling GC.collect() took 600 nsecs after null
100000000 bytes: Calling GC.collect() took 83000 nsecs after 7F785ED44010
200000000 bytes: Calling GC.collect() took 114600 nsecs after 7F784CF29010
300000000 bytes: Calling GC.collect() took 277600 nsecs after 7F7832201010
400000000 bytes: Calling GC.collect() took 400400 nsecs after 7F780E5CC010
500000000 bytes: Calling GC.collect() took 449700 nsecs after 7F77E1A8A010
600000000 bytes: Calling GC.collect() took 481200 nsecs after 7F780E5CC010
700000000 bytes: Calling GC.collect() took 529800 nsecs after 7F77E1A8A010
800000000 bytes: Calling GC.collect() took 547600 nsecs after 7F779A16E010
900000000 bytes: Calling GC.collect() took 925500 nsecs after 7F7749891010

Why is the overhead so large for a single allocation of a byte array whose elements contain no indirections (and which the GC therefore doesn't need to scan for pointers)?
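One way to test whether pointer scanning is the culprit is to check the block attributes the GC actually assigned to the allocation, and to compare against a block explicitly allocated with `BlkAttr.NO_SCAN`. This is a minimal sketch using the `core.memory.GC` API (`getAttr`, `malloc`, `BlkAttr`); if `NO_SCAN` is already set on the `new byte[]` block, the growing collect time would come from sweeping/paging rather than scanning:

```d
import std.stdio : writefln;
import core.memory : GC;
import std.datetime.stopwatch : benchmark;

void main()
{
    const byteCount = 500_000_000;

    // Block allocated via the type system: byte has no indirections,
    // so the runtime should already mark it NO_SCAN.
    const array = new byte[byteCount];
    writefln("new byte[]: NO_SCAN set: %s",
             (GC.getAttr(cast(void*) array.ptr) & GC.BlkAttr.NO_SCAN) != 0);

    // Block where NO_SCAN is forced explicitly, for comparison.
    auto raw = GC.malloc(byteCount, GC.BlkAttr.NO_SCAN);
    const results = benchmark!(GC.collect)(1);
    writefln("GC.collect() took %s nsecs with both blocks live",
             results[0].total!"nsecs");
    GC.free(raw);
}
```

If both variants show similar collect times, the overhead is not scanning but per-collection bookkeeping over the (large) allocated pool.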
