On Mon, 15 Jun 2015 03:20:47 -0400, Shachar Shemesh <[email protected]> wrote:

On 14/06/15 20:01, bitwise wrote:
On Sun, 14 Jun 2015 12:52:47 -0400, ketmar <[email protected]> wrote:

so it's by design.

Ok, makes sense ;)

   Bit

Well, sort of.

It makes sense until you try to compile a program that needs more memory than your computer has. Then, all of a sudden, it completely and utterly stops making sense.

Hint: when you need to swap out over 2GB of memory (with 16GB of physical RAM installed), this strategy completely and utterly stops making sense.

Shachar
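
For context, the design being defended here is the classic never-free allocation scheme: the compiler mostly just bumps a pointer into a big chunk, hands out memory, and lets the OS reclaim everything when the process exits. A minimal sketch of that pattern in C (not dmd's actual allocator; the chunk size and alignment below are assumptions):

    /* Never-free "bump the pointer" allocator: fast allocation, no free(). */
    #include <stdlib.h>
    #include <stddef.h>

    #define CHUNK_SIZE (1u << 20)   /* carve memory in 1 MiB chunks (assumed) */

    static char  *chunk;            /* current chunk being carved up */
    static size_t used;             /* bytes already handed out from it */

    void *bump_alloc(size_t n)
    {
        n = (n + 15) & ~(size_t)15;             /* keep 16-byte alignment */
        if (chunk == NULL || used + n > CHUNK_SIZE) {
            /* abandon the old chunk without freeing it -- that's the point */
            chunk = malloc(n > CHUNK_SIZE ? n : CHUNK_SIZE);
            if (chunk == NULL)
                return NULL;
            used = 0;
        }
        void *p = chunk + used;
        used += n;
        return p;                   /* there is deliberately no bump_free() */
    }

Allocation is a pointer bump and "freeing" costs nothing, which is why this wins on ordinary compiles; the price is a footprint that only ever grows, which is exactly the swapping failure mode described above.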


I just had a thought as well. On Linux/OS X/etc., dmd uses fork() and then calls gcc to do the linking.

When memory is never cleaned up, can't that make fork() really slow?
Doesn't fork() copy the entire process's memory?
And don't some benchmarks measure the total time, including the compiler invocation?

  Bit
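
On the fork() question: on Linux (and OS X), fork() does not eagerly copy the parent's memory. The child shares the parent's pages copy-on-write, so the cost scales with the size of the page tables rather than with the gigabytes of never-freed allocations, and the child exec()s the linker almost immediately, before writes would force any copies. A sketch of that fork/exec pattern (the gcc arguments are placeholders, not dmd's actual link command):

    /* fork() + exec(), as a compiler might use to invoke the linker. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int run_linker(char *const argv[])
    {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return -1;
        }
        if (pid == 0) {              /* child: replace image with the linker */
            execvp(argv[0], argv);
            perror("execvp");        /* only reached if exec failed */
            _exit(127);
        }
        int status;                  /* parent: wait for the link step */
        if (waitpid(pid, &status, 0) < 0)
            return -1;
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }

    int main(void)
    {
        /* placeholder command line, not what dmd actually passes */
        char *const argv[] = {"gcc", "app.o", "-o", "app", NULL};
        return run_linker(argv) == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }

So the fork itself should stay cheap even with a huge live heap; what a benchmark timing the whole compiler invocation would actually feel is the swapping, not the fork.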
