On Sunday, 5 April 2015 at 00:22:35 UTC, Atila Neves wrote:
It seems to me that different projects might benefit from different compilation strategies. It might just be a case of unit tests alongside production code vs. in separate files. As mentioned before, in my experience per-module compilation was usually faster, but I'm going to change the default to be per package.

I also want to share my experience in that regard.

When I was writing a vibe.d based application, I used dub as the build system, which passes all source files to the compiler in one go. My application was just a couple of files, so in practice I was rebuilding vibe.d itself every time.
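If I understand dub's default behavior correctly, such a build boils down to a single compiler invocation along these lines (paths invented for illustration):

    # everything, vibe.d included, goes through one dmd process
    dmd source/app.d path/to/vibe.d/source/vibe/*.d ... -ofmyapp

so the compiler has to hold the entire program in memory at once.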

I was developing the application on a desktop with 4 GB of RAM and everything was fine (although I missed the per-file progress output that ninja/make provide).

But then it was time to deploy the app, and I bought a 1 GB RAM virtual node from Linode. When I ran dub there, it printed "Out of memory" and exited. And there was nothing I could do about it.

So I took the only option I saw: I switched to CMake (modified to work with D) to get a separate-compilation, ninja-based build, and swore "never again".
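For comparison, separate compilation is just a series of per-module invocations plus a link step, roughly like this (paths again invented):

    # one object file per module; ninja recompiles only what changed
    dmd -c source/app.d -ofbuild/app.o -Ipath/to/vibe.d/source
    dmd -c path/to/vibe.d/source/vibe/http/server.d -ofbuild/http_server.o -Ipath/to/vibe.d/source
    ...
    # link everything at the end
    dmd build/*.o -ofmyapp

Each dmd process only needs memory for one module, which is what kept the build under the 1 GB limit.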

I understand the reasoning behind both the separate and the "throw everything in" compilation strategies. And I also understand the pros of a middle-ground solution (like a per-package one), which is probably the way D will go. But this area seems rather gray to me (in my case, if I understand it correctly, the per-package approach wouldn't have worked either).
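As I understand the per-package idea, it would group the invocations like this (again, hypothetical paths):

    # all modules of a package compiled together into one object file
    dmd -c path/to/vibe.d/source/vibe/http/*.d -ofbuild/vibe_http.o
    dmd -c path/to/vibe.d/source/vibe/core/*.d -ofbuild/vibe_core.o
    dmd build/*.o -ofmyapp

so peak memory is bounded by the largest package instead of the whole program, which could still be too much on a 1 GB node.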

So, personally, I will probably stick to separate compilation until I see that:

- The pros of "batch" compilation are clear and, ideally, obvious. At the moment it seems to me (and it only seems) that faster compilation and attribute inference just don't have a significant impact.
- There's a way to fine-tune between "separate" and "throw everything in" compilation when necessary.
