Thanks to all for the answers.

The package direction is precisely what I am trying to avoid. It is still not obvious to me how much work (how many trials) would be needed to decide on granularity, nor how much work it would take to automate the decision of whether or not to recompile a package. And finally, when a given package needs to be recompiled because only one or a few files changed, one would most likely WAIT (much) longer than with the current solution - and within a single process.

For the initial compilation, a quick try at the -c solution worked with ldmd2 (ldc2) on parts of the application. When I then fed it all 226 files, the compilation process ended with a segmentation fault - no idea why. Direct compilation with -i main.d works.
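For reference, here is a minimal sketch of what I mean by the -c approach: each module compiled to its own object file, then one final link step. This is only an illustration - the src/ and obj/ directories, the app output name, and the bare ldmd2 invocations are placeholders, not my actual setup:

    // separate_build.d - sketch of separate compilation: one "ldmd2 -c"
    // invocation per module, then a single link step. Paths are illustrative.
    import std.algorithm : map;
    import std.array : array;
    import std.file : dirEntries, mkdirRecurse, SpanMode;
    import std.path : baseName, buildPath, setExtension;
    import std.process : spawnProcess, wait;

    void main()
    {
        mkdirRecurse("obj");
        auto sources = dirEntries("src", "*.d", SpanMode.depth)
                           .map!(e => e.name).array;

        // Compile each module on its own, without linking (-c).
        string[] objects;
        foreach (src; sources)
        {
            auto obj = buildPath("obj", src.baseName.setExtension("o"));
            wait(spawnProcess(["ldmd2", "-c", "-of=" ~ obj, src]));
            objects ~= obj;
        }

        // Link all object files into the final executable.
        wait(spawnProcess(["ldmd2", "-of=app"] ~ objects));
    }

(Something along these lines can be run directly with rdmd, for instance.)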

I was not aware of the options for Dub, many thanks!

Overall I am happy with any solution, even one with an upfront cost at the first compilation, as long as it makes testing an idea FAST later on - and that can probably work better using all available cores.

Now about this:

On Wednesday, 8 January 2020 at 13:14:38 UTC, kinke wrote:
> On Wednesday, 8 January 2020 at 04:40:02 UTC, Guillaume Lathoud wrote:
>> I wonder if some heuristic roughly along these lines - when enough source files and enough cores, do parallel and/or re-use - could be integrated into the compilers, at least in the form of an option.
>
> I think that's something to be handled by a higher-level build system, not the compiler itself.

Fine... I don't want to debate where exactly this should live. Simply put: having a one-liner solution (no install, no config file) delivered along with the compiler, or as a compiler option, could fill a sweet spot between a toy app (1 or 2 source files) and a more complex architecture relying on a package manager. This might remove a few obstacles to D usage. This is of course purely an opinion.
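To make that concrete, below is a hedged sketch (not an existing tool - the src/, obj/ and app names and the threshold are made up) of the kind of helper I have in mind: re-use object files whose source has not changed, and compile the stale ones in parallel when there are enough of them and enough cores. It deliberately ignores cross-module dependencies, so it is only an illustration of the heuristic, not a correct incremental build.

    // parallel_rebuild.d - sketch of the heuristic: recompile only modules whose
    // source is newer than their object file, in parallel when there is enough
    // work and enough cores, then link. Names and threshold are illustrative.
    import std.algorithm : filter, map;
    import std.array : array;
    import std.file : dirEntries, exists, mkdirRecurse, timeLastModified, SpanMode;
    import std.parallelism : parallel, totalCPUs;
    import std.path : baseName, buildPath, setExtension;
    import std.process : spawnProcess, wait;

    // Object file corresponding to a given source file (flat obj/ directory).
    string objFor(string src) { return buildPath("obj", src.baseName.setExtension("o")); }

    void main()
    {
        mkdirRecurse("obj");
        auto sources = dirEntries("src", "*.d", SpanMode.depth)
                           .map!(e => e.name).array;

        // Re-use: skip modules whose object file is already up to date.
        auto stale = sources.filter!(s => !objFor(s).exists
                         || timeLastModified(s) > timeLastModified(objFor(s))).array;

        // Parallel: only bother when there are enough stale files and cores.
        if (stale.length > 4 && totalCPUs > 1)
            foreach (src; parallel(stale))
                wait(spawnProcess(["ldmd2", "-c", "-of=" ~ objFor(src), src]));
        else
            foreach (src; stale)
                wait(spawnProcess(["ldmd2", "-c", "-of=" ~ objFor(src), src]));

        // Link everything; untouched object files are simply re-used.
        wait(spawnProcess(["ldmd2", "-of=app"] ~ sources.map!(objFor).array));
    }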
