>> I bet against our package set being buildable in 2 hours — because of
>> time-critical path likely hitting some non-parallelizable package.
>
> I think most large projects can be compiled via distcc, which means that
> all you need is parallel make.
WebKitGTK… (there is a comment about a failure to make it work with a
parallel build.)

>> Libreoffice build is inherently a single-machine task, so to speed it
>> up you need something like two octocore CPUs in the box.
>
> Case in point:
> https://wiki.documentfoundation.org/Development/BuildingOnLinux#distcc_.2F_Icecream
> Building with "icecream" defaults to 10 parallel builds.
>
> Also, with ccache the original build time of 1.5 hours (no java/epm) is
> reduced to 10 minutes on subsequent runs.

How would the ccache cache be managed for that? How would it work with
rented instances being network-distant from each other?

>> With such a goal, we would need to recheck all the dependency paths
>> and optimise the bottlenecks.
>
> Sounds good :)

We have too little manpower for timely processing of pull requests. I
think that starting a huge project should be done with full knowledge
that it can fail just because it needs too much energy.

>> Maybe making dependency replacement work reliably (symlinking into
>> a special directory and referring to this directory?) is more feasible…
>
> Can you elaborate?

One of the brute-force ways is just to declare «we need reliable global
dependency rewriting». In this case we could have a symlink for every
package ever used as a dependency, so a replacement would mean
destructively changing that one symlink. I.e. you depend on
/nix/store/aaa-bash, there is a symlink /nix/store/aab-bash pointing to
/nix/store/aaa-bash, and the builder sees only /nix/store/aab-bash.

_______________________________________________
nix-dev mailing list
[email protected]
http://lists.science.uu.nl/mailman/listinfo/nix-dev
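P.S. A minimal sketch of the indirection idea above. All paths and names
here are made up for illustration (a scratch directory stands in for the
real, read-only /nix/store), and this only demonstrates the symlink
mechanics, not the Nix machinery around it:

```shell
# Throwaway directory standing in for /nix/store; paths are hypothetical.
store=/tmp/demo-store
rm -rf "$store" && mkdir -p "$store/aaa-bash/bin" "$store/ccc-bash/bin"

# Stable indirection link: builders refer only to aab-bash.
ln -s "$store/aaa-bash" "$store/aab-bash"
readlink "$store/aab-bash"    # -> /tmp/demo-store/aaa-bash

# Replacing the dependency = destructively repointing one symlink
# (-n repoints the link itself instead of descending into the target);
# nothing that refers to aab-bash has to be rebuilt or rewritten.
ln -sfn "$store/ccc-bash" "$store/aab-bash"
readlink "$store/aab-bash"    # -> /tmp/demo-store/ccc-bash
```

The point being that every dependent keeps a stable reference while the
target is swapped underneath it, at the cost of giving up immutability
of the store.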
