Fernando de Oliveira wrote:

What I understand is that building one package with n simultaneous jobs
means building n different parts of the package's code in parallel. That
feels equivalent to building n packages in parallel, each using one job.
In both cases much care is necessary, whether creating a Makefile or
selecting the packages, because there is always the risk that one job
needs the result of another.

I think you have the concept right. The tricky part of creating a Makefile is to ensure that all dependencies are specified. Internally, make builds a dependency tree. If multiple tasks are ready, i.e. all of their dependencies are satisfied, make can launch a new task without waiting for the previous one to complete.
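
To make that concrete, here is a minimal sketch (the file names are invented) of a Makefile with its dependencies fully specified. Under 'make -j2', foo.o and bar.o can compile at the same time, but the link step waits for both:

# prog needs both objects; make may build them in parallel,
# but will not run the link until both are up to date.
prog: foo.o bar.o
	cc -o prog foo.o bar.o

foo.o: foo.c
	cc -c foo.c

bar.o: bar.c
	cc -c bar.c

The classic parallel-build bug is the reverse: if prog listed only foo.o as a prerequisite but the link command still referenced bar.o, a serial build might succeed by luck while 'make -j' fails intermittently.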

I once ran 'make -j' on a package, perhaps gcc, but I don't remember for sure. I looked at top and it reported a load average of 170 or so. Needless to say, the system bogged down, primarily due to swapping.
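
For the record, a bare 'make -j' puts no limit at all on the number of jobs. GNU make can cap the job count, or hold off while the machine is busy, with its standard -j and -l options (the numbers below are just examples):

make -j4          # at most 4 jobs at once
make -j8 -l 6     # up to 8 jobs, but start no new one while the
                  # load average is at least 6

That keeps the dependency-driven parallelism without letting the load run away.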

I also have a fair amount of experience writing parallel code for a supercomputer. It used MPI for message passing. My programs used up to 1024 cores, typically grouped 16 to a motherboard. Whether a core was on the same motherboard or not was rarely a consideration in my application. There are also versions of make that distribute a build across multiple systems for truly parallel builds; to the batch scheduler, such a distributed build is still considered one job.

Just to complete the discussion, I'll note that in a supercomputer environment there are two times to consider: queue time and execution time. Often the queue time (hours) exceeded the execution time, so a multi-system parallel make is really only effective when you have dedicated resources and can eliminate the queue time.

  -- Bruce

