On 20 Feb 2004, john moser <[EMAIL PROTECTED]> wrote:
> try reading it again, from the beginning.
>
> Let me try it like this.
>
> DISTRIBUTED COMPUTING NETWORK: 3700 NODES
> NODE X: 1.5 GHz PROCESSOR
> TIME TO PROCESS 5 PARALLEL PREPROCESSINGS: 45 SECONDS
> NODES USED: 5
> TIME TO PROCESS 10 PARALLEL PREPROCESSINGS: 130 SECONDS
> NODES USED: 10
> TIME TO PROCESS 3700 PARALLEL PREPROCESSINGS: 49 DAYS, 6 HOURS, 15 MINUTES, 27 SECONDS
> NODES USED: about 1-2 at a time, as the preprocessings slowly finish on those last 3 days and get sent out at random times.
>
> Sometimes you can NOT do as many preprocesses in parallel as you have nodes. To MAXIMIZE efficiency, you need to specify -j$NUMBEROFNODES and LOCK the number of parallel preprocessing operations to a lower number. Then, WHILE one node is compiling a complex source file, you can preprocess AND send out another job, possibly BEFORE that one finishes.
>
> Simple enough? The idea is to get the job OFF the box ASAP so it can come back FINISHED ASAP.
Just to be clear about why this is particularly silly: it is true that your machine will not be able to get 3700 preprocessing jobs off in a reasonable amount of time. However, it will not be able to run any 3700 jobs in parallel either; there is no other part of the makefile where 3700 simultaneous jobs would make sense. In fact, make probably would not manage to issue that many jobs even if it tried. In general it is good to use about 2*NCPUS as the -j limit.

Do you actually have 3700 nodes, anyhow?

> Now, THINK this time, before you incur my wrath again.

Please, trolls are not welcome here.

--
Martin
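A minimal sketch of the 2*NCPUS rule of thumb, assuming a hypothetical cluster (the host list and CPU count below are made up for illustration; DISTCC_HOSTS and CC=distcc are the standard distcc usage, but adjust for your own setup):

```shell
#!/bin/sh
# Hypothetical hosts: 8 remote CPUs total across the volunteers.
DISTCC_HOSTS="host1 host2 host3 host4 host5 host6 host7 host8"
export DISTCC_HOSTS

# Count the CPUs (here: one per host entry).
NCPUS=$(echo "$DISTCC_HOSTS" | wc -w)

# -j of roughly 2x the CPU count keeps the local box busy
# preprocessing and shipping out the next job while earlier
# jobs are still compiling remotely.
JOBS=$((2 * NCPUS))

echo "make -j$JOBS CC=distcc"
```

Running this prints `make -j16 CC=distcc`; the point is that the -j value scales with the CPUs you actually have, not with the number of translation units in the build.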
__
distcc mailing list            http://distcc.samba.org/
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/distcc
