Oops, sorry, it is the Intel MPI library. Thanks!

On Fri, Mar 13, 2009 at 9:47 PM, Ralph Castain <r...@lanl.gov> wrote:
> Hmmm...your comments don't sound like anything relating to Open MPI. Are you
> sure you are not using some other MPI?
>
> Our mpiexec isn't a script, for example, nor do we have anything named
> I_MPI_PIN_PROCESSOR_LIST in our code.
>
> :-)
>
> On Mar 13, 2009, at 4:00 AM, Peter Teoh wrote:
>
>> I saw the following problem posed somewhere - can anyone shed some
>> light? Thanks.
>>
>> I have a cluster of 8-socket quad-core systems running Red Hat 5.2. It
>> seems that whenever I try to run multiple MPI jobs on a single node,
>> all the jobs end up running on the same processors. For example, if I
>> submit four 8-way jobs to a single box, they all end up on CPUs 0
>> to 7, leaving 8 to 31 idle.
>>
>> I then tried all sorts of I_MPI_PIN_PROCESSOR_LIST combinations, but
>> short of explicitly listing out the processors at each run, they all
>> still end up stuck on CPUs 0-7. Browsing through the mpiexec
>> script, I realise that it is doing a taskset on each run.
>> As my jobs are all submitted through a scheduler (PBS in this case), I
>> cannot know at job submission time which CPUs are in use.
>> So is there a simple way to tell mpiexec to set the taskset affinity
>> correctly at each run so that it chooses only the idle processors?
>> Thanks.
>>
>> --
>> Regards,
>> Peter Teoh
--
Regards,
Peter Teoh
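[Editor's note] The quoted question asks how to pick only idle CPUs when launching a new MPI job on a node that already hosts pinned jobs. Below is a rough, hypothetical sketch of one way to do that from the job script: scan /proc for processes that have been pinned to a strict subset of the node's CPUs, treat those CPUs as taken, and pass the remaining ones to mpiexec via Intel MPI's I_MPI_PIN_PROCESSOR_LIST. It assumes Linux, that earlier jobs were actually pinned, and that the installed mpiexec honours that variable; it is not taken from the thread and is only an illustration.

```python
#!/usr/bin/env python3
"""Sketch: choose idle CPUs for a new MPI job on a shared node.

Assumptions (not from the original thread): Linux /proc is available,
earlier jobs were launched with an explicit CPU affinity mask, and the
local mpiexec honours Intel MPI's I_MPI_PIN_PROCESSOR_LIST variable.
"""
import os
import subprocess
import sys


def parse_cpu_list(text):
    """Expand a string like '0-3,8' into the set {0, 1, 2, 3, 8}."""
    cpus = set()
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        elif part:
            cpus.add(int(part))
    return cpus


def pinned_cpus(all_cpus):
    """Return CPUs claimed by processes pinned to a strict subset of the node."""
    taken = set()
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("Cpus_allowed_list:"):
                        mask = parse_cpu_list(line.split(":", 1)[1])
                        if mask != all_cpus:  # explicitly pinned process
                            taken |= mask
        except OSError:
            continue  # process exited while we were scanning
    return taken


if __name__ == "__main__":
    nprocs = int(sys.argv[1])      # number of ranks for this job, e.g. 8
    app = sys.argv[2:]             # the MPI program and its arguments
    all_cpus = set(range(os.cpu_count()))
    free = sorted(all_cpus - pinned_cpus(all_cpus))[:nprocs]
    if len(free) < nprocs:
        sys.exit("not enough idle CPUs on this node")
    env = dict(os.environ, I_MPI_PIN_PROCESSOR_LIST=",".join(map(str, free)))
    subprocess.run(["mpiexec", "-n", str(nprocs)] + app, env=env, check=True)
```

Invoked from a PBS job script as, say, `python pick_cpus.py 8 ./a.out`, this would launch an 8-rank job on eight CPUs not already claimed by pinned jobs. The file name, the heuristic of equating "pinned" with "busy", and the choice of the first free CPUs are all illustrative; a scheduler-integrated solution (e.g. cpuset support in PBS) would be more robust.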