Re: [OMPI users] OpenMPI binding all tasks to cpu0, leaving cpu1 idle. (2-cpu system)
Hi,

Miguel Figueiredo Mascarenhas Sousa Filipe wrote:
> Hi there,
> I have a 2-cpu system (linux/x86-64), running openmpi-1.1. I do not
> specify a hostfile. Lately I'm having performance problems when running
> my mpi-app this way:
>
>   mpiexec -n 2 ./mpi-app config.ini
>
> Both mpi-app processes are running on cpu0, leaving cpu1 idle. After
> reading the mpirun manpage, it seems that openmpi binds tasks to cpus
> in a round-robin way, meaning that this should not happen. But given my
> problem, I assume that it's not detecting that this is a 2-way SMP
> system (it assumes a UP system) and is binding both tasks to cpu0.
> Is this correct?

By default I do not think Open MPI does any process affinity (although I
could be wrong). See these FAQ entries for information on process affinity:

http://www.open-mpi.org/faq/?category=tuning#paffinity-defs
http://www.open-mpi.org/faq/?category=tuning#using-paffinity

> The openmpi-default-hostfile says I should not specify localhost in
> there, and should let the job dispatcher/rca "detect" the single-node
> setup. Where should I define/configure, system-wide, that this is a
> single-node, 2-slot system? I would like to avoid obliging the system's
> users to pass a hostfile to mpirun/mpiexec. I simply want
>
>   mpiexec -n N ./mpi-task
>
> to do the proper job of _really_ spreading the processes evenly between
> all the system's CPUs.
>
> Best regards, waiting for your answer.

You could put localhost and specify the number of slots in the default
hostfile, or just pass a hostfile containing localhost to mpirun. By
default, Open MPI will run on the localhost assuming 1 slot if it does not
detect a resource manager and is not passed a hostfile.

> ps.: should I upgrade to the latest OpenMPI to have my problem
> "automagically" solved?

I would definitely update to a newer version. The 1.1 series has many
problems.

Hope this helps,
Tim
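For example (a sketch only; the exact path and the slot count are
assumptions that depend on your installation), the system-wide default
hostfile, typically <prefix>/etc/openmpi-default-hostfile, could declare a
single 2-slot node like this:

  # assumed contents of openmpi-default-hostfile for a single-node,
  # 2-slot machine -- adjust to your setup
  localhost slots=2

With that in place, "mpiexec -np 2 ./mpi-app config.ini" should launch one
process per slot without any per-user hostfile. If the processes should
also be pinned to separate CPUs, the paffinity FAQ entries linked above
describe the mpi_paffinity_alone MCA parameter, which can be enabled on
the command line, e.g.:

  mpiexec --mca mpi_paffinity_alone 1 -np 2 ./mpi-app config.ini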
Re: [OMPI users] OpenMPI binding all tasks to cpu0, leaving cpu1 idle. (2-cpu system)
-c, -np, --np, -n, and --n all do exactly the same thing.

Tim

Miguel Figueiredo Mascarenhas Sousa Filipe wrote:
> Hi,
>
> On 10/3/07, jody wrote:
> > Hi Miguel
> > I don't know if it's a typo - but actually it should be
> >   mpiexec -np 2 ./mpi-app config.ini
> > and not
> >   mpiexec -n 2 ./mpi-app config.ini
>
> thanks for the remark, you're right, but the man page says -n is a
> synonym for -np
>
> Kind regards,
Re: [OMPI users] OpenMPI binding all tasks to cpu0, leaving cpu1 idle. (2-cpu system)
Hi,

On 10/3/07, jody wrote:
> Hi Miguel
> I don't know if it's a typo - but actually it should be
>   mpiexec -np 2 ./mpi-app config.ini
> and not
>   mpiexec -n 2 ./mpi-app config.ini

thanks for the remark, you're right, but the man page says -n is a synonym
for -np

Kind regards,
--
Miguel Sousa Filipe
Re: [OMPI users] OpenMPI binding all tasks to cpu0, leaving cpu1 idle. (2-cpu system)
Hi Miguel

I don't know if it's a typo - but actually it should be

  mpiexec -np 2 ./mpi-app config.ini

and not

  mpiexec -n 2 ./mpi-app config.ini

Jody
[OMPI users] OpenMPI binding all tasks to cpu0, leaving cpu1 idle. (2-cpu system)
Hi there,

I have a 2-cpu system (linux/x86-64), running openmpi-1.1. I do not
specify a hostfile. Lately I'm having performance problems when running my
mpi-app this way:

  mpiexec -n 2 ./mpi-app config.ini

Both mpi-app processes are running on cpu0, leaving cpu1 idle. After
reading the mpirun manpage, it seems that openmpi binds tasks to cpus in a
round-robin way, meaning that this should not happen. But given my
problem, I assume that it's not detecting that this is a 2-way SMP system
(it assumes a UP system) and is binding both tasks to cpu0. Is this
correct?

The openmpi-default-hostfile says I should not specify localhost in there,
and should let the job dispatcher/rca "detect" the single-node setup.
Where should I define/configure, system-wide, that this is a single-node,
2-slot system? I would like to avoid obliging the system's users to pass a
hostfile to mpirun/mpiexec. I simply want

  mpiexec -n N ./mpi-task

to do the proper job of _really_ spreading the processes evenly between
all the system's CPUs.

Best regards, waiting for your answer.

ps.: should I upgrade to the latest OpenMPI to have my problem
"automagically" solved?

--
Miguel Sousa Filipe