Re: [OMPI users] Trouble running OpenMPI compiled for x86_64 (either m32 or m64)

2010-07-29 Thread Beatty, Daniel D CIV NAVAIR, 474300D
Greetings Ralph, Thank you so much. I look forward to seeing the new OpenMPI 1.5.x with Xgrid support. There was a point, around OSX 10.6.2 and 10.5.8, where OpenMPI 1.4.1 and v1.2.9 both worked, with only a minor glitch. Since I have a 10.6.3 version, I can retry on that and see what results I get.

Re: [OMPI users] Trouble running OpenMPI compiled for x86_64 (either m32 or m64)

2010-07-29 Thread Ralph Castain
I'm afraid we were unable to support xgrid after the 1.2 series, as no developer had access to an xgrid server. I recently received a complimentary copy of OSX-server from Apple, and I expect to restore xgrid support at some point in the 1.5 series. It looks like you are hitting some issue with 1.2…

[OMPI users] Trouble running OpenMPI compiled for x86_64 (either m32 or m64)

2010-07-29 Thread Beatty, Daniel D CIV NAVAIR, 474300D
Greetings all, I am running into some trouble using OpenMPI with OSX 10.6.4 in a Kerberized XGrid environment. Note, I did not have this trouble before in the OSX 10.5.8 Kerberized XGrid environment. The pattern of this trouble is as follows: 1. User submits an MPI job entering "mpirun -np 4…

Re: [OMPI users] Hybrid OpenMPI / OpenMP run pins OpenMP threads to a single core

2010-07-29 Thread Ralph Castain
Afraid I can only reiterate: we don't support binding of individual threads to cores at this time. You can use bind-to-socket to constrain all threads from a process to a socket, so they can at least use those cores - but the threads will move around between the cores in that socket, and more threads…
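On Linux you can watch this inheritance directly. The sketch below is my own illustration, not code from the thread; it assumes Python 3 on Linux, where `os.sched_getaffinity` is available. It shows that a worker thread reports the same allowed-CPU set as the process that spawned it, which is why socket-level binding leaves threads free to wander among that socket's cores:

```python
# Linux-only sketch: threads inherit the process CPU-affinity mask,
# so socket-level binding constrains them without pinning each one.
import os
import threading

def report(results, key):
    # Record the allowed-CPU set as seen from inside this thread.
    results[key] = os.sched_getaffinity(0)

results = {"main": os.sched_getaffinity(0)}
t = threading.Thread(target=report, args=(results, "worker"))
t.start()
t.join()

print("process allowed CPUs:", sorted(results["main"]))
print("thread  allowed CPUs:", sorted(results["worker"]))
```

The two sets come out identical because the thread inherited the process mask; under bind-to-socket both would list all the cores of one socket.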

Re: [OMPI users] Hybrid OpenMPI / OpenMP run pins OpenMP threads to a single core

2010-07-29 Thread David Akin
I use the taskset command, or just use 'top' and watch the performance. On Thu, Jul 29, 2010 at 12:02 PM, Ralph Castain wrote: > I don't see anything in your code that would bind, but I also don't see > anything that actually tells you whether or not you are bound. It appears > that MPI_Get_processor_name…

Re: [OMPI users] Hybrid OpenMPI / OpenMP run pins OpenMP threads to a single core

2010-07-29 Thread Ralph Castain
I don't see anything in your code that would bind, but I also don't see anything that actually tells you whether or not you are bound. It appears that MPI_Get_processor_name is simply returning the name of the node as opposed to the name/id of any specific core. How do you know what core the thread…
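For anyone wanting the check Ralph says is missing: on Linux the allowed-CPU set of the process tells you whether it is actually bound, whereas the hostname (roughly what MPI_Get_processor_name returns) does not. A minimal sketch of my own, assuming Python 3 on Linux:

```python
# Sketch of a binding check: the hostname identifies the node, while
# the allowed-CPU set reveals whether the process is pinned.
import os
import socket

node = socket.gethostname()        # node name, akin to MPI_Get_processor_name
allowed = os.sched_getaffinity(0)  # set of CPU ids this process may run on

print(f"{node}: allowed CPUs = {sorted(allowed)}")
if len(allowed) == 1:
    print("bound to a single core")
else:
    print(f"free to move among {len(allowed)} cores")
```

Running this under each mpirun variant discussed in the thread would show one CPU per rank when bound to core, a socket's worth of CPUs when bound to socket, and all CPUs when unbound.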

Re: [OMPI users] Hybrid OpenMPI / OpenMP run pins OpenMP threads to a single core

2010-07-29 Thread Terry Dontje
No problem; anyway, I think you are headed in the right direction now. --td David Akin wrote: Sorry for the confusion. What I need is for all OpenMP threads to *not* stay on one core. I *would* rather each OpenMP thread run on a separate core. Is it my example code? My gut reaction is no because…

Re: [OMPI users] Hybrid OpenMPI / OpenMP run pins OpenMP threads to a single core

2010-07-29 Thread David Akin
Sorry for the confusion. What I need is for all OpenMP threads to *not* stay on one core. I *would* rather each OpenMP thread run on a separate core. Is it my example code? My gut reaction is no, because I can manipulate (somewhat) the cores the threads are assigned to by adding -bysocket -bind-to-socket…

Re: [OMPI users] Hybrid OpenMPI / OpenMP run pins OpenMP threads to a single core

2010-07-29 Thread Terry Dontje
Ralph Castain wrote: On Jul 29, 2010, at 5:09 AM, Terry Dontje wrote: Ralph Castain wrote: How are you running it when the threads are all on one core? If you are specifying --bind-to-core, then of course all the threads will be on one core since we bind the process (not the thread). If you…

Re: [OMPI users] Hybrid OpenMPI / OpenMP run pins OpenMP threads to a single core

2010-07-29 Thread Ralph Castain
If you check, I expect you will find that your threads and processes are not bound to a core, but are now constrained to stay within a socket. This means that if you run more threads than there are cores in a socket, you will see threads idled due to contention. On Jul 29, 2010, at 8:29 AM, David Akin wrote…

Re: [OMPI users] Hybrid OpenMPI / OpenMP run pins OpenMP threads to a single core

2010-07-29 Thread David Akin
Adding -bysocket -bind-to-socket worked! Now to figure out why that is. I also assumed it was my code. You can try my simple example code below. On Thu, Jul 29, 2010 at 8:49 AM, Ralph Castain wrote: > > On Jul 29, 2010, at 5:09 AM, Terry Dontje wrote: > > Ralph Castain wrote: > > How are you running…

Re: [OMPI users] Hybrid OpenMPI / OpenMP run pins OpenMP threads to a single core

2010-07-29 Thread Ralph Castain
On Jul 29, 2010, at 5:09 AM, Terry Dontje wrote: > Ralph Castain wrote: >> >> How are you running it when the threads are all on one core? >> >> If you are specifying --bind-to-core, then of course all the threads will be >> on one core since we bind the process (not the thread). If you are…

Re: [OMPI users] Hybrid OpenMPI / OpenMP run pins OpenMP threads to a single core

2010-07-29 Thread Terry Dontje
Ralph Castain wrote: How are you running it when the threads are all on one core? If you are specifying --bind-to-core, then of course all the threads will be on one core since we bind the process (not the thread). If you are specifying -mca mpi_paffinity_alone 1, then the same behavior results…

Re: [OMPI users] Hybrid OpenMPI / OpenMP run pins OpenMP threads to a single core

2010-07-29 Thread David Akin
Below are all the places that could contain MCA-related settings. grep -i mca /usr/mpi/gcc/openmpi-1.4-qlc/etc/openmpi-mca-params.conf # This is the default system-wide MCA parameters defaults file. # Specifically, the MCA parameter "mca_param_files" defaults to a # "$HOME/.openmpi/mca-params.conf:$sy…
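For reference, a sketch of what an entry in one of those files looks like. The parameter name comes from earlier in this thread; the value shown is only illustrative, not a recommendation:

```
# $HOME/.openmpi/mca-params.conf: one "name = value" pair per line.
# Setting mpi_paffinity_alone = 1 binds each MPI process, which, as
# noted earlier in this thread, pins that process's OpenMP threads too.
mpi_paffinity_alone = 0
```

Settings given on the mpirun command line (e.g. -mca mpi_paffinity_alone 1) override values from these files, so it is worth grepping them, as above, when binding behavior seems to come from nowhere.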