The changes Jeff mentioned are not in the 1.3 branch - not sure if they will
come over there or not.
I'm a little concerned that someone in this thread is reporting the process
affinity binding changing - that shouldn't be happening, and my guess is
that something outside of our control may be changing it.
On Jun 3, 2009, at 11:40 AM, Ashley Pittman wrote:
Wasn't there a discussion about this recently on the list? OMPI binds
during MPI_Init(), so it's possible for memory to be allocated on the
wrong quad; as I recall, the discussion was about moving the binding into
the orte process?
Yes. It's been
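As an aside, if memory really is landing on the wrong quad (NUMA node), that can be confirmed from the shell; a minimal sketch, where <pid> is a placeholder for one of the MPI ranks:

    numactl --hardware         # list the NUMA nodes and their memory
    cat /proc/<pid>/numa_maps  # per-mapping page counts per node (N0=..., N1=...)

A rank whose CPUs are bound to node 0 but whose mappings show mostly N1= pages has its memory on the remote controller.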
On Wed, 2009-06-03 at 11:27 -0400, Jeff Squyres wrote:
> On Jun 3, 2009, at 10:48 AM, wrote:
>
> > For HPL, try writing a bash script that pins processes to their
> > local memory controllers using numactl before kicking off HPL. This
> > is particularly helpful when spawning more than 1 thread per process.
> Sent: Wednesday, June 03, 2009 10:27 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] Openmpi and processor affinity
>
> On Jun 3, 2009, at 10:48 AM, wrote:
>
> > For HPL, try writing a bash script that pins processes to their
> > local memory controllers using numactl before kicking off HPL.
On Jun 3, 2009, at 10:48 AM, wrote:
For HPL, try writing a bash script that pins processes to their
local memory controllers using numactl before kicking off HPL. This
is particularly helpful when spawning more than 1 thread per
process. The last line of your script should look like "numactl ..."
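To make that concrete, here is a minimal sketch of such a wrapper script. It assumes a dual-socket node with one NUMA node per socket, that the HPL binary is named xhpl, and that Open MPI 1.3.x exports OMPI_COMM_WORLD_RANK to each launched process; all of those are assumptions for illustration, not details from the thread:

    #!/bin/bash
    # Bind this rank's CPUs and memory to one socket's local controller.
    RANK=${OMPI_COMM_WORLD_RANK:-0}   # rank id exported by Open MPI 1.3.x
    NODE=$(( RANK % 2 ))              # 2 NUMA nodes on a dual-socket box
    exec numactl --cpunodebind=$NODE --membind=$NODE ./xhpl "$@"

The job is then launched as something like "mpirun -np 16 --hostfile hosts ./wrapper.sh", so every rank passes through numactl before xhpl starts.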
> [mailto:users-boun...@open-mpi.org] On
> Behalf Of Iftikhar Rathore
> Sent: Tuesday, June 02, 2009 10:25 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] Openmpi and processor affinity
>
> Gus,
> Thanks for the reply, and it was a typo (I'm sick). I have updated to
> 1.3.2 since my last post
Gus,
Thanks for the reply, and it was a typo (I'm sick). I have updated to
1.3.2 since my last post and have tried checking CPU affinity by using
top's f and j keys; it shows processes spread across all 8 cores in the
beginning, but it does eventually show all processes running on 0,
My P and Q values are chosen for a
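For a quick cross-check outside of top, the affinity mask of each rank can also be read directly; a small sketch, assuming the HPL binary is named xhpl (the process name is a placeholder):

    # Print the allowed-CPU list for every running xhpl process
    for pid in $(pgrep xhpl); do taskset -cp $pid; done

If the binding itself is being changed, the core list reported by taskset -cp will narrow over time; if it stays at 0-7, the scheduler is simply migrating unbound processes.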
On Jun 2, 2009, at 7:30 PM, Iftikhar Rathore -X (irathore - SIFY
LIMITED at Cisco) wrote:
We are using Open MPI version 1.2.8 (packaged with OFED 1.4). I am trying
to run hpl-2.0 (Linpack). We have two Intel quad-core CPUs in all our
servers (8 cores total), and all hosts in the hostfile have
Hi Iftikhar
Iftikhar Rathore wrote:
Hi
We are using Open MPI version 1.2.8 (packaged with OFED 1.4). I am trying
to run hpl-2.0 (Linpack). We have two Intel quad-core CPUs in all our
servers (8 cores total), and all hosts in the hostfile have lines that
look like "10.100.0.227 slots=8 max_slots=8".
Hi
We are using Open MPI version 1.2.8 (packaged with OFED 1.4). I am trying
to run hpl-2.0 (Linpack). We have two Intel quad-core CPUs in all our
servers (8 cores total), and all hosts in the hostfile have lines that
look like "10.100.0.227 slots=8 max_slots=8".
Now when I use mpirun (even with --mca
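For reference, a hedged sketch of what the corrected hostfile entry and an affinity-enabled launch might look like; the MCA parameter, process count, and binary name are assumptions, not taken from the truncated message:

    # hostfile entry (note the space between slots and max_slots)
    10.100.0.227 slots=8 max_slots=8

    # ask Open MPI itself to pin each rank (1.2/1.3-era MCA parameter)
    mpirun -np 16 --hostfile hosts --mca mpi_paffinity_alone 1 ./xhpl

With mpi_paffinity_alone set, Open MPI binds each process during startup, which is the binding behavior the rest of the thread is discussing.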