Hi Jody,
Have you tried turning off Hyper-Threading with the Processor
Preference Pane?
The Processor preference pane is included in the CHUD package installed
with the developer tools. It lives in /Developer/Extras/
PreferencePanes; launch it and it will be added to System
Preferences.
Warner Yuen
Scientific Computing
Consulting Engineer
Apple, Inc.
email: wy...@apple.com
On Jul 11, 2009, at 9:00 AM, users-requ...@open-mpi.org wrote:
------------------------------
Message: 3
Date: Sat, 11 Jul 2009 07:56:08 -0700
From: Klymak Jody <jkly...@uvic.ca>
Subject: [OMPI users] Xgrid and choosing agents...
To: us...@open-mpi.org
Message-ID: <a6282054-7bcc-4261-9822-ad080b5a6...@uvic.ca>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
Hi all,
Sorry in advance if these are naive questions - I'm not experienced in
running a grid...
I'm using Open MPI on four Xserves, each with two quad-core Xeons.
With Hyper-Threading the 8 physical cores appear as 16, and each agent
shows up in Xgrid as having 16 processors. However, processing speed
drops once more than 8 processors are in use, so if possible I'd
prefer not to have more than 8 processes working on each machine at a
time.
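For reference, my hostfile currently looks something like the sketch below; the slots=8 entries are my guess at how to cap each node at 8 processes (the xserveNN.local names match my machines, but I'm not sure this is the right way to do it):

  # hostfile: one line per Xserve, limiting each to 8 slots
  xserve01.local slots=8
  xserve02.local slots=8
  xserve03.local slots=8
  xserve04.local slots=8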
Unfortunately, if I submit a 16-processor job to Xgrid, it all goes to
"xserve03". Even worse, the same happens if I submit two separate
8-processor jobs. Is there any way to steer jobs to less-busy agents?
I tried making a hostfile and then specifying the host:

  /usr/local/openmpi/bin/mpirun -n 8 --hostfile hostfile --host xserve01.local ../build/mitgcmuv

but I get:

  Some of the requested hosts are not included in the current
  allocation for the application:
    ../build/mitgcmuv
  The requested hosts were:
    xserve01.local
So I assume --host doesn't work with Xgrid?
Would a reasonable alternative be to simply not use Xgrid and rely on
ssh?
Thanks, Jody
--
Jody Klymak
http://web.uvic.ca/~jklymak
------------------------------
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
End of users Digest, Vol 1285, Issue 2
**************************************