On 17:55 Fri 01 Jun, Rayson Ho wrote:
> We posted an MPI quiz but so far no one on the Grid Engine list has
> the answer that Jeff was expecting:
>
> http://blogs.scalablelogic.com/
That link gives me an "error 503"?
--
==
Andreas S
We posted an MPI quiz but so far no one on the Grid Engine list has
the answer that Jeff was expecting:
http://blogs.scalablelogic.com/
Others have offered interesting points, and I just want to see if
people on the Open MPI list have the *exact* answer and the first one
gets a full Cisco Live C
We would rather that OpenMPI use the shared-memory (sm) module when running
intra-node processes.
Doesn't PSM use shared memory to communicate between peers on the same
node?
Possibly, yes (I'm not sure). Even if it does, it appears to consume a
'hardware context' for each peer - this is what we would like to avoid.
On Jun 1, 2012, at 4:28 PM, Tom Harvill wrote:
> We would rather that OpenMPI use shared-mem (sm) module when running
> intra-node processes.
Doesn't PSM use shared memory to communicate between peers on the same node?
(that is hidden from us in Open MPI; I am *assuming* that internal to PSM,
On 06/01/2012 05:06 PM, Edmund Sumbar wrote:
Thanks for the tips Gus. I'll definitely try some of these, particularly
the nodes:ppn syntax, and report back.
You can check for torque support with
mpicc --showme
It should show among other things -ltorque [if it
has torque support] and -lrdmacm
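Another quick check is to ask ompi_info which components were built; if Torque
support is in, the tm components should be listed (a sketch; the exact output
wording varies by version):
ompi_info | grep -w tm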
MTL and BTL are mutually exclusive. If you use the psm MTL, there is no way you
can take advantage of the sm BTL.
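For example, something along these lines should force the BTL path (and hence
sm for intra-node traffic) instead of PSM; the executable name and process
count are placeholders, and openib is only included for the inter-node part,
assuming that BTL was built:
mpirun --mca pml ob1 --mca btl sm,openib,self -np 32 ./a.out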
george.
On Jun 2, 2012, at 05:28 , Tom Harvill wrote:
>
> Hello,
>
> This is my first post. I've searched the FAQ for (what I think are) relevant
> terms but am not finding an answer to my question.
Thanks for the tips Gus. I'll definitely try some of these, particularly
the nodes:ppn syntax, and report back.
Right now, I'm upgrading the Intel Compilers and rebuilding Open MPI.
On Fri, Jun 1, 2012 at 2:39 PM, Gus Correa wrote:
> The [Torque/PBS] syntax '-l procs=48' is somewhat troublesome,
Hi Edmund
The [Torque/PBS] syntax '-l procs=48' is somewhat troublesome,
and may not be understood by the scheduler [It doesn't
work correctly with Maui, which is what we have here. I read
people saying it works with pbs_sched and with Moab,
but that's hearsay.]
This issue comes back very often
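The nodes:ppn form is the one that generally behaves predictably; a sketch
(node and per-node core counts are illustrative only):
qsub -l nodes=4:ppn=12 job.pbs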
Hello,
This is my first post. I've searched the FAQ for (what I think are)
relevant terms but am not finding an answer to my question.
We have several dozen 32-core clustered worker nodes interconnected with
QLogic InfiniBand. Each node has two QLogic QLE7340 HCAs. As I
understand QLogic'
>On Fri, Jun 1, 2012 at 2:36 PM, Jeff Squyres wrote:
Hi Jeff,
Thanks for the prompt response, much appreciated.
This problem originally came up because we put the Open MPI tree into
our source control tool. We then make modifications, apply patches, etc.
Of course, when we check out the files, all of the timestamps change.
On Jun 1, 2012, at 2:17 PM, livelfs wrote:
>> Just curious -- do you know if there's a way I can make an
>> RHEL5-friendly SRPM on my RHEL6 cluster? I seem to have RPM 4.8.0 on my
>> RHEL6 machines.
>>
> As I mentioned in my previous mail, what about testing
>
> rpmbuild --define "_source_file
On Jun 1, 2012, at 2:16 PM, Jeremy wrote:
> However, if I do an intermediate copy of the opempi-1.6 directory then
> make fails (details attached):
> tar xvf openmpi-1.6.tar
> cp -r openmpi-1.6 openmpi-1.6.try
^^Yeah, don't do that. :-)
Open MPI, like all Automake-bootstrapped tools, has a very
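If a copy of the tree really is needed, preserving timestamps may avoid the
re-bootstrap (a sketch; re-extracting the tarball into a second directory
works too):
cp -a openmpi-1.6 openmpi-1.6.try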
Hi,
I am having trouble building Open MPI 1.6 with RHEL 6.2 and gcc from a
copy of the openmpi-1.6 directory.
Everything works OK if I do a simple build like:
tar xvf openmpi-1.6.tar
cd openmpi-1.6
configure --prefix=/opt/local/mpi
make
make install
However, if I do an intermediate copy of the openmpi-1.6 directory then make
fails (details attached):
tar xvf openmpi-1.6.tar
cp -r openmpi-1.6 openmpi-1.6.try
On 06/01/2012 02:25 PM, livelfs wrote:
On May 31, 2012, at 2:04 AM, livelfs wrote:
Since the 1.4.5 Open MPI release, it is no longer possible to build the
Open MPI binary RPM with rpmbuild --rebuild if the system rpm package version
is 4.4.x, as in SLES10, SLES11, and RHEL/CentOS 5.x.
For instance, on CentOS 5.8
NetPIPE is a 2-process, pt2pt benchmark.
Run it with 2 processes on one node, and then with the 2 processes on
different nodes.
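A sketch of the two runs (NPmpi is the MPI build of NetPIPE; nodeA and nodeB
are placeholders for your host names):
mpirun -np 2 -host nodeA,nodeA NPmpi
mpirun -np 2 -host nodeA,nodeB NPmpi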
On Jun 1, 2012, at 12:10 PM, Mudassar Majeed wrote:
> Dear Jeff,
> Can you suggest me a quick guide that can help me testing
> specifically
Dear Jeff,
Can you suggest a quick guide to help me test specifically the across-node
and within-node communication? I have a submission due today, so I have no
time for googling. If the benchmark tells me the right thing, then I will act
accordingly.
best regards,
On Jun 1, 2012, at 11:05 AM, Mudassar Majeed wrote:
> Running with enabled shared memory gave me the following error.
>
> mpprun INFO: Starting openmpi run on 2 nodes (16 ranks)...
> --
> A requested component was not found,
On Fri, Jun 1, 2012 at 8:09 AM, Jeff Squyres wrote:
> It's been a long time since I've run under PBS, so I don't remember if
> your script's environment is copied out to the remote nodes where your
> application actually runs.
>
> Can you verify that PATH and LD_LIBRARY_PATH are the same on all nodes?
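One quick way to check is to see what a non-interactive shell on each node
reports (a sketch; substitute your actual node names for node1 and node2):
ssh node1 env | grep -E '^(PATH|LD_LIBRARY_PATH)='
ssh node2 env | grep -E '^(PATH|LD_LIBRARY_PATH)='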
Running with shared memory enabled gave me the following error.
mpprun INFO: Starting openmpi run on 2 nodes (16 ranks)...
--
A requested component was not found, or was unable to be opened. This
means that this component is
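A quick way to see which components a given build actually contains is
ompi_info; a sketch for the BTLs (grep for whatever component the full error
message names):
ompi_info | grep btl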
...and exactly how you measured. You might want to run a well-known benchmark,
like NetPIPE or the OSU pt2pt benchmarks.
Note that the *first* send between any given peer pair is likely to be slow
because OMPI uses a lazy connection scheme (i.e., the connection is made behind
the scenes). Subsequent sends should be much faster.
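If you go the OSU route, the same two-placement comparison looks roughly like
this (osu_latency comes from the OSU micro-benchmarks, and the host names are
placeholders):
mpirun -np 2 -host nodeA,nodeA osu_latency
mpirun -np 2 -host nodeA,nodeB osu_latency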
This should not happen. Typically, intra-node communication latency is much
lower than inter-node latency.
Can you please tell us how you ran your application?
Thanks
--
Sent from my iPhone
On Jun 1, 2012, at 7:34 AM, Mudassar Majeed wrote:
> Dear MPI people,
> Can someone tell me why MPI_Ssend takes more time when the two MPI processes
> are on the same node?
Dear MPI people,
Can someone tell me why MPI_Ssend takes more time when the two MPI processes
are on the same node? The same two processes on different nodes take much less
time for the same message exchange. I am using a supercomputing center, and
this is what happens.
On Jun 1, 2012, at 10:03 AM, Edmund Sumbar wrote:
> I ran the following PBS script with "qsub -l procs=128 job.pbs". Environment
> variables are set using the Environment Modules packages.
>
> echo $HOSTNAME
> which mpiexec
> module load library/openmpi/1.6-intel
This *may* be the problem here.
On Fri, Jun 1, 2012 at 5:00 AM, Jeff Squyres wrote:
> Try running:
>
> which mpirun
> ssh cl2n022 which mpirun
> ssh cl2n010 which mpirun
>
> and
>
> ldd your_mpi_executable
> ssh cl2n022 ldd your_mpi_executable
> ssh cl2n010 ldd your_mpi_executable
>
> Compare the results and ensure that you're finding the same mpirun on all
> nodes, and the same libmpi.so on all nodes.
Understood; I asked Denis to re-install and send all the standard info (output
from configure, make, etc.).
On Jun 1, 2012, at 9:53 AM, Aurélien Bouteiller wrote:
> Jeff,
>
> I had the same issue happening to me. It is a pretty new thing, I think it
> goes back to about a month ago. I thought it was a freak event from stale
> configure files.
Jeff,
I had the same issue happening to me. It is a pretty new thing; I think it goes
back to about a month ago. I thought it was a freak event from stale configure
files and didn't report it. Now that others are experiencing it too, there
might be something to investigate.
Aurelien
If you could send all the info from your original build (or re-do it again
without --enable-binaries) listed here, that would be helpful:
http://www.open-mpi.org/community/help/
Thanks!
On Jun 1, 2012, at 9:32 AM, denis cohen wrote:
> I have not tried separating the two options. The --help in configure
> indicates that --enable-binaries is enabled by default; --with-devel-headers
> isn't.
I have not tried separating the two options. The --help in configure
indicates that --enable-binaries is enabled by default; --with-devel-headers
isn't. I'm not a specialist myself, so I don't know the innards of Open MPI.
The only thing I know is that it's now working for me.
I can try something if that would help.
On Jun 1, 2012, at 8:20 AM, Aurélien Bouteiller wrote:
> You need to pass the following option to configure:
> --with-devel-headers --enable-binaries
>
> I don't know exactly why the default is not to build them anymore, this is a
> bit confusing.
That is not correct -- the default is to build them.
Thanks Aurelien.
This fixed it.
Denis
On Fri, Jun 1, 2012 at 2:20 PM, Aurélien Bouteiller
wrote:
> You need to pass the following option to configure:
> --with-devel-headers --enable-binaries
>
> I don't know exactly why the default is not to build them anymore, this is a
> bit confusing.
>
>
You need to pass the following option to configure:
--with-devel-headers --enable-binaries
I don't know exactly why the default is not to build them anymore; this is a
bit confusing.
Aurelien
On 1 June 2012, at 04:16, denis cohen wrote:
> Hello,
>
> I am trying to install openmpi-1.6 using the Intel compilers.
Try running:
which mpirun
ssh cl2n022 which mpirun
ssh cl2n010 which mpirun
and
ldd your_mpi_executable
ssh cl2n022 ldd your_mpi_executable
ssh cl2n010 ldd your_mpi_executable
Compare the results and ensure that you're finding the same mpirun on all
nodes, and the same libmpi.so on all nodes. There may well be ano
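While sorting that out, one workaround is to let mpirun export the paths
itself: invoking mpirun by its absolute path, or passing --prefix, sets PATH
and LD_LIBRARY_PATH on the remote nodes. A sketch with a placeholder install
prefix:
mpirun --prefix /path/to/openmpi -np 16 ./your_mpi_executable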
Can you send the information listed here:
http://www.open-mpi.org/community/help/
On Jun 1, 2012, at 4:16 AM, denis cohen wrote:
> Hello,
>
> I am trying to install openmpi-1.6 using the Intel compilers.
> Looks like everything went fine but there was no mpicc, mpif90, or mpic++
> created during the installation process.
Hello,
I am trying to install openmpi-1.6 using the Intel compilers.
It looks like everything went fine, but no mpicc, mpif90, or mpic++ were
created during the installation process.
I made links to opal_wrapper (and also to orterun for mpirun), but then
when trying to compile the examples/hello_c
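For comparison, a configure recipe of the sort that normally does install the
wrapper compilers under the prefix (the prefix is a placeholder; icc, icpc, and
ifort are the Intel compiler drivers):
./configure --prefix=/path/to/openmpi-1.6-intel CC=icc CXX=icpc F77=ifort FC=ifort
make all
make install
After make install, mpicc, mpif90, and friends should appear under the prefix's
bin directory.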