On Jul 23, 2007, at 6:43 AM, Biagio Cosenza wrote:
Hi,
Running conventional TCP/IP, everything is safe AFAICS - all processes will
be killed on all involved nodes. The problem arises with OFED, with
which we also see this behavior using MVAPICH.
Unfortunately we have only a limited number of nodes with InfiniBand,
and hence time to test and
It *should* work. We stopped developing for the Cisco (mVAPI) stack
a while ago, but as far as we know, it still works fine. See:
http://www.open-mpi.org/faq/?category=openfabrics#vapi-support
That being said, your approach of "if it ain't broke, don't fix it" is
certainly quite
openmpi-1.2.3 compiled on Debian Linux amd64 etch with
./configure CC=/opt/intel/cce/9.1.042/bin/icc
CXX=/opt/intel/cce/9.1.042/bin/icpc F77=/opt/intel/fce/9.1.036/bin/ifort
FC=/opt/intel/fce/9.1.036/bin/ifort --with-libnuma=/usr/lib
ompi_info | grep libnuma
ompi_info | grep maffinity
reported
Hi Henk,
SLIM H.A. wrote:
Dear Pak Lui
I can delete the (sge) job with qdel -f so that it disappears from the
job list, but the application processes keep running, including the
shepherds. I have to kill them with -15.
For some reason the kill -15 does not reach mpirun. (We use such a
parameter to mpirun on our myrinet
Thanks, Brian. That did the trick.
-Ken
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Brian Barrett
> Sent: Thursday, July 19, 2007 3:39 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] MPI_File_set_view rejecting subarray
Yes...it would indeed.
On 7/23/07 9:03 AM, "Kelley, Sean" wrote:
> Would this logic be in the bproc pls component?
> Sean
>
> From: users-boun...@open-mpi.org on behalf of Ralph H Castain
> Sent: Mon 7/23/2007 9:18 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] orterun --bynode/--byslot problem
Good morning all,
I have been very impressed so far with OpenMPI on one of our smaller
clusters running GNU compilers and Gig-E interconnects, so I am
considering a build on our large cluster. The potential problem is that
the compilers are Intel 8.1 versions and the InfiniBand is supported
Hi Henk,
The sge script should not require any extra parameter. The qdel command
should send the kill signal to mpirun and also remove the SGE allocated
tmp directory (in something like /tmp/174.1.all.q/) which contains the
OMPI session dir for the running job, and in turn would cause orted
> > From: Jeff Squyres
> >
> > Can you be a bit more specific than "it dies"? Are you talking about
> > mpif90/mpif77, or your app?
>
> Sorry, stupid me. When executing mpif90 or mpif77 I get a segfault and it
> doesn't compile. I've tried both with and without input (i.e.,
No, byslot appears to be working just fine on our bproc clusters (it is the
default mode). As you probably know, bproc is a little strange in how we
launch - we have to launch the procs in "waves" that correspond to the
number of procs on a node.
In other words, the first "wave" launches a proc
Hi
I am in the process of moving a parallel program from our old 32-bit
(Xeon @ 2.8 GHz) Linux cluster to a new EM64T (Intel Xeon 5160 @ 3.00 GHz)
based Linux cluster.
The OS on the old cluster is Red Hat 9, and Fedora 7 on the new cluster.
I have installed the Intel Fortran compiler
Hello,
I'm working on a parallel real-time renderer: an embarrassingly parallel
problem where latency is the main barrier to high performance.
Two observations:
1) I did a simple "ping-pong" test (the master does a Bcast + an IRecv for
each node + a Waitall) similar to effective renderer workload.
Call for Participation: EuroPVM/MPI'07
http://www.pvmmpi07.org
Please join us for the 14th European PVM/MPI Users' Group
conference, which will be held in Paris, France from
September 30 to October 3. This conference is a forum for
the discussion and presentation of recent
I am using OpenMPI 1.2.3 with SGE 6.0u7 over InfiniBand (OFED 1.2),
following the recommendation in the OpenMPI FAQ
http://www.open-mpi.org/faq/?category=running#run-n1ge-or-sge
The job runs but when the user wants to delete the job with the qdel
command, this fails. Does the mpirun command