We are starting to play with FCA on our Mellanox-based IB fabric.
I noticed from ompi_info that FCA support for a lot of the collectives is disabled
by default:
Any idea why only barrier/bcast/reduce are on by default and all the more
complex collectives are disabled?
MCA coll: parame
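For reference, the per-collective switches are exposed as MCA parameters of the
coll/fca component; the parameter names below are an assumption based on the usual
coll_fca_enable_<collective> naming and should be verified against ompi_info on
your installation:

$ ompi_info --param coll fca
$ mpirun --mca coll_fca_enable_allreduce 1 --mca coll_fca_enable_allgather 1 ./myapp

(./myapp stands in for the real application.)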
On 4/2/13 11:03 PM, Gus Correa wrote:
On 04/02/2013 11:40 AM, Duke Nguyen wrote:
On 3/30/13 8:46 PM, Patrick Bégou wrote:
Ok, so your problem is identified as a stack size problem. I ran into
these limitations using Intel Fortran compilers on large data problems.
First, it seems you can incre
On 4/2/13 10:45 PM, Ralph Castain wrote:
Hmmm...tell you what. I'll add the ability for OMPI to set the limit to a
user-specified level upon launch of each process. This will give you some
protection and flexibility.
That would be excellent ;)
I forget, so please forgive the old man's fadi
On 04/02/2013 11:40 AM, Duke Nguyen wrote:
On 3/30/13 8:46 PM, Patrick Bégou wrote:
Ok, so your problem is identified as a stack size problem. I ran into
these limitations using Intel Fortran compilers on large data problems.
First, it seems you can increase your stack size as "ulimit -s
unlim
Hello:
I'm trying to produce a performance model for a piece of software, and I'd
like to know the expected performance behavior of MPI_Scatter and
MPI_Gather as the number of processors increases. I've searched to no
avail for a publication on this topic.
Can you point me in the direction of so
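As a rough guide only (implementations vary, and Open MPI's tuned collectives pick
algorithms at run time), rooted collectives such as MPI_Scatter/MPI_Gather are often
built on a binomial tree, so with P ranks and n bytes per rank the root pays roughly
ceil(log2 P) message startups plus (P-1)*n bytes of bandwidth. A minimal C
micro-benchmark sketch to measure this empirically while varying -np (buffer size
and iteration count are arbitrary choices):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Time MPI_Scatter + MPI_Gather for a fixed per-rank message size,
 * averaged over several iterations.  Run with increasing -np to see
 * how the cost grows with the number of ranks. */
int main(int argc, char **argv)
{
    int rank, size;
    const int count = 4096;   /* doubles per rank (arbitrary) */
    const int iters = 100;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *chunk = malloc(count * sizeof *chunk);   /* per-rank piece   */
    double *whole = NULL;                            /* root-only buffer */
    if (rank == 0)
        whole = calloc((size_t)count * size, sizeof *whole);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        MPI_Scatter(whole, count, MPI_DOUBLE,
                    chunk, count, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        MPI_Gather(chunk, count, MPI_DOUBLE,
                   whole, count, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("np=%d  avg scatter+gather = %g s\n", size, (t1 - t0) / iters);

    free(chunk);
    free(whole);
    MPI_Finalize();
    return 0;
}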
Hmmm...tell you what. I'll add the ability for OMPI to set the limit to a
user-specified level upon launch of each process. This will give you some
protection and flexibility.
I forget, so please forgive the old man's fading memory - what version of OMPI
are you using? I'll backport a patch for
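Until such an option exists, one workaround sketch (whether an unprivileged shell is
allowed to raise the limit depends on the hard limit configured on each node) is to
launch the application through a thin shell wrapper that raises the stack limit
before exec'ing it, e.g.

$ mpirun -np 16 sh -c "ulimit -s unlimited && exec ./myapp"

where ./myapp is a placeholder for the real executable.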
On 3/30/13 8:46 PM, Patrick Bégou wrote:
Ok, so your problem is identified as a stack size problem. I ran into
these limitations using Intel Fortran compilers on large data problems.
First, it seems you can increase your stack size as "ulimit -s
unlimited" works (you didn't enforce the system
On 4/2/13 6:50 PM, Reuti wrote:
Hi,
On 30.03.2013 at 14:46, Patrick Bégou wrote:
Ok, so your problem is identified as a stack size problem. I ran into these
limitations using Intel Fortran compilers on large data problems.
First, it seems you can increase your stack size as "ulimit -s unli
On 4/2/13 6:42 PM, Reuti wrote:
/usr/local/bin/mpirun -npernode 1 -tag-output sh -c "ulimit -a"
You are right :)
$ /usr/local/bin/mpirun -npernode 1 -tag-output sh -c "ulimit -a"
[1,0]:core file size (blocks, -c) 0
[1,0]:data seg size (kbytes, -d) unlimited
[1,0]:scheduling
Hi,
On 30.03.2013 at 15:35, Gustavo Correa wrote:
> On Mar 30, 2013, at 10:02 AM, Duke Nguyen wrote:
>
>> On 3/30/13 8:20 PM, Reuti wrote:
>>> On 30.03.2013 at 13:26, Tim Prince wrote:
>>>
On 03/30/2013 06:36 AM, Duke Nguyen wrote:
> On 3/30/13 5:22 PM, Duke Nguyen wrote:
>> On 3
Hi,
On 30.03.2013 at 14:46, Patrick Bégou wrote:
> Ok, so your problem is identified as a stack size problem. I ran into these
> limitations using Intel Fortran compilers on large data problems.
>
> First, it seems you can increase your stack size as "ulimit -s unlimited"
> works (you didn't
Hi,
On 02.04.2013 at 13:22, Duke Nguyen wrote:
> On 4/1/13 9:20 PM, Ralph Castain wrote:
>> It's probably the same problem - try running 'mpirun -npernode 1 -tag-output
>> ulimit -a' on the remote nodes and see what it says. I suspect you'll find
>> that they aren't correct.
>
> Somehow I co
On 4/1/13 9:20 PM, Ralph Castain wrote:
It's probably the same problem - try running 'mpirun -npernode 1 -tag-output ulimit
-a' on the remote nodes and see what it says. I suspect you'll find that they
aren't correct.
Somehow I could not run your advised CMD:
$ qsub -l nodes=4:ppn=8 -I
qsub
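A likely reason the advised command fails as written is that ulimit is a shell
builtin rather than an executable that mpirun can launch directly; wrapping it in a
shell, as in Reuti's corrected command earlier in these results, works:

$ mpirun -npernode 1 -tag-output sh -c "ulimit -a"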
Dear all,
I am new to Open MPI and am writing a parallel processing program using Open MPI in
C++. I would like to use the function MPI_Allreduce(), but my sendbuf
and recvbuf are pointers/2D arrays.
Is it possible to pass in and out the pointers/arrays using the MPI_Allreduce()
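This works provided the 2D data sits in one contiguous allocation, since
MPI_Allreduce reduces a flat buffer of count elements; an array of separately
malloc'ed rows would have to be reduced row by row instead. A minimal C sketch under
that assumption (matrix sizes and names are illustrative only):

#include <mpi.h>
#include <stdlib.h>

/* Allocate an nrows x ncols matrix as one contiguous block plus a row-pointer
 * table, so it indexes like m[i][j] but can be handed to MPI as one buffer. */
static double **alloc_matrix(int nrows, int ncols)
{
    double  *block = malloc((size_t)nrows * ncols * sizeof *block);
    double **rows  = malloc((size_t)nrows * sizeof *rows);
    for (int i = 0; i < nrows; i++)
        rows[i] = block + (size_t)i * ncols;
    return rows;
}

int main(int argc, char **argv)
{
    int rank, nrows = 4, ncols = 5;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double **send = alloc_matrix(nrows, ncols);
    double **recv = alloc_matrix(nrows, ncols);

    for (int i = 0; i < nrows; i++)
        for (int j = 0; j < ncols; j++)
            send[i][j] = rank + i * ncols + j;

    /* &send[0][0] points at the whole contiguous block, so the 2D array
     * is reduced element-wise in a single call. */
    MPI_Allreduce(&send[0][0], &recv[0][0], nrows * ncols,
                  MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    free(send[0]); free(send);
    free(recv[0]); free(recv);
    MPI_Finalize();
    return 0;
}

If the reduction may overwrite the input, MPI_IN_PLACE can be passed as the send
buffer instead of a separate send matrix.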