On 3/1/10 1:51 AM, Ralph Castain wrote:
Which version of OMPI are you using? We know that the 1.2 series was unreliable
about removing the session directories, but 1.3 and above appear to be quite
good about it. If you are having problems with the 1.3 or 1.4 series, I would
definitely like to know about it.
When I was at LANL, I ran …
http://www.open-mpi.org/nightly/trunk/
I'm not sure this patch will solve your problem, but it is worth a try.
On Mar 1, 2010, at 3:51 AM, Federico Golfrè Andreasi wrote:
> Ok, thank you !
>
> Where can I find instructions for downloading the developer's copy of OpenMPI,
> if that is possible?
>
Hi,
The problem is not there.
I added a "free" and a check on the return value of malloc, but I still
get the segfault. (Updated source code attached.)
I discovered that the array size I can send is limited to 64 kB: sending
8192 doubles works, but anything larger causes a segfault. I also changed
… in order to send
Hello.
It looks like you allocate memory in every loop iteration on process #0
and don't free it, so malloc fails on some iteration.
On Sun, 28/02/2010 at 19:22 +0100, TRINH Minh Hieu wrote:
> Hello,
>
> I have some problems running MPI on my heterogeneous cluster. More
> precisely, I got segmentation faults
Ok, thank you !
Where can I find instructions for downloading the developer's copy of OpenMPI,
if that is possible?
I'd like to test it, just to be sure that the patch solves the problem.
Can you let me know where that patch is available?
Thank you very much,
Federico
2010/2/27 Ralph
Hi all,
Running on a large cluster of 8-core nodes. I understand
that the SM BTL is a "good thing". But I'm curious about
its use of memory-mapped files. I believe these files will
be in $TMPDIR, which defaults to /tmp.
In our cluster, the compute nodes are stateless, so /tmp
is actually in RAM.
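If /tmp is RAM-backed, as it typically is on stateless nodes, one option is to point Open MPI's session directory base at a disk-backed filesystem instead. A hedged sketch, assuming Open MPI 1.3 or later, where the base directory is controlled by the orte_tmpdir_base MCA parameter; /scratch/ompi-tmp is an illustrative path, not one from this thread:

```
# Put the session directories (including the SM BTL's memory-mapped
# files) under a disk-backed path instead of a RAM-backed /tmp.
mpirun --mca orte_tmpdir_base /scratch/ompi-tmp -np 64 ./my_app
```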