Hi Jack,
On Saturday 31 January 2009 02:03:23 pm Jack Bryan wrote:
> I am installing BLACS in order to install PCSDP - a parallel interior-
> point solver for linear programming.
>
> I need to install it on Open MPI 1.2.3 platform.
>
> I have installed BLAS and LAPACK successfully.
>
> Now I need to i
Is there anyone else who experienced this problem with a RHEL-based
distro who can upgrade to 5.3 and confirm my experience?
--
Prentice
No. I was running just a simple "Hello, world" program to test v1.3 when
these errors occurred. And as soon as I reverted to 1.2.8, the errors
disappeared.
Interestingly enough, I just upgraded my cluster to PU_IAS 5.3, and now
I can't reproduce the problem, but HPL fails with a segfault, which I'll
I am using Open MPI to run a job on 4 nodes, 2 processes per node. It seems
like 5 out of the 8 processes executed the app successfully and 3 of them
did not. Here is the error message I got. The last thing I did in the code
is an MPI_Barrier call and it never returns (probably because 3 of the
pro
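MPI_Barrier only returns once every rank in the communicator has entered it, so if even one process dies or never reaches the call, all the remaining ranks block there forever. A minimal reproducer along these lines (a sketch, an assumption about what the poster's code does, not the actual code) can help separate an application bug from an MPI/installation problem:

```c
/* Hypothetical minimal reproducer: every rank announces itself,
 * then waits at a barrier. If 3 of 8 ranks never print a line or
 * crash before the barrier, the other 5 will hang in MPI_Barrier. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("rank %d of %d reached the barrier\n", rank, size);
    fflush(stdout);

    MPI_Barrier(MPI_COMM_WORLD);  /* blocks until ALL ranks arrive */

    MPI_Finalize();
    return 0;
}
```

Running this with `mpirun -np 8 --hostfile hosts ./barrier_test` and comparing which ranks printed the line against which nodes they ran on would narrow the failure down to specific hosts.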
On 02/02/09 06:12, Reuti wrote:
On 02.02.2009, at 11:31, Sangamesh B wrote:
On Mon, Feb 2, 2009 at 12:15 PM, Reuti
wrote:
On 02.02.2009, at 05:44, Sangamesh B wrote:
On Sun, Feb 1, 2009 at 10:37 PM, Reuti
wrote:
On 01.02.2009, at 16:00, Sangamesh B wrote:
On Sat, Jan 31, 2009 at 6:27 P
Jeff,
thanks a lot for taking the time to look at my file, and sorry for not
having noticed that part of the README; it went straight past me.
Anyway, with your suggestion it works perfectly.
Thanks again, Daniel.
* Jeff Squyres [02/01/2009 06:49]:
> It looks like you compiled Open MPI against t
Hi Jeff and Tony,
I just tried 1.2.2 on my Itanium and got the same error.
Our code uses 1.2.2, but I built it back in 2007 with
V8 of the compiler. Now I am using V10.1. So, perhaps
the code is incompatible with Intel V10.1.
Joe
> -----Original Message-----
> From: users-boun...@open-mpi.
On Feb 2, 2009, at 2:55 AM, jody wrote:
Hi Ralph
The new options are great stuff!
Following your suggestion, I downloaded and installed
http://www.open-mpi.org/nightly/trunk/openmpi-1.4a1r20392.tar.gz
and tested the new options. (I have a simple cluster of
8 machines over tcp.) Not everything
Okay, I have this fixed and the man page updated as of r20396.
Thanks again for finding and reporting this bug!
Ralph
On Feb 2, 2009, at 5:55 AM, Ralph Castain wrote:
Hmnmm...well, it shouldn't crash (so I'll have to fix that), but it
should fail. The --report-pid option takes an argument,
Great feedback - thanks!
I'll look into these and see what's going on. We had tested things on
a couple of systems, but it sounds like maybe there are some system-
specific issues we are encountering. Let me explore a little to see
what might be happening.
Ralph
On Feb 2, 2009, at 2:55 AM
Hmnmm...well, it shouldn't crash (so I'll have to fix that), but it
should fail. The --report-pid option takes an argument, which wasn't
provided here. I'll check the man page to ensure it is up-to-date.
What it should tell you is that --report-pid takes either a '-' to
indicate that the pi
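Based on Ralph's description, a valid invocation would look something like the following. (This is a sketch from the truncated explanation above: only the '-' form is confirmed by the text; the exact semantics of other argument forms on the 1.4 trunk are an assumption, so check `mpirun --help` on the build in question.)

```shell
# '-' as the argument: report mpirun's pid (per Ralph's description)
mpirun --report-pid - -np 2 ./MPITest

# the crash in jody's report came from omitting the argument entirely:
# mpirun --report-pid -np 2 ./MPITest   # <-- segfaulted instead of erroring out
```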
On 02.02.2009, at 11:31, Sangamesh B wrote:
On Mon, Feb 2, 2009 at 12:15 PM, Reuti
wrote:
On 02.02.2009, at 05:44, Sangamesh B wrote:
On Sun, Feb 1, 2009 at 10:37 PM, Reuti wrote:
On 01.02.2009, at 16:00, Sangamesh B wrote:
On Sat, Jan 31, 2009 at 6:27 PM, Reuti
w
On Mon, Feb 2, 2009 at 12:15 PM, Reuti wrote:
> On 02.02.2009, at 05:44, Sangamesh B wrote:
>
>> On Sun, Feb 1, 2009 at 10:37 PM, Reuti wrote:
>>>
>>> On 01.02.2009, at 16:00, Sangamesh B wrote:
>>>
On Sat, Jan 31, 2009 at 6:27 PM, Reuti
wrote:
>
> On 31.01.2009, at 08:49,
Hi Ralph
one more thing I noticed while trying out orte_iof again.
The option --report-pid crashes mpirun:
[jody@localhost neander]$ mpirun -report-pid -np 2 ./MPITest
[localhost:31146] *** Process received signal ***
[localhost:31146] Signal: Segmentation fault (11)
[localhost:31146] Signal code:
Hi Ralph
The new options are great stuff!
Following your suggestion, I downloaded and installed
http://www.open-mpi.org/nightly/trunk/openmpi-1.4a1r20392.tar.gz
and tested the new options. (I have a simple cluster of
8 machines over tcp.) Not everything worked as specified, though:
* timestamp-ou
On 01.02.2009, at 12:43, Jeff Squyres wrote:
Could the nodes be running out of shared memory and/or temp
filesystem space?
I still have this issue, and it happens only from time to time. But
despite the fact that SGE's qrsh is used automatically, more severe
is the fact that on the slave
On 02.02.2009, at 05:44, Sangamesh B wrote:
On Sun, Feb 1, 2009 at 10:37 PM, Reuti
wrote:
On 01.02.2009, at 16:00, Sangamesh B wrote:
On Sat, Jan 31, 2009 at 6:27 PM, Reuti wrote:
On 31.01.2009, at 08:49, Sangamesh B wrote:
On Fri, Jan 30, 2009 at 10:20 PM, Reuti