I'm just running it using mpirun from the command line. Thanks for the
reply.
> On Thu, Sep 18, 2008 at 4:35 PM, John Hearns wrote:
>
>> 2008/9/18 Alex Wolfe
>>
>>> Hello,
>>>
>>> I am trying to run the HPL benchmarking software on a new 1024 core cluster
>>> that we have set up. Unfortunately I'm hitting the "mca_oob_tcp_accept:
>>> accept() failed: Too many open files (24)" error known in version 1.2 of
>>> openmpi. No
On Sep 18, 2008, at 10:45 AM, Ethan Mallove wrote:
Ah, yeah, ok, now I see why you would call it --mpi-install-scratch, so
that it matches the MTT ini section name. Sure, that works for me.
Since this does seem like a feature that should eventually
propagate to all the other phases
It turns out you debugged mpirun. I was actually hoping you would attach to your program, PruebaSumaParalela.out, on both nodes and dump each of their stacks.
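If dbx is not available, the same stack dump can be obtained with gdb (a sketch; `<pid>` is a placeholder for the PID of PruebaSumaParalela.out on each node):

```
# Attach to the running rank (not mpirun), dump all thread stacks, then detach.
gdb -p <pid> -batch -ex "thread apply all bt"
```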
Is there a reason why you are using 1.2.2 instead of 1.2.7 or something from
the 1.3 branch? I am wondering if maybe there is some sort
Hello,
I am trying to run the HPL benchmarking software on a new 1024 core cluster
that we have set up. Unfortunately I'm hitting the "mca_oob_tcp_accept:
accept() failed: Too many open files (24)" error known in version 1.2 of
openmpi. No matter what I set the file-descriptor limit for my account
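For context, the per-process descriptor limit the error refers to can be inspected per shell as below (a sketch; note the limit must be in effect where mpirun and the orted daemons actually run, and the soft limit can only be raised up to the hard limit):

```shell
ulimit -n    # current soft limit on open file descriptors
ulimit -Hn   # hard limit; the soft limit may be raised up to this value
```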
Hello,
Here I enclose the two files. I should tell you that, before, I was not
using the real IPs. The real ones are in the files.
Thanks again.
Sofia
- Original Message -
From: "Terry Dontje"
To:
Sent: Thursday, September 18, 2008 2:17
On Sep 18, 2008, at 10:08 AM, Tim Mattox wrote:
Can anyone check it quick for vpath builds?
I'll try to give it a whirl in a bit.
Just an FYI, I've already run into the "downside" I mentioned once
this week.
I had to rerun my MTT to get access to the build directory, since it
was on /tmp
I guess I should comment on Jeff's comments too.
On Thu, Sep 18, 2008 at 9:00 AM, Jeff Squyres wrote:
> On Sep 16, 2008, at 12:07 PM, Ethan Mallove wrote:
>
>> What happens if one uses --local-scratch, but leaves out the
>> --scratch option? In this case, I think MTT should
OK, so how about calling it --mpi-build-scratch?
Once we get a consensus on what to call it, I can commit the patch to svn.
Can anyone check it quick for vpath builds?
Just an FYI, I've already run into the "downside" I mentioned once this week.
I had to rerun my MTT to get access to the build
Patch looks good. Please also update the CHANGES file (this file
reflects bullets for things that have happened since the core testers
branch).
On Sep 15, 2008, at 6:15 PM, Tim Mattox wrote:
Hello,
Attached is a patchfile for the mtt trunk that adds a
--local-scratch
option to
On Sep 16, 2008, at 12:07 PM, Ethan Mallove wrote:
What happens if one uses --local-scratch, but leaves out the
--scratch option? In this case, I think MTT should assume
--scratch is the same as --local-scratch.
In this case, my $0.02 is that it should be an error. --scratch
implies a
I just replied on the other thread.
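For reference, a hypothetical invocation combining the two options under discussion (option name per this thread's suggestion; the paths are invented for illustration):

```
# --scratch: the persistent scratch tree kept for later inspection;
# --mpi-install-scratch (the name being discussed) would point the
# MPI install/build phase at faster node-local disk such as /tmp.
client/mtt --scratch /home/me/mtt-scratch \
           --mpi-install-scratch /tmp/mtt-build \
           my-mtt-config.ini
```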
On Sep 17, 2008, at 5:49 PM, Shafagh Jafer wrote:
OK, I looked at the errors closely; it looks like the problem comes
from the "namespace MPI{.." on line 136 of "mpicxx.h" and everywhere
that this namespace (MPI) is used. Here are the errors:
I believe that the problem is your "-DMPI" in your Makefile. The line
in question in mpicxx.h is:
namespace MPI {
When you use -DMPI, the preprocessor replaces this with:
namespace 1 {
which is not legal.
In short, the application using the name "MPI" is illegal. That name
is reserved
It might also be interesting to see the result of "ifconfig -a" on both
of your machines.
--td
Date: Thu, 18 Sep 2008 10:19:37 +0200
From: "Sofia Aparicio Secanellas"
Subject: Re: [OMPI users] Problem with MPI_Send and MPI_Recv
To: "Open MPI Users"
Hello Terry,
Finally, I have installed dbx. I enclose a file with the result that I get
when I type "dbx - PID of mpirun..." and then "where" on computer 10.4.5.123.
Do you have any idea what could be the problem?
Thanks a lot!!
Sofia
- Original Message -
From: "Terry Dontje"
Ok, Thanks.
2008/9/17 Josh Hursey
> It looks like the configure script is picking up the wrong lib-directory
> (/home/osa/blcr/lib64 instead of /home/osa/blcr/lib):
> gcc -o conftest -O3 -DNDEBUG -finline-functions -fno-strict-aliasing
> -pthread \
>
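One way around this, assuming the standard Open MPI configure switches for BLCR support, is to name the library directory explicitly rather than letting configure probe lib64 (paths taken from the thread):

```
./configure --with-blcr=/home/osa/blcr \
            --with-blcr-libdir=/home/osa/blcr/lib
```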
Hello Terry,
Yes, "edu" is the user and 10.4.5.126 is the IP address. Because the two
computers have different usernames, I think I need to include the username;
otherwise it does not work. In fact, on the computer 10.4.5.123 I write:
mpirun -np 2 --host 10.4.5.123,edu@10.4.5.126 --prefix
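An alternative to putting the username on the mpirun line is to record it in the SSH client configuration on the launching machine (a sketch, assuming the default ssh-based startup; the host and user are the ones from this thread):

```
# ~/.ssh/config on 10.4.5.123
Host 10.4.5.126
    User edu
```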
Thanks a lot. The problem I have is that I have installed openmpi-1.2.7 and
everything went well, and I tested hello_c and ring_c. But the problem now is
that when I use Open MPI's mpicc and mpic++ in my Makefile I get errors
reported from inside Open MPI's source code. I am attaching my
In OMPI these are binaries, not scripts. Not human readable.
[tjf@rscpc28 NH2+]$ ll /usr/local/bin/mpif90
lrwxrwxrwx 1 root root 12 2008-03-05 14:39 /usr/local/bin/mpif90 ->
opal_wrapper*
[tjf@rscpc28 NH2+]$ file /usr/local/bin/opal_wrapper
/usr/local/bin/opal_wrapper: ELF 32-bit LSB
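The same check can be run on any installation (a sketch; mpicc may live elsewhere, so this falls back to /bin/sh just to keep the commands runnable):

```shell
wrapper=$(command -v mpicc || echo /bin/sh)  # fall back if mpicc is not on PATH
ls -l "$wrapper"                             # a symlink to opal_wrapper is normal
head -c 4 "$wrapper" | od -An -c             # ELF binaries start with \177 E L F
```

Seeing "unreadable characters" when opening such a file in an editor is expected for a compiled binary, not a sign of corruption.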
I am trying to figure out a problem that I am stuck on :-( Could anyone please
tell me what their mpicc/mpic++ looks like? Is there anything readable inside
these files? Mine look corrupted and are filled with unreadable
characters. Please let me know.
Hi Josh!
First of all, thanks a lot for replying. :-)
When executing this checkpoint command, the running application aborts
immediately, even though I did not specify the "--term" option:
--
mpirun noticed that process
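For reference, the invocation in question (a sketch of the Open MPI checkpoint/restart tooling; `<pid>` is a placeholder for the PID of mpirun, not of the application processes):

```
ompi-checkpoint <pid>          # checkpoint; the job should keep running
ompi-checkpoint --term <pid>   # checkpoint, then terminate the job
```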