On Jun 2, 2009, at 7:30 PM, Iftikhar Rathore -X (irathore - SIFY
LIMITED at Cisco) wrote:

> We are using openmpi version 1.2.8 (packaged with OFED-1.4). I am
> trying to run HPL-2.0 (Linpack). We have two Intel quad-core CPUs in
> each of our servers (8 cores total), and all hosts in the hostfile have
Hi Iftikhar

Iftikhar Rathore wrote:
> Hi
> We are using openmpi version 1.2.8 (packaged with OFED-1.4). I am trying
> to run HPL-2.0 (Linpack). We have two Intel quad-core CPUs in each of our
> servers (8 cores total), and all hosts in the hostfile have lines that
> look like "10.100.0.227
Hi
We are using openmpi version 1.2.8 (packaged with OFED-1.4). I am trying
to run HPL-2.0 (Linpack). We have two Intel quad-core CPUs in each of our
servers (8 cores total), and all hosts in the hostfile have lines that
look like "10.100.0.227 slots=8 max_slots=8".
Now when I use mpirun (even with
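For reference, a hostfile of the kind described can be sketched as follows (the second address, and the mpirun line in the comment, are illustrative assumptions, not from the original post):

```shell
# Write a two-node Open MPI hostfile: "slots" caps the ranks scheduled
# per node, "max_slots" the oversubscription limit (8-core nodes here).
# The second address is a placeholder.
cat > hostfile <<'EOF'
10.100.0.227 slots=8 max_slots=8
10.100.0.228 slots=8 max_slots=8
EOF

# A typical launch of HPL across both nodes would then look like:
#   mpirun -np 16 --hostfile hostfile ./xhpl
cat hostfile
```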
Hi,
Thank you so much. Well, the memory is enough. As I said, the jobs run
and the whole computation actually completes without complaining about
memory, but the processes do not terminate correctly. I first tried to
solve this using this algorithm:
1. all processes except root wait before MPI_Finalize
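The idea above can be sketched with a barrier, which is the usual way to make all ranks wait at the same point (this is my reading of step 1, not the poster's actual code; it needs an MPI installation to build with mpicxx and run under mpirun):

```cpp
// Sketch: every rank blocks at the barrier until all have arrived,
// so no process reaches MPI_Finalize before the others are done.
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // ... application work ...

    // All ranks (root included) synchronize here before shutting down.
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```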
On Tue, 2009-06-02 at 12:27 -0400, Jeff Squyres wrote:
> On Jun 2, 2009, at 11:37 AM, Allen Barnett wrote:
>
> > std::stringstream ss;
> > ss << "partitioner_program " << COMM_WORLD_SIZE;
> > system( ss.str().c_str() );
> >
>
> You'd probably see the same problem even if you strdup'ed the c_str()
> and system()'ed that.
On Jun 2, 2009, at 11:37 AM, Allen Barnett wrote:

> std::stringstream ss;
> ss << "partitioner_program " << COMM_WORLD_SIZE;
> system( ss.str().c_str() );

You'd probably see the same problem even if you strdup'ed the c_str()
and system()'ed that.

What kernel are you using? Does OMPI say that
On Tue, 2009-05-19 at 08:29 -0400, Jeff Squyres wrote:
> fork() support in OpenFabrics has always been dicey -- it can lead to
> random behavior like this. Supposedly it works in a specific set of
> circumstances, but I don't have a recent enough kernel on my machines
> to test.
>
> It's
This looks like you have two different versions of Open MPI installed
on your two machines (it's hard to tell with the name
"localhost.localdomain", though -- can you name your two computers
differently so that you can tell them apart in the output?).
You need to have the same version of
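A sketch of the version check implied here (the node names and the ssh/ompi_info lines in the comments are assumptions about a typical setup; the two values are hard-coded sample strings so the comparison itself is runnable):

```shell
# Compare the Open MPI version reported by each node; mismatched
# installs cause exactly the confusion described above.
# In practice each value would come from something like:
#   ssh nodeA ompi_info | grep "Open MPI:"
v1="Open MPI: 1.2.8"
v2="Open MPI: 1.2.8"

if [ "$v1" = "$v2" ]; then
    echo "versions match: $v1"
else
    echo "VERSION MISMATCH: '$v1' vs '$v2'" >&2
    exit 1
fi
```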
Joe,
You are correct, this is a ROCKS cluster. I didn't use the --sge option when
building (I tend to stay more generic, but I should have done that).
Not sure of the OFED release, but I don't admin this cluster, and the owners
are picky about upgrades (they tend to break Lustre).
BTW - the
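For reference, Grid Engine support is enabled at build time with a configure flag (`--with-sge`, per the Open MPI FAQ for the 1.3 series and later; the install prefix below is a placeholder):

```shell
./configure --prefix=/opt/openmpi --with-sge
make -j8 all install
```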
Hi,
I am just getting my feet wet with openmpi and can't seem to get it
running. I installed openmpi and all its components via yum and am able to
compile and run programs with MPI locally, but not across the two computers.
I set up key-based ssh on both machines and am able to log into
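When local runs work but cross-host runs fail, the usual first checks look something like this (hostnames are placeholders; `--host` is a standard mpirun option):

```shell
ssh nodeB true                              # passwordless, non-interactive ssh works
ssh nodeB which mpirun                      # Open MPI is on the remote non-interactive PATH
mpirun -np 2 --host nodeA,nodeB hostname    # simplest possible two-host job
```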