Hello,
I am having trouble with the *** Assembler section of the GNU autoconf
step while trying to build Open MPI version 1.6 on an HP AlphaServer GS160
running Tru64 UNIX version 5.1B-6:
# uname -a
OSF1 zozma.cts.cwu.edu V5.1 2650 alpha
The output of the ./configure run is:
zozma(bash)% ./configure
Hello Jeff,
thank you very much for your help. You were right with your suggestion
that one of our system commands was responsible for the segmentation
fault. After splitting the command in config.status I found out that
gawk was responsible. We installed the latest version and now everything
Hello,
> Can you try the attached patch and tell me if you get sctp configured?
Yes, it works! Thank you very much for your help.
> > This looks like a missing check in the sctp configure.m4. I am
> > working on a patch.
> >
> > --td
> >
> > On 6/5/2012 10:10 AM, Siegmar Gross wrote:
> >>
Try: ps -elf | grep hello
This should list all the processes named hello.
The pid of the process is in that output (it should be the 4th column),
and you give your debugger that pid. For example, if the pid were 1234
you'd run "gdb -p 1234".
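Putting the steps above together, a small sketch (the process name "mpihello" and gdb's batch-mode flags are assumptions; adjust the pattern for your binary):

```shell
#!/bin/sh
# Grab the pid of the target process; in `ps -elf` output the PID is
# the 4th column. The bracketed pattern keeps the awk line itself
# from matching its own process entry.
pid=$(ps -elf | awk '/[m]pihello/ {print $4; exit}')

if [ -n "$pid" ]; then
    echo "attaching to pid $pid"
    # Attach non-interactively and dump a backtrace (assumes gdb is installed):
    gdb -p "$pid" -batch -ex "bt"
else
    echo "no mpihello process found"
fi
```

If the ranks really are stuck in MPI_Init, the backtrace from each node should show it near the top of the stack.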
Actually Jeff's suggestion of this being a firewall
Exxxcellent.
Good luck!
On Jun 7, 2012, at 3:43 AM, Duke wrote:
> On 6/7/12 5:32 PM, Jeff Squyres wrote:
>> Check to ensure that you have firewalls disabled between your two machines;
>> that's a common cause of hanging (i.e., Open MPI is trying to open
>> connections and/or send data
Another sanity thing to try is to see if you can run your test program on
just one of the nodes. If that works, then more than likely MPI is having
issues setting up connections between the nodes.
--td
On 6/7/12 5:31 PM, TERRY DONTJE wrote:
Can you get on one of the nodes and see the job's processes? If so
can you then attach a debugger to it and get a stack? I wonder if the
processes are stuck in MPI_Init?
Thanks Terry for your suggestion, but please let me know how I would do
it? I can
Check to ensure that you have firewalls disabled between your two machines;
that's a common cause of hanging (i.e., Open MPI is trying to open connections
and/or send data between your two nodes, and the packets are getting
black-holed at the other side).
Open MPI needs to be able to
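One quick way to test the black-holed-packets theory from the shell (a sketch; it assumes bash's /dev/tcp support and the coreutils timeout command, and the hostname and port below are placeholders taken from this thread):

```shell
#!/bin/bash
# check_port HOST PORT -- succeed if a TCP connection can be opened.
check_port() {
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example: probe the other node (hostname from this thread; port 22 is
# chosen only because sshd is usually listening). Open MPI itself uses
# dynamic ports, so also test a few ports in your firewall's open range.
if check_port hp430a 22; then
    echo "hp430a:22 reachable"
else
    echo "hp430a:22 blocked or unreachable -- check iptables on both nodes"
fi
```

A connection that hangs until the timeout, rather than being refused immediately, is the classic symptom of a firewall silently dropping packets.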
Hi again,
Somehow the verbose flag (-v) did not work for me. I tried
--debug-daemons and got:
[mpiuser@fantomfs40a ~]$ mpirun --debug-daemons -np 3 --machinefile
/home/mpiuser/.mpi_hostfile ./test/mpihello
Daemon was launched on hp430a - beginning to initialize
Daemon [[34432,0],1] checking
Hi Jingcha,
On 6/7/12 4:28 PM, Jingcha Joba wrote:
Hello Duke,
Welcome to the forum.
By default, Open MPI schedules ranks to fill all the slots on a
host before moving on to the next host.
Check this link for some info:
http://www.open-mpi.org/faq/?category=running#mpirun-scheduling
Thanks
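To make the FAQ's default concrete, here is a sketch of a two-host machinefile (hostnames are the two nodes from this thread; the slot counts are made up, so set them to your actual core counts) along with the 1.5-era flag that switches the mapping:

```shell
# Write a hypothetical machinefile; slots=2 is an assumed core count.
cat > my_hostfile <<'EOF'
fantomfs40a slots=2
hp430a slots=2
EOF

# With the default "by slot" policy, `mpirun -np 3` would place ranks
# 0 and 1 on fantomfs40a and rank 2 on hp430a. To round-robin ranks
# across hosts instead (Open MPI 1.5.x syntax):
#
#   mpirun --bynode -np 3 --machinefile my_hostfile ./test/mpihello
```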
On Thu, Jun 7, 2012 at 2:11 AM, Duke
Hi folks,
Please be gentle with the newest member of the Open MPI community; I am
totally new to this field. I just built a test cluster with 3 boxes on
Scientific Linux 6.2 and Open MPI 1.5.3, and I wanted to test how the
cluster works, but I can't figure out what was/is happening. On my master node,