Re: [OMPI users] Problem in Open MPI (v1.8) Performance on 10G Ethernet

2014-04-16 Thread Muhammad Ansar Javed
Yes, I have tried NetPipe-Java and iperf for bandwidth and configuration tests. NetPipe-Java achieves a maximum of 9.40 Gbps, while iperf achieves a maximum of 9.61 Gbps. I have also tested my bandwidth program on a 1 Gbps Ethernet connection, where it achieves 901 Mbps. I am using the same

Re: [OMPI users] Problem in Open MPI (v1.8) Performance on 10G Ethernet

2014-04-16 Thread Ralph Castain
I apologize, but I am now confused. Let me see if I can translate: * you ran the non-MPI version of the NetPipe benchmark and got 9.5 Gbps on a 10 Gbps network * you ran iperf and got 9.61 Gbps - however, this has nothing to do with MPI; it just tests your TCP stack * you tested your bandwidth program
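The raw-TCP measurement Ralph refers to can be reproduced with iperf before involving MPI at all. A minimal sketch, assuming two nodes and a placeholder hostname `node1`:

```shell
# Server side (run on one node; 'node1' is a hypothetical hostname):
iperf -s

# Client side: 30-second TCP throughput test, reporting every 5 seconds.
# This exercises only the kernel TCP stack, not the MPI library.
iperf -c node1 -t 30 -i 5
```

Matching raw-TCP numbers here while MPI lags behind points the investigation at the MPI transport layer rather than the network.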

[OMPI users] FW: Performance issue of mpirun/mpi_init

2014-04-16 Thread Victor Vysotskiy
Hi, I can confirm that the issue has been fixed. Specifically, with the latest OpenMPI v1.8.1a1r31402 we now need 2.5 hrs to complete verification, and that timing is even slightly better than v1.6.5 (3 hrs). Thank you very much for your assistance! With best regards, Victor. >I

[OMPI users] probable bug in 1.9a1r31409

2014-04-16 Thread Åke Sandgren
Hi! Found this problem when building r31409 with Pathscale 5.0 pshmem_barrier.c:81:6: error: redeclaration of 'pshmem_barrier_all' must have the 'overloadable' attribute void shmem_barrier_all(void) ^ ../../../../oshmem/shmem/c/profile/defines.h:193:37: note: expanded from macro

Re: [OMPI users] Problem in Open MPI (v1.8) Performance on 10G Ethernet

2014-04-16 Thread Muhammad Ansar Javed
Hi Ralph, Yes, you are right. I should have also tested the NetPipe-MPI version earlier. I ran the NetPipe-MPI version on 10G Ethernet and the maximum bandwidth achieved is 5872 Mbps. Moreover, the maximum bandwidth achieved by the osu_bw test is 6080 Mbps. I used OSU-Micro-Benchmarks version 4.3. On Wed, Apr
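The osu_bw test mentioned here is a two-process point-to-point benchmark; a typical launch looks like the following (the hostfile name and binary path are assumptions, not from the thread):

```shell
# Launch the OSU point-to-point bandwidth test across two nodes.
# 'hosts' is a hypothetical hostfile listing the two machines;
# --map-by node places one rank on each so traffic crosses the wire.
mpirun -np 2 --hostfile hosts --map-by node ./osu_bw
```

Without `--map-by node` both ranks may land on one host and measure shared-memory bandwidth instead of the Ethernet link.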

Re: [OMPI users] Where is the error? (MPI program in fortran)

2014-04-16 Thread Oscar Mojica
What would the command line be to compile with the -g option? What debugger can I use? Thanks Sent from my iPad > On 15/04/2014, at 18:20, "Gus Correa" wrote: > > Or just compiling with -g or -traceback (depending on the compiler) will > give you more

Re: [OMPI users] FW: Performance issue of mpirun/mpi_init

2014-04-16 Thread Ralph Castain
Thanks Victor! Sorry for the problem, but appreciate you bringing it to our attention. Ralph On Wed, Apr 16, 2014 at 5:16 AM, Victor Vysotskiy < victor.vysots...@teokem.lu.se> wrote: > Hi, > > I just will confirm that the issue has been fixed. Specifically, with the > latest OpenMPI

Re: [OMPI users] probable bug in 1.9a1r31409

2014-04-16 Thread Åke Sandgren
On 04/16/2014 02:25 PM, Åke Sandgren wrote: Hi! Found this problem when building r31409 with Pathscale 5.0 pshmem_barrier.c:81:6: error: redeclaration of 'pshmem_barrier_all' must have the 'overloadable' attribute void shmem_barrier_all(void) ^

Re: [OMPI users] Where is the error? (MPI program in fortran)

2014-04-16 Thread Gus Correa
On 04/16/2014 08:30 AM, Oscar Mojica wrote: What would the command line be to compile with the -g option? What debugger can I use? Thanks Replace any optimization flags (-O2 or similar) with -g. Check if your compiler has the -traceback flag or similar (man compiler-name). The gdb debugger
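Gus's advice can be sketched as a concrete command sequence; the program name below is a placeholder:

```shell
# Compile the Fortran MPI program with debug symbols instead of -O2
mpif90 -g -o myprog myprog.f90

# Allow core dumps, then run; a segfault leaves a core file behind
ulimit -c unlimited
mpirun -np 4 ./myprog

# Load the core file into gdb; 'bt' at the gdb prompt prints a backtrace
gdb ./myprog core
```

With Intel compilers, adding `-traceback` makes the runtime print the failing source line even without a debugger attached.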

Re: [OMPI users] probable bug in 1.9a1r31409

2014-04-16 Thread Mike Dubman
+1 looks good. On Wed, Apr 16, 2014 at 4:35 PM, Åke Sandgren wrote: > On 04/16/2014 02:25 PM, Åke Sandgren wrote: > >> Hi! >> >> Found this problem when building r31409 with Pathscale 5.0 >> >> pshmem_barrier.c:81:6: error: redeclaration of 'pshmem_barrier_all' must

[OMPI users] problem with open mpi

2014-04-16 Thread flavienne sayou
Hello, I am Flavienne and I am a master's student. I wrote a script that has to back up sequential applications with BLCR and parallel applications with Open MPI. I created symbolic links to this script in the /etc/rc0.d and /etc/rc6.d folders in order for it to be executed at shutdown and reboot

Re: [OMPI users] Where is the error? (MPI program in fortran)

2014-04-16 Thread Gus Correa
Hi Oscar This is a long shot, but maybe worth trying. I am assuming you're using Linux, or some form of Unix, right? You may try to increase the stack size. The default in Linux is often too small for large programs. Sometimes this may cause a segmentation fault, even if the program is correct.
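Gus's stack-size suggestion amounts to the following shell commands (bash/sh builtin `ulimit`):

```shell
# Show the current stack limit (often 8192 KB by default on Linux)
ulimit -s

# Raise it for this shell and its children before launching the program;
# 'unlimited' may be refused if a lower hard limit is set, hence the fallback
ulimit -s unlimited 2>/dev/null || echo "could not raise the stack limit"
```

Note the change applies only to the current shell session; for MPI jobs the limit must be raised on every node the ranks run on.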

Re: [OMPI users] File locking in ADIO, OpenMPI 1.6.4

2014-04-16 Thread Sasso, John (GE Power & Water, Non-GE)
Dan, On the hosts where the ADIOI lock error occurs, are there any NFS errors in /var/log/messages, dmesg, or similar that refer to lockd? --john -Original Message- From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Daniel Milroy Sent: Tuesday, April 15, 2014 10:55 AM To:

Re: [OMPI users] Where is the error? (MPI program in fortran)

2014-04-16 Thread Oscar Mojica
Gus It is a single machine and I have installed Ubuntu 12.04 LTS. I left my computer at the college, but I will try to follow your advice when I can and tell you about it. Thanks Sent from my iPad > On 16/04/2014, at 14:17, "Gus Correa" wrote: > > Hi Oscar

Re: [OMPI users] probable bug in 1.9a1r31409

2014-04-16 Thread Mike Dubman
Hi, I committed your patch to the trunk. thanks M On Wed, Apr 16, 2014 at 6:49 PM, Mike Dubman wrote: > +1 > looks good. > > > On Wed, Apr 16, 2014 at 4:35 PM, Åke Sandgren > wrote: > >> On 04/16/2014 02:25 PM, Åke Sandgren wrote: >> >>> Hi!

[OMPI users] OpenMPI with Gemini Interconnect

2014-04-16 Thread Saliya Ekanayake
Hi, We have a Cray XE6/XK7 supercomputer (BigRed II) and I was trying to get the OpenMPI Java bindings working on it. I couldn't find a way to utilize its Gemini interconnect and instead was running over TCP, which is inefficient. I see some work has been done along these lines in [1] and wonder if you

Re: [OMPI users] OpenMPI with Gemini Interconnect

2014-04-16 Thread Ray Sheppard
Hello, Big Red 2 provides its own MPICH-based MPI. The only case where the provided OpenMPI module becomes relevant is when you create a CCMLogin instance in Cluster Compatibility Mode (CCM). For most practical uses, those sorts of needs are better addressed on the Quarry or Mason

Re: [OMPI users] OpenMPI with Gemini Interconnect

2014-04-16 Thread Saliya Ekanayake
I see. Also, I wanted to build OpenMPI because the provided OpenMPI didn't have the Java bindings. It seems at this point the only option is to use TCP in CCM on BigRed 2, and if I remember correctly Mason and Quarry don't have IB either, correct? Thank you, Saliya On Wed, Apr 16, 2014 at 5:01

Re: [OMPI users] OpenMPI with Gemini Interconnect

2014-04-16 Thread Nathan Hjelm
You do not need CCM to use Open MPI with Gemini and Aries. Open MPI has natively supported both networks since 1.7.0. Please take a look at the platform files in contrib/platform/lanl/cray_xe6 for CLE 4.1 support. You should be able to just build using: configure
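Nathan's truncated build suggestion presumably continues with a `--with-platform` configure line. A hedged sketch, where the exact file name under cray_xe6/ is an assumption (list the directory to pick the variant matching your site):

```shell
# Configure Open MPI with a LANL Cray XE6 platform file for CLE 4.1,
# enabling native Gemini (uGNI) support instead of TCP.
# The 'optimized-lustre' file name is assumed; check contrib/platform/lanl/cray_xe6/
./configure --with-platform=contrib/platform/lanl/cray_xe6/optimized-lustre
make -j 8 && make install
```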

Re: [OMPI users] OpenMPI with Gemini Interconnect

2014-04-16 Thread Saliya Ekanayake
Thank you Nathan, this is what I was looking for. I'll try to build OpenMPI 1.8 and get back to this thread if I run into issues. Saliya On Wed, Apr 16, 2014 at 5:19 PM, Nathan Hjelm wrote: > You do not need CCM to use Open MPI on with Gemini and Aries. Open MPI > has

Re: [OMPI users] OpenMPI with Gemini Interconnect

2014-04-16 Thread Ralph Castain
The Java bindings are written on top of the C bindings, so you'll be able to use those networks just fine from Java :-) On Wed, Apr 16, 2014 at 2:27 PM, Saliya Ekanayake wrote: > Thank you Nathan, this is what I was looking for. I'll try to build > OpenMPI 1.8 and get back

Re: [OMPI users] Problem in Open MPI (v1.8) Performance on 10G Ethernet

2014-04-16 Thread George Bosilca
Muhammad, Our configuration of TCP is tailored for 1 Gbps networks, so its performance on 10G might be sub-optimal. That being said, the rest of this email will be speculation, as I do not have access to a 10G system to test it. There are two things that I would test to see if I can improve
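George's message is truncated, but the TCP BTL exposes MCA parameters that are the usual candidates when its 1 Gbps defaults underperform on 10G. A hedged sketch; the interface name and buffer sizes below are illustrative assumptions, not George's actual suggestions:

```shell
# MCA knobs one might experiment with for the TCP BTL on a 10G link.
# eth2 is a placeholder for the 10G interface; 4 MB buffers are illustrative.
mpirun -np 2 --hostfile hosts \
  --mca btl tcp,self \
  --mca btl_tcp_if_include eth2 \
  --mca btl_tcp_sndbuf 4194304 \
  --mca btl_tcp_rcvbuf 4194304 \
  ./osu_bw
```

Restricting `btl_tcp_if_include` to the 10G interface also rules out traffic accidentally flowing over a slower management network.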

Re: [hwloc-users] problem with open mpi

2014-04-16 Thread Brice Goglin
Hello, This list is for hwloc users (hwloc is an Open MPI subproject). You likely want the Open MPI users list instead: us...@open-mpi.org Brice On 16/04/2014 18:44, flavienne sayou wrote: > Hello, > I am Flavienne and I am a master's student. > I wrote a script that has to back up sequential