Re: [OMPI users] OpenMPI on Windows when MPI_F77 is used from a C application

2012-10-29 Thread Damien
Is there a series of error messages or anything at all that you can post here? Damien On 29/10/2012 2:30 PM, Mathieu Gontier wrote: Hi guys. Finally, I compiled with /O: the option is deprecated and, like I did previously, I used /Od instead... unsuccessfully. I also compiled my code from ...

Re: [OMPI users] OpenMPI on Windows when MPI_F77 is used from a C application

2012-10-29 Thread Mathieu Gontier
Hi guys. Finally, I compiled with /O: the option is deprecated and, like I did previously, I used /Od instead... unsuccessfully. I also compiled my code from a script in order to call mpicc.exe / mpiCC.exe / mpif77.exe instead of directly calling cl.exe and ifort.exe. Only the linkage is done wi...
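For context, a minimal sketch of the wrapper-driven build the message describes, assuming the Windows build of Open MPI where the compiler wrappers forward flags to cl.exe and ifort.exe; file names here are hypothetical:

    rem compile the C sources and the Fortran routine through the Open MPI wrappers
    mpicc.exe /c main.c
    mpif77.exe /c solver.f
    rem link through a wrapper so the MPI import libraries are added automatically
    mpicc.exe main.obj solver.obj /Fetest_app.exe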

Re: [OMPI users] OpenMPI on Windows when MPI_F77 is used from a C application

2012-10-29 Thread Mathieu Gontier
It crashes in the Fortran routine calling an MPI function. When I run the debugger, the crash seems to be in libmpi_f77.lib, but I cannot go further since the lib is not built in debug mode. Attached to this email are the files of my small case. But with less aggressive options, it works. I did not know ...
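A hedged sketch of rebuilding the reproducer with optimisation off and debug information, so the debugger can at least show symbols on the application side (same hypothetical file names as above):

    rem /Od disables optimisation, /Zi emits debug info for cl.exe and ifort.exe
    mpicc.exe /Od /Zi /c main.c
    mpif77.exe /Od /Zi /c solver.f
    mpicc.exe /Zi main.obj solver.obj /Fetest_app.exe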

[OMPI users] Performance/stability impact of thread support

2012-10-29 Thread Daniel Mitchell
Hi everyone, I've asked my Linux distribution to repackage Open MPI with thread support (meaning configure with --enable-thread-multiple). They are willing to do this if it won't have any performance/stability hit for Open MPI users who don't need thread support (meaning everyone but me, appare...
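For reference, a sketch of the build in question; the install prefix is illustrative, and the flag is spelled --enable-mpi-thread-multiple in some Open MPI releases:

    ./configure --prefix=/opt/openmpi --enable-thread-multiple
    make all install
    # check what thread support the installed build actually reports
    /opt/openmpi/bin/ompi_info | grep -i thread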

Re: [OMPI users] OpenMPI on Windows when MPI_F77 is used from a C application

2012-10-29 Thread Rayson Ho
Mathieu, Can you include the small C program you wrote? Rayson == Open Grid Scheduler - The Official Open Source Grid Engine http://gridscheduler.sourceforge.net/ On Mon, Oct 29, 2012 at 12:08 PM, Damien wrote: > Mathieu, Where is the crash ...
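As an illustration only (not Mathieu's actual program), the kind of minimal reproducer being asked for usually looks like this:

    /* minimal MPI smoke test: init, report rank, finalize */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("rank %d of %d alive\n", rank, size);
        MPI_Finalize();
        return 0;
    }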

Re: [OMPI users] Tip for HPC cluster admins

2012-10-29 Thread John Hearns
Jeff, this is very good advice. I have had many, many hours of deep joy getting to know the OOM killer and all of his wily ways. Respect the OOM killer! On the cluster I manage, the OOM killer is working; however, there is a strict policy that if the OOM killer kicks in on a cluster node, it is excluded f...
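For admins following along, a hedged sketch of spotting and taming OOM kills on a node; log locations vary by distribution, and oom_score_adj needs a reasonably recent kernel:

    # look for evidence of OOM kills
    dmesg | grep -i "out of memory"
    # shield a critical process from the OOM killer (-1000 disables it for that pid)
    echo -1000 > /proc/<pid>/oom_score_adj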

Re: [OMPI users] OpenMPI on Windows when MPI_F77 is used from a C application

2012-10-29 Thread Damien
Mathieu, Where is the crash? Without that info, I'd suggest turning off all the optimisations, compiling without any flags other than what you need for a clean build (so no /O flags), and seeing if it still crashes. Damien On 26/10/2012 10:27 AM, Mathieu Gontier wrote: Dear all, I am w...

Re: [OMPI users] openmpi shared memory feature

2012-10-29 Thread Jeff Squyres
On Oct 29, 2012, at 11:01 AM, Ralph Castain wrote: > Wow, that would make no sense at all. If P1 and P2 are on the same node, then we will use shared memory to do the transfer, as Jeff described. However, if you disable shared memory, as you indicated you were doing in a previous message ...
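A short sketch of the BTL selections being discussed, assuming a stock Open MPI install (binary name hypothetical):

    # default: on-node peers use the shared-memory (sm) BTL
    mpirun -np 2 ./a.out
    # exclude sm, so on-node traffic falls back to another transport
    mpirun -np 2 --mca btl ^sm ./a.out
    # or name the transports explicitly (tcp between peers, self for loopback)
    mpirun -np 2 --mca btl tcp,self ./a.out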

Re: [OMPI users] open mpi 1.6 with intel compilers

2012-10-29 Thread Ralph Castain
I would also suspect there is some optimization occurring in the HP test case, either via the compiler or tuning, as that much speed difference isn't something commonly observed. On Oct 29, 2012, at 7:54 AM, Reuti wrote: > On 29.10.2012 at 14:49, Giuseppe P. wrote: >> Thank you very much guys ...

Re: [OMPI users] openmpi shared memory feature

2012-10-29 Thread Ralph Castain
On Oct 29, 2012, at 7:33 AM, Mahmood Naderan wrote: > Thanks again for your answer. The reason why I had a negative view of the shared memory feature was that we were debugging the system (our program, openmpi, cluster settings, ...) for nearly a week. To avoid any confusion, I will use "node" ...

Re: [OMPI users] open mpi 1.6 with intel compilers

2012-10-29 Thread Reuti
On 29.10.2012 at 14:49, Giuseppe P. wrote: > Thank you very much guys. Now a more serious issue: > I am using MPI with LAMMPS (a molecular dynamics package) on a single rack-mounted Dell PowerEdge R810 server > (4 eight-core processors, 128 GB RAM). > I am now potentially interested in buy...

Re: [OMPI users] openmpi shared memory feature

2012-10-29 Thread Mahmood Naderan
Thanks again for your answer. The reason why I had a negative view of the shared memory feature was that we were debugging the system (our program, openmpi, cluster settings, ...) for nearly a week. To avoid any confusion, I will use "node". Here we have: 1- Node 'A' which has some physical disks 3...

Re: [OMPI users] open mpi 1.6 with intel compilers

2012-10-29 Thread Giuseppe P.
Thank you very much guys. Now a more serious issue: I am using MPI with LAMMPS (a molecular dynamics package) on a single rack-mounted Dell PowerEdge R810 server (4 eight-core processors, 128 GB RAM). I am now potentially interested in buying the Intel MPI 4.1 libraries, and I am trying them by e...
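A hedged example of pinning ranks on such a 32-core box with Open MPI 1.6; the LAMMPS binary and input names are hypothetical:

    # one rank per core, bound, with the binding printed for verification
    mpirun -np 32 --bind-to-core --report-bindings lmp_openmpi -in in.lj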

Re: [OMPI users] openmpi shared memory feature

2012-10-29 Thread Jeff Squyres
Your original question stuck in my brain over the weekend, and I *think* you may have been asking a different question than I originally answered. Even though you say we answered your question, I'm going to post my ruminations here anyway. :-) You might have been asking about how a shared mem...

Re: [OMPI users] System CPU of openmpi-1.7rc1

2012-10-29 Thread tmishima
Hi, I use InfiniBand (--mca btl openib,self). I know that a waiting allreduce might cause high CPU consumption; I intended to create such a situation to check system CPU usage when some processes are kept waiting. I'm afraid that it might affect execution speed. Indeed, my application (MUMPS base...
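One knob relevant here, sketched under the assumption of Open MPI 1.6/1.7 defaults: busy-polling can be traded for lower CPU usage at some latency cost (binary name hypothetical):

    # ask idle/waiting processes to yield the CPU instead of spinning
    mpirun -np 16 --mca btl openib,self --mca mpi_yield_when_idle 1 ./solver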

Re: [OMPI users] System CPU of openmpi-1.7rc1

2012-10-29 Thread Ralph Castain
Not sure why they would be different, though there are changes to the code, of course. We would have to dig deep to find out why; perhaps one of the BTL developers will chime in here. Which transport are you using (InfiniBand, TCP, ...)? As for why the CPU gets consumed, it's that allreduce that is ...