Re: [OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-28 Thread Reuti
On 28.02.2013 at 19:50, Reuti wrote: > On 28.02.2013 at 19:21, Ralph Castain wrote: > >> >> On Feb 28, 2013, at 9:53 AM, Reuti wrote: >> >>> On 28.02.2013 at 17:54, Ralph Castain wrote: >>> Hmmm...the problem is that we are mapping procs using the

Re: [OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-28 Thread Reuti
On 28.02.2013 at 19:21, Ralph Castain wrote: > > On Feb 28, 2013, at 9:53 AM, Reuti wrote: > >> On 28.02.2013 at 17:54, Ralph Castain wrote: >> >>> Hmmm...the problem is that we are mapping procs using the provided slots >>> instead of dividing the slots by

Re: [OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-28 Thread Ralph Castain
On Feb 28, 2013, at 9:53 AM, Reuti wrote: > On 28.02.2013 at 17:54, Ralph Castain wrote: > >> Hmmm...the problem is that we are mapping procs using the provided slots >> instead of dividing the slots by cpus-per-proc. So we put too many on the >> first node, and

Re: [OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-28 Thread Reuti
On 28.02.2013 at 17:54, Ralph Castain wrote: > Hmmm...the problem is that we are mapping procs using the provided slots > instead of dividing the slots by cpus-per-proc. So we put too many on the > first node, and the backend daemon aborts the job because it lacks sufficient > processors for

Re: [OMPI users] Calling MPI_send MPI_recv from a fortran subroutine

2013-02-28 Thread Pradeep Jha
Oh! It works now. Thanks a lot, and sorry about my negligence. 2013/3/1 Ake Sandgren > On Fri, 2013-03-01 at 01:24 +0900, Pradeep Jha wrote: > > Sorry for those mistakes. I addressed all the three problems > > - I put "implicit none" at the top of main program > > - I

Re: [OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-28 Thread Ralph Castain
Hmmm...the problem is that we are mapping procs using the provided slots instead of dividing the slots by cpus-per-proc. So we put too many on the first node, and the backend daemon aborts the job because it lacks sufficient processors for cpus-per-proc=2. Given that there are no current
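For illustration, a minimal reproduction along these lines might look like the sketch below; the host names, slot counts, process count and binary name are assumptions for the example, not values from the original report. With four slots per host and -cpus-per-proc 2, the expected mapping is two processes per host (slots divided by cpus-per-proc), whereas the behaviour described above fills the raw slot count on the first node.

  # hypothetical machinefile: two hosts, four slots each
  $ cat machines
  node01 slots=4
  node02 slots=4

  # 8 slots / 2 cpus per proc = 4 ranks in total, expected as 2 per host
  $ mpirun -machinefile machines -np 4 -cpus-per-proc 2 --report-bindings ./mpi_app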

Re: [OMPI users] High cpu usage

2013-02-28 Thread Jingcha Joba
Hi, First, I don't see any CPU utilization here, only %time (of a function relative to the others in a process/application). Generally, there can be many reasons for high CPU utilization. Two that come to my mind are: 1. It depends on the network stack, e.g. the "tcp" way will use more CPU than the
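As a rough way to compare transports for such a measurement, the BTL can be pinned explicitly on the mpirun command line. This is only a sketch: the component names depend on how Open MPI was built, and the process count and binary name are placeholders.

  # force the TCP BTL (plus the loopback and shared-memory components)
  $ mpirun --mca btl tcp,self,sm -np 4 ./mpi_app

  # on an InfiniBand cluster built with openib support, compare against
  $ mpirun --mca btl openib,self,sm -np 4 ./mpi_app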

Re: [OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-28 Thread Reuti
On 28.02.2013 at 17:29, Ralph Castain wrote: > > On Feb 28, 2013, at 6:17 AM, Reuti wrote: > >> On 28.02.2013 at 08:58, Reuti wrote: >> >>> On 28.02.2013 at 06:55, Ralph Castain wrote: >>> I don't off-hand see a problem, though I do note that your

Re: [OMPI users] Calling MPI_send MPI_recv from a fortran subroutine

2013-02-28 Thread Ake Sandgren
On Fri, 2013-03-01 at 01:24 +0900, Pradeep Jha wrote: > Sorry for those mistakes. I addressed all the three problems > - I put "implicit none" at the top of main program > - I initialized tag. > - changed MPI_INT to MPI_INTEGER > - "send_length" should be just "send", it was a typo. > > > But

Re: [OMPI users] Calling MPI_send MPI_recv from a fortran subroutine

2013-02-28 Thread Ralph Castain
I don't see tag being set to any value. On Feb 28, 2013, at 8:24 AM, Pradeep Jha wrote: > Sorry for those mistakes. I addressed all the three problems > - I put "implicit none" at the top of main program > - I initialized tag. > - changed MPI_INT to

Re: [OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-28 Thread Ralph Castain
On Feb 28, 2013, at 6:17 AM, Reuti wrote: > On 28.02.2013 at 08:58, Reuti wrote: > >> On 28.02.2013 at 06:55, Ralph Castain wrote: >> >>> I don't off-hand see a problem, though I do note that your "working" >>> version incorrectly reports the universe size as 2!

Re: [OMPI users] Calling MPI_send MPI_recv from a fortran subroutine

2013-02-28 Thread Pradeep Jha
Sorry for those mistakes. I addressed all three problems: - I put "implicit none" at the top of the main program - I initialized tag - I changed MPI_INT to MPI_INTEGER - "send_length" should be just "send", it was a typo. But the code is still hanging in sendrecv. The present form is below:
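The listing itself is cut off in this archive preview, so the following is only an illustrative sketch, not Pradeep's actual code: a send/receive pair inside a subroutine with the fixes listed above applied (initialized tag, MPI_INTEGER, implicit none). Only the subroutine name sendrecv comes from the thread; the variable names and values are placeholders.

c     illustrative sketch: MPI calls inside a subroutine; run with at
c     least two ranks, e.g. mpirun -np 2 ./a.out
      program main
      implicit none
      include 'mpif.h'
      integer ierr
      call MPI_INIT(ierr)
      call sendrecv
      call MPI_FINALIZE(ierr)
      end

      subroutine sendrecv
      implicit none
      include 'mpif.h'
      integer ierr, rank, tag, sendbuf, recvbuf
      integer status(MPI_STATUS_SIZE)
      tag = 1
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      if (rank .eq. 0) then
         sendbuf = 42
         call MPI_SEND(sendbuf, 1, MPI_INTEGER, 1, tag,
     &                 MPI_COMM_WORLD, ierr)
      else if (rank .eq. 1) then
         call MPI_RECV(recvbuf, 1, MPI_INTEGER, 0, tag,
     &                 MPI_COMM_WORLD, status, ierr)
         write(*,*) 'rank 1 received ', recvbuf
      end if
      end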

Re: [OMPI users] Calling MPI_send MPI_recv from a fortran subroutine

2013-02-28 Thread Jeff Squyres (jsquyres)
On Feb 28, 2013, at 9:59 AM, Pradeep Jha wrote: > Is it possible to call the MPI_send and MPI_recv commands inside a subroutine > and not the main program? Yes. > I have written a minimal program for what I am trying to do. It is compiling > fine but it is

[OMPI users] High cpu usage

2013-02-28 Thread Bokassa
Hi, I notice that a simple MPI program in which rank 0 sends 4 bytes to each rank and receives a reply uses a considerable amount of CPU in system calls. % time seconds usecs/call calls errors syscall ------ ----------- ----------- --------- --------- ---------------- 61.10
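A per-syscall summary of that form is typically produced by strace in counting mode; a sketch of attaching it to one of the ranks (the PID is a placeholder):

  # attach to one MPI rank, follow its threads, and count syscalls
  $ strace -c -f -p <pid-of-one-rank>
  # interrupt with Ctrl-C after a while to print the summary table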

Re: [OMPI users] MPI_Abort under slurm

2013-02-28 Thread Bokassa
Thanks Ralph, you were right: I was not aware of --kill-on-bad-exit and KillOnBadExit; setting it to 1 shuts down the entire MPI job when MPI_Abort() is called. I was thinking this MPI protocol message was just transported by slurm and that each task would then exit. Oh well, I should not guess the
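For reference, the two settings mentioned above are applied roughly as follows; this is a sketch, the task count and binary name are placeholders, and the exact syntax may differ between Slurm versions.

  # cluster-wide default, in slurm.conf
  KillOnBadExit=1

  # or per job, on the srun command line
  $ srun --kill-on-bad-exit=1 -n 16 ./mpi_app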

[OMPI users] Calling MPI_send MPI_recv from a fortran subroutine

2013-02-28 Thread Pradeep Jha
Is it possible to call the MPI_send and MPI_recv commands inside a subroutine and not the main program? I have written a minimal program for what I am trying to do. It compiles fine but it is not working: the program just hangs in the "sendrecv" subroutine. Any ideas how I can do it? main.f

Re: [OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-28 Thread Reuti
On 28.02.2013 at 08:58, Reuti wrote: > On 28.02.2013 at 06:55, Ralph Castain wrote: > >> I don't off-hand see a problem, though I do note that your "working" version >> incorrectly reports the universe size as 2! > > Yes, it was 2 in the case where it worked, when only two hostnames

Re: [OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-28 Thread Reuti
On 28.02.2013 at 06:55, Ralph Castain wrote: > I don't off-hand see a problem, though I do note that your "working" version > incorrectly reports the universe size as 2! Yes, it was 2 in the case where it worked, when only two hostnames were given without any dedicated slot count. What should it

Re: [OMPI users] Option -cpus-per-proc 2 not working with given machinefile?

2013-02-28 Thread Ralph Castain
I don't off-hand see a problem, though I do note that your "working" version incorrectly reports the universe size as 2! I'll have to take a look at this and get back to you on it. On Feb 27, 2013, at 3:15 PM, Reuti wrote: > Hi, > > I have an issue using the