Re: [OMPI users] Calling a variable from another processor

2014-01-17 Thread Pradeep Jha
Thanks a ton Christoph. That helps a lot.
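(The attached example itself is not preserved in the archive. A minimal
sketch of what it might look like, fetching A(Z) from rank Y with a
fence-synchronized MPI_Get, assuming 4-byte default integers and demo
values for Y and Z:)

      program one_sided_example

      implicit none
      include 'mpif.h'

      integer, parameter :: s = 10
      integer, dimension(s) :: a
      integer :: result, me, np, ierror, win, y, z
      integer(kind=mpi_address_kind) :: winsize, disp

      call mpi_init(ierror)
      call mpi_comm_rank(mpi_comm_world, me, ierror)
      call mpi_comm_size(mpi_comm_world, np, ierror)

!     every rank fills its own copy of a
      a = me*100

!     expose a in a window so other ranks can read it
!     (4 = bytes per default integer, an assumption of this sketch)
      winsize = s*4
      call mpi_win_create(a, winsize, 4, mpi_info_null,
     &                    mpi_comm_world, win, ierror)

!     demo values for y and z; in the real code they come from the
!     user's function
      y = mod(me+1, np)
      z = 3

      call mpi_win_fence(0, win, ierror)
!     fetch a(z) from rank y; window displacements are zero-based
      disp = z - 1
      call mpi_get(result, 1, mpi_integer, y, disp, 1,
     &             mpi_integer, win, ierror)
      call mpi_win_fence(0, win, ierror)

      print *, 'rank', me, ': a(', z, ') on rank', y, ' = ', result

      call mpi_win_free(win, ierror)
      call mpi_finalize(ierror)

      end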


2014/1/17 Christoph Niethammer <nietham...@hlrs.de>

> Hello,
>
> Find attached a minimal example - hopefully doing what you intended.
>
> Regards
> Christoph
>
> --
>
> Christoph Niethammer
> High Performance Computing Center Stuttgart (HLRS)
> Nobelstrasse 19
> 70569 Stuttgart
>
> Tel: ++49(0)711-685-87203
> email: nietham...@hlrs.de
> http://www.hlrs.de/people/niethammer
>
>
>
> - Original Message -
> From: "Pradeep Jha" <prad...@ccs.engg.nagoya-u.ac.jp>
> To: "Open MPI Users" <us...@open-mpi.org>
> Sent: Friday, January 10, 2014 10:23:40
> Subject: Re: [OMPI users] Calling a variable from another processor
>
>
>
> Thanks for your responses. I am still not able to figure it out. I will
> further simplify my problem statement. Can someone please help me with
> Fortran 90 code for this?
>
>
> 1) I have N processors, each with an array A of size S.
> 2) On any random processor (say rank X), I calculate the two integer
> values, Y and Z (0 <= Y < N, 1 <= Z <= S).
> 3) On processor X, I want to get the value of A(Z) on processor Y.
>
> This operation will happen in parallel on each processor. Can anyone
> please help me with this?
>
>
>
>
>
>
>
> 2014/1/9 Jeff Hammond < jeff.scie...@gmail.com >
>
>
> One sided is quite simple to understand. It is like file I/O. You
> read/write (get/put) to a memory object. If you want to make it hard to
> screw up, use passive target and wrap your calls in lock/unlock so every
> operation is globally visible where it's called.
>
> I've never deadlocked RMA, while p2p is easy to hang for nontrivial
> patterns unless you only do nonblocking plus waitall.
>
> If one finds MPI too hard to learn, there are both GA/ARMCI and OpenSHMEM
> implementations over MPI-3 already (I wrote both...).
>
> The bigger issue is that OpenMPI doesn't support MPI-3 RMA, just the MPI-2
> RMA stuff, and even then, datatypes are broken with RMA. Both ARMCI-MPI3
> and OSHMPI (OpenSHMEM over MPI-3) require a late-model MPICH-derivative to
> work, but these are readily available on every platform normal people use
> (BGQ is the only system missing, and that will be resolved soon). I've run
> MPI-3 on my Mac (MPICH), clusters (MVAPICH), Cray (CrayMPI), and SGI
> (MPICH).
>
> Best,
>
> Jeff
>
> Sent from my iPhone
>
>
>
> > On Jan 9, 2014, at 5:39 AM, "Jeff Squyres (jsquyres)" <jsquy...@cisco.com> wrote:
> >
> > MPI one-sided stuff is actually pretty complicated; I wouldn't suggest
> it for a beginner (I don't even recommend it for many MPI experts ;-) ).
> >
> > Why not look at the MPI_SOURCE in the status that you got back from the
> > MPI_RECV? In Fortran, it would look something like (typed off the top of
> > my head; forgive typos):
> >
> > -
> > integer, dimension(MPI_STATUS_SIZE) :: status
> > ...
> > call MPI_Recv(buffer, ..., status, ierr)
> > -
> >
> > The rank of the sender will be in status(MPI_SOURCE).
> >
> >
> >> On Jan 9, 2014, at 6:29 AM, Christoph Niethammer <nietham...@hlrs.de> wrote:
> >>
> >> Hello,
> >>
> >> I suggest you have a look at the MPI one-sided functionality (Section
> >> 11 of the MPI Spec 3.0).
> >> Create a window to allow the other processes to access the arrays A
> >> directly via MPI_Get/MPI_Put.
> >> Be aware of synchronization, which you have to implement via
> >> MPI_Win_fence or manual locking.
> >>
> >> Regards
> >> Christoph
> >>
> >> --
> >>
> >> Christoph Niethammer
> >> High Performance Computing Center Stuttgart (HLRS)
> >> Nobelstrasse 19
> >> 70569 Stuttgart
> >>
> >> Tel: ++49(0)711-685-87203
> >> email: nietham...@hlrs.de
> >> http://www.hlrs.de/people/niethammer
> >>
> >>
> >>
> >> - Original Message -
> >> From: "Pradeep Jha" <prad...@ccs.engg.nagoya-u.ac.jp>
> >> To: "Open MPI Users" <us...@open-mpi.org>
> >> Sent: Thursday, January 9, 2014 12:10:51
> >> Subject: [OMPI users] Calling a variable from another processor
> >>
> >>
> >>
> >>
> >>
> >> I am writing a parallel program in Fortran77. I have the following
> >> problem:
> >> 1) I have N number of processors.
> >> 2) Each processor contains an array A of size S.
> >> 3) Using some function, on every processor (say rank X), I calculate
> >> the value of two integers Y and Z, where Z < S (Y and Z are different
> >> on every processor).
> >> 4) I want to get the value of A(Z) on processor Y to processor X.

Re: [OMPI users] Calling a variable from another processor

2014-01-10 Thread Pradeep Jha
Thanks for your responses. I am still not able to figure it out. I will
further simplify my problem statement. Can someone please help me with
Fortran 90 code for this?

1) I have N processors, each with an array A of size S.
2) On any random processor (say rank X), I calculate the two integer
values, Y and Z (0 <= Y < N, 1 <= Z <= S).
3) On processor X, I want to get the value of A(Z) on processor Y.

This operation will happen in parallel on each processor.
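(A sketch of the passive-target pattern described in the reply quoted
below, assuming a window win already created over A as in the full
example earlier in this archive, 4-byte integers, and y, z, result,
ierror declared as integers:)

      integer(kind=mpi_address_kind) :: disp

!     lock rank y's window, read a(z) out of it, unlock;
!     after the unlock, "result" is valid
      disp = z - 1
      call mpi_win_lock(mpi_lock_shared, y, 0, win, ierror)
      call mpi_get(result, 1, mpi_integer, y, disp, 1,
     &             mpi_integer, win, ierror)
      call mpi_win_unlock(y, win, ierror)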

2014/1/9 Jeff Hammond <jeff.scie...@gmail.com>

> One sided is quite simple to understand. It is like file I/O. You
> read/write (get/put) to a memory object. If you want to make it hard to
> screw up, use passive target and wrap your calls in lock/unlock so every
> operation is globally visible where it's called.
>
> I've never deadlocked RMA, while p2p is easy to hang for nontrivial
> patterns unless you only do nonblocking plus waitall.
>
> If one finds MPI too hard to learn, there are both GA/ARMCI and OpenSHMEM
> implementations over MPI-3 already (I wrote both...).
>
> The bigger issue is that OpenMPI doesn't support MPI-3 RMA, just the MPI-2
> RMA stuff, and even then, datatypes are broken with RMA. Both ARMCI-MPI3
> and OSHMPI (OpenSHMEM over MPI-3) require a late-model MPICH-derivative to
> work, but these are readily available on every platform normal people use
> (BGQ is the only system missing, and that will be resolved soon). I've run
> MPI-3 on my Mac (MPICH), clusters (MVAPICH), Cray (CrayMPI), and SGI
> (MPICH).
>
> Best,
>
> Jeff
>
> Sent from my iPhone
>
> > On Jan 9, 2014, at 5:39 AM, "Jeff Squyres (jsquyres)" <jsquy...@cisco.com> wrote:
> >
> > MPI one-sided stuff is actually pretty complicated; I wouldn't suggest
> it for a beginner (I don't even recommend it for many MPI experts ;-) ).
> >
> > Why not look at the MPI_SOURCE in the status that you got back from the
> > MPI_RECV? In Fortran, it would look something like (typed off the top of
> > my head; forgive typos):
> >
> > -
> > integer, dimension(MPI_STATUS_SIZE) :: status
> > ...
> > call MPI_Recv(buffer, ..., status, ierr)
> > -
> >
> > The rank of the sender will be in status(MPI_SOURCE).
> >
> >
> >> On Jan 9, 2014, at 6:29 AM, Christoph Niethammer <nietham...@hlrs.de> wrote:
> >>
> >> Hello,
> >>
> >> I suggest you have a look at the MPI one-sided functionality (Section
> >> 11 of the MPI Spec 3.0).
> >> Create a window to allow the other processes to access the arrays A
> >> directly via MPI_Get/MPI_Put.
> >> Be aware of synchronization, which you have to implement via
> >> MPI_Win_fence or manual locking.
> >>
> >> Regards
> >> Christoph
> >>
> >> --
> >>
> >> Christoph Niethammer
> >> High Performance Computing Center Stuttgart (HLRS)
> >> Nobelstrasse 19
> >> 70569 Stuttgart
> >>
> >> Tel: ++49(0)711-685-87203
> >> email: nietham...@hlrs.de
> >> http://www.hlrs.de/people/niethammer
> >>
> >>
> >>
> >> - Original Message -
> >> From: "Pradeep Jha" <prad...@ccs.engg.nagoya-u.ac.jp>
> >> To: "Open MPI Users" <us...@open-mpi.org>
> >> Sent: Thursday, January 9, 2014 12:10:51
> >> Subject: [OMPI users] Calling a variable from another processor
> >>
> >>
> >>
> >>
> >>
> >> I am writing a parallel program in Fortran77. I have the following
> >> problem:
> >> 1) I have N number of processors.
> >> 2) Each processor contains an array A of size S.
> >> 3) Using some function, on every processor (say rank X), I calculate
> >> the value of two integers Y and Z, where Z < S (Y and Z are different
> >> on every processor).
> >> 4) I want to get the value of A(Z) on processor Y to processor X.
> >>
> >> I thought of first sending the numerical value X to processor Y from
> >> processor X and then sending A(Z) from processor Y to processor X. But
> >> it is not possible, as processor Y does not know the value X and so it
> >> won't know which processor to receive it from.
> >>
> >> I tried but I haven't been able to come up with any code which can
> implement this action. So I am not posting any codes.
> >>
> >> Any suggestions?
> >>
> >
> >
> > --
> > Jeff Squyres
> > jsquy...@cisco.com
> > For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
> >
>


[OMPI users] Calling a variable from another processor

2014-01-09 Thread Pradeep Jha
I am writing a parallel program in Fortran77. I have the following problem:

1) I have N number of processors.
2) Each processor contains an array A of size S.
3) Using some function, on every processor (say rank X), I calculate
the value of two integers Y and Z, where Z < S (Y and Z are different on
every processor).
4) I want to get the value of A(Z) on processor Y to processor X.

Re: [OMPI users] mpirun error

2013-04-10 Thread Pradeep Jha
Hello,

thanks for the responses. But I have no idea how to do that. Which
environment variables should I look at? How do I find out where Open MPI
is installed and make mpif90 use it?

Thanks,
Pradeep
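(One way to check, sketched for the record: Open MPI's wrapper compilers
support a --showme option that prints the real underlying command, and
the PATH order decides which mpirun you get. Output and paths vary by
install.)

--
which mpirun mpif90    # shows which binaries are first in your PATH
mpif90 --showme        # Open MPI wrappers print the underlying compile line
echo $PATH             # the Open MPI bin directory must come before
                       # Intel's .../mpirt/bin directory
--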


2013/4/2 Elken, Tom <tom.el...@intel.com>

> > The Intel Fortran 2013 compiler comes with support for Intel's MPI
> > runtime and you are getting that instead of OpenMPI. You need to fix
> > your path for all the shells you use.
> [Tom]
> Agree with Michael, but thought I would note something additional.
> If you are using OFED's mpi-selector to select Open MPI, it will set up
> the path to Open MPI before a startup script like .bashrc gets processed.
> So if you source the Intel Compiler's compilervars.sh, you will get
> Intel's mpirt in your path before Open MPI's bin directory.
>
> One workaround is to source the following _after_ you source the Intel
> Compiler's compilervars.sh in your start-up scripts:
> . /var/mpi-selector/data/openmpi_...sh
>
> -Tom
>
> >
> > On Apr 1, 2013, at 5:12 AM, Pradeep Jha wrote:
> >
> > > /opt/intel/composer_xe_2013.1.117/mpirt/bin/intel64/mpirun: line 96:
> > /opt/intel/composer_xe_2013.1.117/mpirt/bin/intel64/mpivars.sh: No such
> file
> > or directory
> >
> >
>


[OMPI users] mpirun error

2013-04-01 Thread Pradeep Jha
Hello,

When I try to run a parallel code, which runs perfectly elsewhere, on a
new Linux machine using the following command:

--
mpirun -np 16 name_of_executable
--

I am getting the following error:

--
/opt/intel/composer_xe_2013.1.117/mpirt/bin/intel64/mpirun: line 96:
/opt/intel/composer_xe_2013.1.117/mpirt/bin/intel64/mpivars.sh: No such
file or directory
--

This machine already has Open MPI installed in it. Any ideas what the
problem could be?

Thanks,
Pradeep


Re: [OMPI users] Calling MPI_send MPI_recv from a fortran subroutine

2013-02-28 Thread Pradeep Jha
oh! it works now. Thanks a lot and sorry about my negligence.
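(For the record, the working version amounts to giving tag a defined
value and declaring ierror; a sketch of the fixed subroutine, with tag=1
as an arbitrary choice:)

      subroutine sendrecv(me, np)

      implicit none
      include 'mpif.h'

      integer np, me, sender, tag, ierror
      integer, dimension(mpi_status_size) :: status
      integer, dimension(1) :: recv, send

!     tag must hold the same, defined value on sender and receiver
      tag = 1

      if (me.eq.0) then
         do sender = 1, np-1
            call mpi_recv(recv, 1, mpi_integer, sender, tag,
     &                    mpi_comm_world, status, ierror)
         end do
      end if

      if ((me.ge.1).and.(me.lt.np)) then
         send(1) = me*12
         call mpi_send(send, 1, mpi_integer, 0, tag,
     &                 mpi_comm_world, ierror)
      end if

      return
      end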


2013/3/1 Ake Sandgren <ake.sandg...@hpc2n.umu.se>

> On Fri, 2013-03-01 at 01:24 +0900, Pradeep Jha wrote:
> > Sorry for those mistakes. I addressed all the three problems
> > - I put "implicit none" at the top of main program
> > - I initialized tag.
> > - changed MPI_INT to MPI_INTEGER
> > - "send_length" should be just "send", it was a typo.
> >
> >
> > But the code is still hanging in sendrecv. The present form is below:
> >
>
> "tag" isn't iniitalized to anything so it may very well be totally
> different in all the processes.
> ALWAYS initialize variables before using them.
>
> > main.f
> >
> >
> >   program   main
> >
> >   implicit none
> >
> >   include  'mpif.h'
> >
> >   integer me, np, ierror
> >
> >   call  MPI_init( ierror )
> >   call  MPI_comm_rank( mpi_comm_world, me, ierror )
> >   call  MPI_comm_size( mpi_comm_world, np, ierror )
> >
> >   call sendrecv(me, np)
> >
> >   call mpi_finalize( ierror )
> >
> >   stop
> >   end
> >
> > sendrecv.f
> >
> >
> >   subroutine sendrecv(me, np)
> >
> >   include 'mpif.h'
> >
> >   integer np, me, sender, tag
> >   integer, dimension(mpi_status_size) :: status
> >
> >   integer, dimension(1) :: recv, send
> >
> >   if (me.eq.0) then
> >
> >  do sender = 1, np-1
> > call mpi_recv(recv, 1, mpi_integer, sender, tag,
> >  &   mpi_comm_world, status, ierror)
> >
> >  end do
> >   end if
> >
> >   if ((me.ge.1).and.(me.lt.np)) then
> >  send(1) = me*12
> >
> >  call mpi_send(send, 1, mpi_integer, 0, tag,
> >  &mpi_comm_world, ierror)
> >   end if
> >
> >   return
> >   end
>
>
>
>


Re: [OMPI users] Calling MPI_send MPI_recv from a fortran subroutine

2013-02-28 Thread Pradeep Jha
Sorry for those mistakes. I addressed all the three problems
- I put "implicit none" at the top of main program
- I initialized tag.
- changed MPI_INT to MPI_INTEGER
- "send_length" should be just "send", it was a typo.

But the code is still hanging in sendrecv. The present form is below:

 main.f

  program   main

  implicit none

  include  'mpif.h'

  integer me, np, ierror

  call  MPI_init( ierror )
  call  MPI_comm_rank( mpi_comm_world, me, ierror )
  call  MPI_comm_size( mpi_comm_world, np, ierror )

  call sendrecv(me, np)

  call mpi_finalize( ierror )

  stop
  end

sendrecv.f

  subroutine sendrecv(me, np)

  include 'mpif.h'

  integer np, me, sender, tag
  integer, dimension(mpi_status_size) :: status

  integer, dimension(1) :: recv, send

  if (me.eq.0) then

 do sender = 1, np-1
call mpi_recv(recv, 1, mpi_integer, sender, tag,
 &   mpi_comm_world, status, ierror)

 end do
  end if

  if ((me.ge.1).and.(me.lt.np)) then
 send(1) = me*12

 call mpi_send(send, 1, mpi_integer, 0, tag,
 &mpi_comm_world, ierror)
  end if

  return
  end



2013/3/1 Jeff Squyres (jsquyres) <jsquy...@cisco.com>

> On Feb 28, 2013, at 9:59 AM, Pradeep Jha <prad...@ccs.engg.nagoya-u.ac.jp>
> wrote:
>
> > Is it possible to call the MPI_send and MPI_recv commands inside a
> subroutine and not the main program?
>
> Yes.
>
> > I have written a minimal program for what I am trying to do. It is
> compiling fine but it is not working. The program just hangs in the
> "sendrecv" subroutine. Any ideas how can I do it?
>
> You seem to have several errors in the sendrecv subroutine.  I would
> strongly encourage you to use "implicit none" to avoid many of these
> errors.  Here's a few errors I see offhand:
>
> - tag is not initialized
> - what's send_length(1)?
> - use MPI_INTEGER, not MPI_INT (MPI_INT = C int, MPI_INTEGER = Fortran
> INTEGER)
>
>
> > main.f
> >
> >
> >   program   main
> >
> >   include  'mpif.h'
> >
> >   integer me, np, ierror
> >
> >   call  MPI_init( ierror )
> >   call  MPI_comm_rank( mpi_comm_world, me, ierror )
> >   call  MPI_comm_size( mpi_comm_world, np, ierror )
> >
> >   call sendrecv(me, np)
> >
> >   call mpi_finalize( ierror )
> >
> >   stop
> >   end
> >
> > sendrecv.f
> >
> >
> >   subroutine sendrecv(me, np)
> >
> >   include 'mpif.h'
> >
> >   integer np, me, sender
> >   integer, dimension(mpi_status_size) :: status
> >
> >   integer, dimension(1) :: recv, send
> >
> >   if (me.eq.0) then
> >
> >  do sender = 1, np-1
> > call mpi_recv(recv, 1, mpi_int, sender, tag,
> >  &   mpi_comm_world, status, ierror)
> >
> >  end do
> >   end if
> >
> >   if ((me.ge.1).and.(me.lt.np)) then
> >  send_length(1) = me*12
> >
> >  call mpi_send(send, 1, mpi_int, 0, tag,
> >  &mpi_comm_world, ierror)
> >   end if
> >
> >   return
> >   end
> >
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
>


[OMPI users] Calling MPI_send MPI_recv from a fortran subroutine

2013-02-28 Thread Pradeep Jha
Is it possible to call the MPI_send and MPI_recv commands inside a
subroutine and not the main program? I have written a minimal program for
what I am trying to do. It is compiling fine but it is not working. The
program just hangs in the "sendrecv" subroutine. Any ideas how can I do it?

main.f

  program   main

  include  'mpif.h'

  integer me, np, ierror

  call  MPI_init( ierror )
  call  MPI_comm_rank( mpi_comm_world, me, ierror )
  call  MPI_comm_size( mpi_comm_world, np, ierror )

  call sendrecv(me, np)

  call mpi_finalize( ierror )

  stop
  end

sendrecv.f

  subroutine sendrecv(me, np)

  include 'mpif.h'

  integer np, me, sender
  integer, dimension(mpi_status_size) :: status

  integer, dimension(1) :: recv, send

  if (me.eq.0) then

 do sender = 1, np-1
call mpi_recv(recv, 1, mpi_int, sender, tag,
 &   mpi_comm_world, status, ierror)

 end do
  end if

  if ((me.ge.1).and.(me.lt.np)) then
 send_length(1) = me*12

 call mpi_send(send, 1, mpi_int, 0, tag,
 &mpi_comm_world, ierror)
  end if

  return
  end


Re: [OMPI users] MPI send recv confusion

2013-02-21 Thread Pradeep Jha
2013/2/21 Gus Correa 

> two types are the same size,
> but I wonder if somehow the two type names are interchangeable
> in OpenMPI (I would guess they're not),
> although declared
>

Hello,

No, I didn't have to change that. They both work fine for me.

Pradeep


Re: [OMPI users] MPI send recv confusion

2013-02-18 Thread Pradeep Jha
That was careless of me. Thanks for pointing it out. Declaring "status",
"ierr" and putting "implicit none" solved the problem.

Thanks again.
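(A sketch of the program with those fixes applied:)

      program mpi_test

      implicit none
      include 'mpif.h'

      integer, dimension(3) :: recv, send
      integer, dimension(mpi_status_size) :: status
      integer :: sender, np, rank, ierror

      call mpi_init(ierror)
      call mpi_comm_rank(mpi_comm_world, rank, ierror)
      call mpi_comm_size(mpi_comm_world, np, ierror)

!     rank 0 collects one message from every other rank
      if (rank.eq.0) then
         do sender = 1, np-1
            call mpi_recv(recv, 3, mpi_integer, sender, 1,
     &                    mpi_comm_world, status, ierror)
            print *, "Data received from ", sender
         end do
      else
         send(1) = 3
         send(2) = 4
         send(3) = 4
         call mpi_send(send, 3, mpi_integer, 0, 1,
     &                 mpi_comm_world, ierror)
      end if

      call mpi_finalize(ierror)

      end program mpi_test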


2013/2/19 Jeff Squyres (jsquyres) <jsquy...@cisco.com>

> +1.  The problem is that you didn't declare status or ierr.  Since you
> didn't declare status, you're buffer overflowing, and random Bad Things
> happen from there.
>
> You should *always* use "implicit none" to catch these kinds of errors.
>
>
> On Feb 18, 2013, at 2:02 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:
>
> > Hi Pradeep
> >
> > For what it is worth, in the MPI Fortran bindings/calls the
> > datatype to use is "MPI_INTEGER", not "mpi_int" (which you used;
> > MPI_INT is in the MPI C bindings):
> >
> > http://linux.die.net/man/3/mpi_integer
> >
> > Also, just to prevent variables to inadvertently come with
> > the wrong type, you could add:
> >
> > implicit none
> >
> > to the top of your code.
> > You already have a non-declared "ierr" in "call mpi_send".
> > (You declared "ierror" as an integer, but not "ierr".)
> > Although this one may not cause any harm;
> > names starting with "i" are integers by default, in old Fortran.
> >
> > I hope this helps,
> > Gus Correa
> >
> >
> > On 02/18/2013 01:26 PM, jody wrote:
> >> Hi Pradeep
> >>
> >> I am not sure if this is the reason, but usually it is a bad idea to
> >> force an order of receives (such as you do in your receive loop -
> >> first from sender 1, then from sender 2, then from sender 3).
> >> Unless you enforce it, there is no guarantee the sends are
> >> performed in this order.
> >>
> >> It is better if you accept messages from all senders (MPI_ANY_SOURCE)
> >> instead of particular ranks and then check where the
> >> message came from by examining the status fields
> >> (http://www.mpi-forum.org/docs/mpi22-report/node47.htm)
> >>
> >> Hope this helps
> >>   Jody
> >>
> >>
> >> On Mon, Feb 18, 2013 at 5:06 PM, Pradeep Jha
> >> <prad...@ccs.engg.nagoya-u.ac.jp>  wrote:
> >>> I have attached a sample of the MPI program I am trying to write. When
> I run
> >>> this program using "mpirun -np 4 a.out", my output is:
> >>>
> >>>  Sender:1
> >>>  Data received from1
> >>>  Sender:2
> >>>  Data received from1
> >>>  Sender:2
> >>>
> >>> And the run hangs there. I don't understand why the "sender"
> >>> variable changes its value after MPI_recv. Any ideas?
> >>>
> >>> Thank you,
> >>>
> >>> Pradeep
> >>>
> >>>
> >>>  program mpi_test
> >>>
> >>>   include  'mpif.h'
> >>>
> >>> !( Initialize variables )
> >>>   integer, dimension(3) :: recv, send
> >>>
> >>>   integer :: sender, np, rank, ierror
> >>>
> >>>   call  mpi_init( ierror )
> >>>   call  mpi_comm_rank( mpi_comm_world, rank, ierror )
> >>>   call  mpi_comm_size( mpi_comm_world, np, ierror )
> >>>
> >>> !( Main program )
> >>>
> >>> ! receive the data from the other processors
> >>>   if (rank.eq.0) then
> >>>  do sender = 1, np-1
> >>> print *, "Sender: ", sender
> >>> call mpi_recv(recv, 3, mpi_int, sender, 1,
> >>>  &mpi_comm_world, status, ierror)
> >>> print *, "Data received from ",sender
> >>>  end do
> >>>   end if
> >>>
> >>> !   send the data to the main processor
> >>>   if (rank.ne.0) then
> >>>  send(1) = 3
> >>>  send(2) = 4
> >>>  send(3) = 4
> >>>  call mpi_send(send, 3, mpi_int, 0, 1, mpi_comm_world, ierr)
> >>>   end if
> >>>
> >>>
> >>> !( clean up )
> >>>   call mpi_finalize(ierror)
> >>>
> >>>   return
> >>>   end program mpi_test
> >>>
> >>>
> >
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
>


[OMPI users] MPI send recv confusion

2013-02-18 Thread Pradeep Jha
I have attached a sample of the MPI program I am trying to write. When I
run this program using "mpirun -np 4 a.out", my output is:

 Sender:1
 Data received from1
 Sender:2
 Data received from1
 Sender:2

And the run hangs there. I don't understand why the "sender" variable
changes its value after MPI_recv. Any ideas?

Thank you,

Pradeep


 program mpi_test

  include  'mpif.h'

!( Initialize variables )
  integer, dimension(3) :: recv, send

  integer :: sender, np, rank, ierror

  call  mpi_init( ierror )
  call  mpi_comm_rank( mpi_comm_world, rank, ierror )
  call  mpi_comm_size( mpi_comm_world, np, ierror )

!( Main program )

! receive the data from the other processors
  if (rank.eq.0) then
 do sender = 1, np-1
print *, "Sender: ", sender
call mpi_recv(recv, 3, mpi_int, sender, 1,
 &   mpi_comm_world, status, ierror)
print *, "Data received from ",sender
 end do
  end if

!   send the data to the main processor
  if (rank.ne.0) then
 send(1) = 3
 send(2) = 4
 send(3) = 4
 call mpi_send(send, 3, mpi_int, 0, 1, mpi_comm_world, ierr)
  end if


!( clean up )
  call mpi_finalize(ierror)

  return
  end program mpi_test
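(Following jody's suggestion in the reply above, rank 0 can also accept
the np-1 messages in whatever order they arrive and read the sender's
rank from the status object; a sketch, assuming the declarations from
the corrected program plus an integer loop counter i:)

  if (rank.eq.0) then
     do i = 1, np-1
        call mpi_recv(recv, 3, mpi_integer, mpi_any_source, 1,
 &                    mpi_comm_world, status, ierror)
!       the actual sender is recorded in the status object
        print *, "Data received from ", status(mpi_source)
     end do
  end if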


Re: [OMPI users] Basic question about MPI

2013-01-29 Thread Pradeep Jha
Thank you for your response. That makes it clear.

A related question: when I run a general program on a machine, say an
Internet browser or a media player to watch a movie, by clicking on the
icon of the avi file in a folder (nothing from the terminal), how many
cores does it use? In that case too, does it just run on one core?

Generally, how is the workload divided over the cores of a computer? Does
every process that I start use a new core, or is the workload distributed
over all the available cores?

Thank you


2013/1/29 Jens Glaser <jgla...@umn.edu>

> Hi Pradeep,
>
> On Jan 28, 2013, at 11:16 PM, Pradeep Jha <pradeep.kumar@gmail.com>
> wrote:
>
> I have a very basic question about MPI.
>
> I have a computer with 8 processors (each with 8 cores).  What is the
> difference between if I run a program simply by "./program" and "mpirun -np
> 8 /path/to/program" ? In the first case does the program just use one
> processor out of the 8? If I want the program to use all the 8 processors
> at the same time, then I have to do with mpirun?
>
> If you run the application as "./program", it will most likely use only
> one core on one processor, i.e. 1/64 of your machine, if the latter really
> has eight CPUs with 8 cores each, as you write. I have not heard of such
> machines, but you may be right. There is an exception: if your
> program uses multi-threading (OpenMP etc.), then it could use more than one
> core even if you start it without mpirun.
>
> However, if you do start it with mpirun, a number "np" of processes is
> launched on different cores. Provided your node really has 8 physical CPUs
> with 8 cores each and you want your program to utilize all your 64 cores,
> you should start it with -np 64.
>
> Jens
>
>


[OMPI users] Basic question about MPI

2013-01-29 Thread Pradeep Jha
Hello, 

I have a very basic question about MPI. 

I have a computer with 8 processors (each with 8 cores).  What is the 
difference between if I run a program simply by "./program" and "mpirun -np 8 
/path/to/program" ? In the first case does the program just use one processor 
out of the 8? If I want the program to use all the 8 processors at the same 
time, then I have to do with mpirun? 

Something fundamental is bugging me.

Thank you, 
Pradeep