Re: [OMPI users] issue with addresses

2012-07-26 Thread Priyesh Srivastava
hello  Hristo

Thank you for taking a look at the program and the output.
The detailed explanation was very helpful. I also found out that the type
signature of a derived datatype is just the ordered sequence of its primitive
datatypes and is independent of the displacements, so the differences in the
relative addresses between the processes will not cause a problem.
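
For example, here is a minimal sketch of what the type-matching rule allows
(my own test, not part of the original program; free-form Fortran and the
mpif.h interface are assumed, as in the program quoted later in this thread).
Rank 0 builds the struct type from its own relative displacements, just like
the program in this thread, while rank 1 matches it with a plain receive of
12 MPI_INTEGERs; this works because only the signature, twelve integers, has
to agree:

      program signature_demo
        implicit none
        include 'mpif.h'
        integer :: a, b, c(10), buf(12), ierr, id, newtype, i
        integer :: blocklen(3), types(3), status(MPI_STATUS_SIZE)
        integer(kind=MPI_ADDRESS_KIND) :: disp(3), base

        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)

        if (id == 0) then
           a = 1000
           b = 2000
           do i = 1, 10
              c(i) = i
           end do
           ! displacements relative to a, computed from rank 0's own addresses
           call MPI_GET_ADDRESS(a, base, ierr)
           call MPI_GET_ADDRESS(a, disp(1), ierr)
           call MPI_GET_ADDRESS(b, disp(2), ierr)
           call MPI_GET_ADDRESS(c, disp(3), ierr)
           disp = disp - base
           blocklen = (/ 1, 1, 10 /)
           types    = (/ MPI_INTEGER, MPI_INTEGER, MPI_INTEGER /)
           call MPI_TYPE_CREATE_STRUCT(3, blocklen, disp, types, newtype, ierr)
           call MPI_TYPE_COMMIT(newtype, ierr)
           call MPI_SEND(a, 1, newtype, 1, 8, MPI_COMM_WORLD, ierr)
           call MPI_TYPE_FREE(newtype, ierr)
        else if (id == 1) then
           ! same signature (12 integers), no derived type needed on this side
           call MPI_RECV(buf, 12, MPI_INTEGER, 0, 8, MPI_COMM_WORLD, status, ierr)
           write(*,*) 'rank 1 received ', buf
        end if

        call MPI_FINALIZE(ierr)
      end program signature_demo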

thanks again  :)
priyesh


Re: [OMPI users] issue with addresses

2012-07-24 Thread Iliev, Hristo
Hi, Priyesh,

The output of your program is pretty much what one would expect.
140736841025492 is 0x7FFFD96A87D4, which corresponds to a location on the
stack; that is to be expected, as a and b are scalar variables and most
likely end up on the stack. As c is an array, its location is
compiler-dependent: some compilers put small arrays on the stack, while
others make them global or allocate them on the heap. In your case 0x6ABAD0
could be either somewhere in the BSS (where uninitialised global variables
reside) or in the heap, which starts right after the BSS (I would say it is
the BSS). If the array is placed in the BSS, its location is fixed with
respect to the image base.

By default Linux implements partial Address Space Layout Randomisation (ASLR)
by placing the program stack at a slightly different location on each run
(this is meant to make remote stack-based exploits harder). That is why you
see different addresses for variables on the stack. Things in the BSS,
however, will pretty much have the same addresses when the code is executed
multiple times, or on different machines with the same architecture and a
similar OS with similar settings, since executable images are still loaded at
the same base virtual address.
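
A tiny sketch (not part of the original mail; free-form Fortran and mpif.h
assumed) that makes this visible: run it twice and compare the output. The
address of the local scalar will typically change from run to run because of
the stack randomisation, while the SAVE'd array keeps the same address
because it lives in static storage (BSS). Whether the scalar really ends up
on the stack is, of course, compiler-dependent:

      program placement_check
        implicit none
        include 'mpif.h'
        integer :: ierr, id
        integer :: on_stack                 ! local scalar, usually on the stack
        integer, save :: in_bss(10)         ! SAVE'd array, static storage (BSS)
        integer(kind=MPI_ADDRESS_KIND) :: addr

        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)
        call MPI_GET_ADDRESS(on_stack, addr, ierr)
        write(*,*) 'rank', id, ': stack scalar at ', addr   ! changes between runs
        call MPI_GET_ADDRESS(in_bss, addr, ierr)
        write(*,*) 'rank', id, ': static array at ', addr   ! stays the same
        call MPI_FINALIZE(ierr)
      end program placement_check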

Having different addresses is not an issue for MPI, as it only operates with
pointers that are local to the process, as well as with relative offsets. You
pass MPI_Send or MPI_Recv the address of the data buffer in the current
process, and that has nothing to do with where those buffers are located in
the other processes. Note also that MPI supports heterogeneous computing,
e.g. the sending process might be 32-bit and the receiving one 64-bit. In
that scenario it is quite probable that the addresses will differ by a very
large margin (e.g. the stack addresses in your 64-bit output are not even
valid on a 32-bit system).

Hope that helps more :)

Kind regards,
Hristo

On 24.07.2012, at 02:02, Priyesh Srivastava wrote:

> hello  Hristo 
> 
> Thank you for your reply. I was able to understand some parts of your 
> response, but still had some doubts due to my lack of knowledge about the way 
> memory is allocated.
> 
> I have created a small sample program and the resulting output which will 
> help me  pin point my question.
> The program is : 
> 
>
> program test
>   include'mpif.h'
>   
>   integer a,b,c(10),ierr,id,datatype,size(3),type(3),i,status
>   
>   integer(kind=MPI_ADDRESS_KIND) add(3)
> 
> 
>   call MPI_INIT(ierr)
>   call MPI_COMM_RANK(MPI_COMM_WORLD,id,ierr)
>   call MPI_GET_ADDRESS(a,add(1),ierr)
>   write(*,*) 'address of a ,id ', add(1), id
>   call MPI_GET_ADDRESS(b,add(2),ierr)
>   write(*,*) 'address of b,id ', add(2), id 
>   call MPI_GET_ADDRESS(c,add(3),ierr)
>   write(*,*) 'address of c,id ', add(3), id
> 
>   add(3)=add(3)-add(1)
>   add(2)=add(2)-add(1)
>   add(1)=add(1)-add(1)
>   
>   size(1)=1
>   size(2)=1
>   size(3)=10
>   type(1)=MPI_INTEGER
>   type(2)=MPI_INTEGER
>   type(3)=MPI_INTEGER
>   call MPI_TYPE_CREATE_STRUCT(3,size,add,type,datatype,ierr)
>   call MPI_TYPE_COMMIT(datatype,ierr)
>   
>   write(*,*) 'datatype ,id', datatype , id
>   write(*,*) ' relative add1 ',add(1), 'id',id
>   write(*,*) ' relative add2 ',add(2), 'id',id
>   write(*,*) ' relative add3 ',add(3), 'id',id
>   if(id==0) then
>   a = 1000
>   b=2000
>   do i=1,10
>   c(i)=i
>   end do
>   c(10)=700
>   c(1)=600
>   end if
> 
> 
> if(id==0) then
>   call MPI_SEND(a,1,datatype,1,8,MPI_COMM_WORLD,ierr)
>   end if
> 
>   if(id==1) then
>   call MPI_RECV(a,1,datatype,0,8,MPI_COMM_WORLD,status,ierr)
>   write(*,*) 'id =',id
>   write(*,*) 'a=' , a
>   write(*,*) 'b=' , b
>   do i=1,10
>   write(*,*) 'c(',i,')=',c(i)
>   end do
>   end if
>   
>   call MPI_FINALIZE(ierr)
>   end
>
> 
>  
> the output is :
> 
> 
>  address of a ,id   140736841025492   0
>  address of b,id    140736841025496   0
>  address of c,id    6994640           0
>  datatype ,id       58                0
>   relative add1     0                 id   0
>   relative add2     4                 id   0
>   relative add3     -140736834030852  id   0
>  address of a ,id   140736078234324   1
>  address of b,id    140736078234328   1
>  address of c,id    6994640           1
>  datatype ,id       58                1
>   relative add1     0                 id   1
>   relative add2     4                 id   1
>   relative add3     -140736071239684  id   1
>  id =   1
>  a=1000
>  b=2000
>  c( 1 )= 600
>  c( 2 

Re: [OMPI users] issue with addresses

2012-07-23 Thread Priyesh Srivastava
hello  Hristo

Thank you for your reply. I was able to understand some parts of your
response, but still had some doubts due to my lack of knowledge about the
way memory is allocated.

I have created a small sample program and the resulting output which will
help me  pin point my question.
The program is :

   program test
     include 'mpif.h'

     integer a,b,c(10),ierr,id,datatype,size(3),type(3),i,status
     integer(kind=MPI_ADDRESS_KIND) add(3)

     call MPI_INIT(ierr)
     call MPI_COMM_RANK(MPI_COMM_WORLD,id,ierr)
     call MPI_GET_ADDRESS(a,add(1),ierr)
     write(*,*) 'address of a ,id ', add(1), id
     call MPI_GET_ADDRESS(b,add(2),ierr)
     write(*,*) 'address of b,id ', add(2), id
     call MPI_GET_ADDRESS(c,add(3),ierr)
     write(*,*) 'address of c,id ', add(3), id

     add(3)=add(3)-add(1)
     add(2)=add(2)-add(1)
     add(1)=add(1)-add(1)

     size(1)=1
     size(2)=1
     size(3)=10
     type(1)=MPI_INTEGER
     type(2)=MPI_INTEGER
     type(3)=MPI_INTEGER
     call MPI_TYPE_CREATE_STRUCT(3,size,add,type,datatype,ierr)
     call MPI_TYPE_COMMIT(datatype,ierr)

     write(*,*) 'datatype ,id', datatype , id
     write(*,*) ' relative add1 ',add(1), 'id',id
     write(*,*) ' relative add2 ',add(2), 'id',id
     write(*,*) ' relative add3 ',add(3), 'id',id

     if(id==0) then
        a = 1000
        b = 2000
        do i=1,10
           c(i)=i
        end do
        c(10)=700
        c(1)=600
     end if

     if(id==0) then
        call MPI_SEND(a,1,datatype,1,8,MPI_COMM_WORLD,ierr)
     end if

     if(id==1) then
        call MPI_RECV(a,1,datatype,0,8,MPI_COMM_WORLD,status,ierr)
        write(*,*) 'id =',id
        write(*,*) 'a=' , a
        write(*,*) 'b=' , b
        do i=1,10
           write(*,*) 'c(',i,')=',c(i)
        end do
     end if

     call MPI_FINALIZE(ierr)
   end

the output is:

 address of a ,id   140736841025492   0
 address of b,id    140736841025496   0
 address of c,id    6994640           0
 datatype ,id       58                0
  relative add1     0                 id   0
  relative add2     4                 id   0
  relative add3     -140736834030852  id   0
 address of a ,id   140736078234324   1
 address of b,id    140736078234328   1
 address of c,id    6994640           1
 datatype ,id       58                1
  relative add1     0                 id   1
  relative add2     4                 id   1
  relative add3     -140736071239684  id   1
 id =   1
 a=   1000
 b=   2000
 c( 1 )=   600
 c( 2 )=   2
 c( 3 )=   3
 c( 4 )=   4
 c( 5 )=   5
 c( 6 )=   6
 c( 7 )=   7
 c( 8 )=   8
 c( 9 )=   9
 c( 10 )=   700

As I had mentioned, the smaller address (of array c) is the same on both
processes, whereas the larger ones (of 'a' and 'b') are different. This is
explained by what you described.

So the relative address of array 'c' with respect to 'a' is different on the
two processes. The way I am passing the data should not work (specifically
the passing of array 'c'), yet everything is sent correctly from process 0 to
process 1. I have noticed that this way of sending non-contiguous data is
common, but I am confused about why it works.
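
For what it is worth, here is a minimal sketch of the form the MPI standard
describes for variables scattered across unrelated objects (my own variation,
not the original program; free-form Fortran and mpif.h assumed). Each rank
builds the struct type from its own absolute addresses and then sends or
receives relative to MPI_BOTTOM, so no displacement ever has to span from 'a'
to 'c':

      program bottom_demo
        implicit none
        include 'mpif.h'
        integer :: a, b, c(10), ierr, id, i, abstype
        integer :: blocklen(3), types(3), status(MPI_STATUS_SIZE)
        integer(kind=MPI_ADDRESS_KIND) :: disp(3)

        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)

        ! each rank records the absolute addresses of ITS OWN a, b and c, so
        ! the displacements are always valid locally even though they differ
        ! between the processes
        call MPI_GET_ADDRESS(a, disp(1), ierr)
        call MPI_GET_ADDRESS(b, disp(2), ierr)
        call MPI_GET_ADDRESS(c, disp(3), ierr)
        blocklen = (/ 1, 1, 10 /)
        types    = (/ MPI_INTEGER, MPI_INTEGER, MPI_INTEGER /)
        call MPI_TYPE_CREATE_STRUCT(3, blocklen, disp, types, abstype, ierr)
        call MPI_TYPE_COMMIT(abstype, ierr)

        if (id == 0) then
           a = 1000
           b = 2000
           do i = 1, 10
              c(i) = i
           end do
           call MPI_SEND(MPI_BOTTOM, 1, abstype, 1, 8, MPI_COMM_WORLD, ierr)
        else if (id == 1) then
           call MPI_RECV(MPI_BOTTOM, 1, abstype, 0, 8, MPI_COMM_WORLD, status, ierr)
           write(*,*) 'rank 1: a =', a, ' b =', b, ' c =', c
        end if

        call MPI_TYPE_FREE(abstype, ierr)
        call MPI_FINALIZE(ierr)
      end program bottom_demo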

thanks
priyesh

Re: [OMPI users] issue with addresses

2012-07-23 Thread Iliev, Hristo
Hello,

Placement of data in memory is highly implementation-dependent. I assume you
are running on Linux. Its libc (glibc) provides two different methods for
dynamic allocation of memory: heap allocation and anonymous mappings. Heap
allocation is used for small data up to MMAP_THRESHOLD bytes in length
(128 KiB by default, controllable by calls to mallopt(3)). Such allocations
end up at predictable memory addresses as long as all processes in your MPI
job allocate memory following exactly the same pattern. For larger blocks
malloc() uses private anonymous mappings, which might end up at different
locations in the virtual address space depending on how the address space is
being used.

What does this have to do with your Fortran code? Fortran runtimes use
malloc() behind the scenes to allocate automatic heap arrays as well as
ALLOCATABLE ones. Small arrays are usually allocated on the stack and will
mostly have the same addresses unless some form of stack placement
randomisation is in effect.
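
A short sketch (not part of the original mail; free-form Fortran, mpif.h and
glibc assumed) that shows the threshold in action: the small ALLOCATABLE
array is served from the heap and tends to get a reproducible address across
the ranks, while the large one goes through an anonymous mmap and is far less
predictable:

      program alloc_placement
        implicit none
        include 'mpif.h'
        integer :: ierr, id
        integer, allocatable :: small_arr(:), big_arr(:)
        integer(kind=MPI_ADDRESS_KIND) :: addr

        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)

        allocate(small_arr(1000))     ! ~4 KiB: below the mmap threshold, heap (brk)
        allocate(big_arr(1000000))    ! ~4 MiB: above the threshold, anonymous mmap

        call MPI_GET_ADDRESS(small_arr(1), addr, ierr)
        write(*,*) 'rank', id, ': small allocatable at ', addr
        call MPI_GET_ADDRESS(big_arr(1), addr, ierr)
        write(*,*) 'rank', id, ': big allocatable at   ', addr

        deallocate(small_arr, big_arr)
        call MPI_FINALIZE(ierr)
      end program alloc_placement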

Hope that helps.

Kind regards,
Hristo

> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Priyesh Srivastava
> Sent: Saturday, July 21, 2012 10:00 PM
> To: us...@open-mpi.org
> Subject: [OMPI users] issue with addresses
>
> Hello
>
> I am working on an MPI program. I have been printing the addresses of
> different variables and arrays using the MPI_GET_ADDRESS command. What I
> have noticed is that all the processors are giving the same address for a
> particular variable as long as the address is less than 2 GB. When the
> address of a variable/array is more than 2 GB, different processors are
> giving different addresses for the same variable. (I am working on a 64-bit
> system and am using the new MPI functions and MPI_ADDRESS_KIND integers for
> getting the addresses.)
>
> My question is: should all the processors give the same address for the
> same variables? If so, then why is this not happening for variables with
> larger addresses?
>
>
> thanks
> priyesh

--
Hristo Iliev, Ph.D. -- High Performance Computing
RWTH Aachen University, Center for Computing and Communication
Rechen- und Kommunikationszentrum der RWTH Aachen
Seffenter Weg 23,  D 52074  Aachen (Germany)
Tel: +49 241 80 24367 -- Fax/UMS: +49 241 80 624367




[OMPI users] issue with addresses

2012-07-21 Thread Priyesh Srivastava
Hello

I am working on an MPI program. I have been printing the addresses of
different variables and arrays using the MPI_GET_ADDRESS command. What I have
noticed is that all the processors are giving the same address for a
particular variable as long as the address is less than 2 GB. When the
address of a variable/array is more than 2 GB, different processors are
giving different addresses for the same variable. (I am working on a 64-bit
system and am using the new MPI functions and MPI_ADDRESS_KIND integers for
getting the addresses.)

My question is: should all the processors give the same address for the same
variables? If so, then why is this not happening for variables with larger
addresses?


thanks
priyesh