Re: [OMPI users] issue with addresses

2012-07-26 Thread Priyesh Srivastava
hello  Hristo

Thank you for taking a look at the program and the output.
The detailed explanation was very helpful. I also found out that the
signature of a derived datatype is just the sequence of its primitive
datatypes and is independent of the displacements, so the differing
addresses across processes will not cause a problem.
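
For example, something like the following (an untested sketch, needs at
least two ranks) is legal even though the displacements computed from
MPI_GET_ADDRESS differ between sender and receiver, because both sides use
the same signature (MPI_INTEGER, MPI_DOUBLE_PRECISION):

   program signature_demo
      include 'mpif.h'
      integer ierr, id, newtype, blocklen(2), types(2)
      integer n
      double precision x
      integer(kind=MPI_ADDRESS_KIND) base, disp(2)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)

      ! displacements are taken relative to n, so only local,
      ! per-process offsets enter the datatype
      call MPI_GET_ADDRESS(n, base, ierr)
      call MPI_GET_ADDRESS(n, disp(1), ierr)
      call MPI_GET_ADDRESS(x, disp(2), ierr)
      disp(1) = disp(1) - base
      disp(2) = disp(2) - base

      blocklen(1) = 1
      blocklen(2) = 1
      types(1) = MPI_INTEGER
      types(2) = MPI_DOUBLE_PRECISION
      ! same signature (INTEGER, DOUBLE PRECISION) on every rank,
      ! even if disp(2) differs from rank to rank
      call MPI_TYPE_CREATE_STRUCT(2, blocklen, disp, types, newtype, ierr)
      call MPI_TYPE_COMMIT(newtype, ierr)

      if (id == 0) then
         n = 42
         x = 3.14d0
         call MPI_SEND(n, 1, newtype, 1, 0, MPI_COMM_WORLD, ierr)
      else if (id == 1) then
         call MPI_RECV(n, 1, newtype, 0, 0, MPI_COMM_WORLD, &
                       MPI_STATUS_IGNORE, ierr)
         write(*,*) 'rank 1 received', n, x
      end if

      call MPI_TYPE_FREE(newtype, ierr)
      call MPI_FINALIZE(ierr)
   end program signature_demo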

thanks again  :)
priyesh

On Wed, Jul 25, 2012 at 12:00 PM,  wrote:

> Send users mailing list submissions to
> us...@open-mpi.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> or, via email, send a message with subject or body 'help' to
> users-requ...@open-mpi.org
>
> You can reach the person managing the list at
> users-ow...@open-mpi.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of users digest..."
>
>
> Today's Topics:
>
>1. Re: issue with addresses (Iliev, Hristo)
>2. Re: Extent of Distributed Array Type? (George Bosilca)
>3. Re: Extent of Distributed Array Type? (Jeff Squyres)
>4. Re: Extent of Distributed Array Type? (Richard Shaw)
>5. Mpi_leave_pinned=1 is thread safe? (tmish...@jcity.maeda.co.jp)
>6. Re: Fortran90 Bindings (Kumar, Sudhir)
>7. Re: Fortran90 Bindings (Damien)
>
>
> --
>
> Message: 1
> Date: Tue, 24 Jul 2012 17:10:33 +
> From: "Iliev, Hristo" 
> Subject: Re: [OMPI users] issue with addresses
> To: Open MPI Users 
> Message-ID: <18d6fe2f-7a68-4d1a-94fe-c14058ba4...@rz.rwth-aachen.de>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi, Priyesh,
>
> The output of your program is pretty much what one would expect.
> 140736841025492 is 0x7FFFD96A87D4, which corresponds to a location on the
> stack; that is to be expected, as a and b are scalar variables and most
> likely end up on the stack. As c is an array, its location is
> compiler-dependent. Some compilers put small arrays on the stack while
> others make them global or allocate them on the heap. In your case 0x6ABAD0
> could either be somewhere in the BSS (where uninitialised global variables
> reside) or in the heap, which starts right after the BSS (I would say it is
> the BSS). If the array is placed in the BSS, its location is fixed with
> respect to the image base.
>
> Linux by default implements partial Address Space Layout Randomisation
> (ASLR) by placing the program stack at a slightly different location on
> each run (this is to make remote stack-based exploits harder). That's why
> you see different addresses for variables on the stack. But things in the
> BSS would pretty much have the same addresses when the code is executed
> multiple times or on different machines with the same architecture and a
> similar OS with similar settings, since executable images are still loaded
> at the same base virtual address.
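>
> For example, a quick (untested) way to see this is to compare a local
> variable with one that has the SAVE attribute, which typically ends up in
> static storage (the BSS):
>
>    program aslr_demo
>       include 'mpif.h'
>       integer ierr, id
>       integer stackvar            ! like a and b above: typically on the stack
>       integer, save :: staticvar  ! SAVE => static storage, typically the BSS
>       integer(kind=MPI_ADDRESS_KIND) a1, a2
>
>       call MPI_INIT(ierr)
>       call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)
>       call MPI_GET_ADDRESS(stackvar, a1, ierr)
>       call MPI_GET_ADDRESS(staticvar, a2, ierr)
>       ! across runs a1 tends to move with ASLR while a2 stays put
>       write(*,*) 'rank', id, 'stack var at', a1, 'static var at', a2
>       call MPI_FINALIZE(ierr)
>    end program aslr_demo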
>
> Having different addresses is not an issue for MPI as it only operates
> with pointers which are local to the process as well as with relative
> offsets. You pass the MPI_Send or MPI_Recv function the address of the data
> buffer in the current process and it has nothing to do with where those
> buffers are located in the other processes. Note also that MPI supports
> heterogeneous computing, e.g. the sending process might be 32-bit and the
> receiving one 64-bit. In this scenario it is quite probable that the
> addresses will differ by a very large margin (e.g. the stack address from
> your 64-bit output is not even valid on a 32-bit system).
>
> Hope that helps more :)
>
> Kind regards,
> Hristo
>
> On 24.07.2012, at 02:02, Priyesh Srivastava wrote:
>
> > hello  Hristo
> >
> > Thank you for your reply. I was able to understand some parts of your
> response, but still had some doubts due to my lack of knowledge about the
> way memory is allocated.
> >
> > I have created a small sample program and the resulting output which
> > will help me pinpoint my question.
> > The program is :
> >
> >
> > program test
> >   include'mpif.h'
> >
> >   integer a,b,c(10),ierr,id,datatype,size(3),type(3),i,status
> >
> >   integer(kind=MPI_ADDRESS_KIND) add(3)
> >
> >
> >   call MPI_INIT(ierr)
> >   call MPI_COMM_RANK(MPI_COMM_WORLD,id,ierr)
> >   call MPI_GET_ADDRESS(a,add(1),ierr)
> >   write(*,*) 'address of a ,id ', add(1), id
> >   call MPI_GET_ADDRESS(b,add(2),ierr)
> >   write(*,*) 'address of b,id ', add(2), id
> >   call MPI_GET_ADDRESS(c,add(3),ierr)

Re: [OMPI users] issue with addresses

2012-07-24 Thread Iliev, Hristo
> > CHEN Song
> > R&D Department
> > National Supercomputer Center in Tianjin
> > Binhai New Area, Tianjin, China
> > ___
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 
> 
> 
> --
> 
> Message: 4
> Date: Mon, 23 Jul 2012 12:26:24 +0200
> From: Paul Kapinos 
> Subject: Re: [OMPI users] Re :Re:  OpenMP and OpenMPI Issue
> To: Open MPI Users 
> Message-ID: <500d26d0.4070...@rz.rwth-aachen.de>
> Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
> 
> Jack,
> note that support for THREAD_MULTIPLE is available in [newer] versions of
> Open MPI, but disabled by default. You have to enable it at configure time; in 1.6:
> 
>--enable-mpi-thread-multiple
>Enable MPI_THREAD_MULTIPLE support (default:
>disabled)
> 
> You may check the available threading support level by using the attached
> program.
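>
> A minimal check along these lines (an untested sketch, not necessarily the
> attached file) just reports what MPI_INIT_THREAD grants:
>
>    program thread_check
>       include 'mpif.h'
>       integer ierr, provided
>       ! ask for the highest level and print what the library provides
>       call MPI_INIT_THREAD(MPI_THREAD_MULTIPLE, provided, ierr)
>       write(*,*) 'MPI_THREAD_MULTIPLE is', MPI_THREAD_MULTIPLE
>       write(*,*) 'provided level is    ', provided
>       call MPI_FINALIZE(ierr)
>    end program thread_check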
> 
> 
> On 07/20/12 19:33, Jack Galloway wrote:
> > This is an old thread, and I'm curious if there is support now for this?  I 
> > have
> > a large code that I'm running, a hybrid MPI/OpenMP code, that is having 
> > trouble
> > over our infiniband network.  I'm running a fairly large problem (uses about
> > 18GB), and part way in, I get the following errors:
> 
> You say "big footprint"? I hear a bell ringing...
> http://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem
> 
> 
> 
> 
> 
> 
> 
> 
> --
> Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
> RWTH Aachen University, Center for Computing and Communication
> Seffenter Weg 23,  D 52074  Aachen (Germany)
> Tel: +49 241/80-24915
> -- next part --
> A non-text attachment was scrubbed...
> Name: mpi_threading_support.f
> Type: text/x-fortran
> Size: 411 bytes
> Desc: not available
> URL: 
> <http://www.open-mpi.org/MailArchives/users/attachments/20120723/1f30ae61/attachment.bin>
> 
> --
> 
> Message: 5
> Date: Mon, 23 Jul 2012 11:18:32 +
> From: "Iliev, Hristo" 
> Subject: Re: [OMPI users] issue with addresses
> To: Open MPI Users 
> Message-ID:
>     
> 
> 
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Hello,
> 
> Placement of data in memory is highly implementation dependent. I assume you
> are running on Linux. This OS' libc (glibc) provides two different methods
> for dynamic allocation of memory: heap allocation and anonymous mappings.
> Heap allocation is used for small data up to MMAP_THRESHOLD bytes in length
> (128 KiB by default, controllable by calls to "mallopt(3)"). Such
> allocations end up at predictable memory addresses as long as all processes
> in your MPI job allocate memory following exactly the same pattern. For
> larger memory blocks malloc() uses private anonymous mappings, which might
> end up at different locations in the virtual address space depending on how
> it is being used.
>
> What does this have to do with your Fortran code? Fortran runtimes use
> malloc() behind the scenes to allocate automatic heap arrays as well as
> ALLOCATABLE ones. Small arrays are usually allocated on the stack and will
> mostly have the same addresses unless some stack placement randomisation is
> in effect.
> 
> Hope that helps.
> 
> Kind regards,
> Hristo
> 
> > From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> > Behalf Of Priyesh Srivastava
> > Sent: Saturday, July 21, 2012 10:00 PM
> > To: us...@open-mpi.org
> > Subject: [OMPI users] issue with addresses
> >
> > Hello
> >
> > I am working on an MPI program. I have been printing the addresses of
> > different variables and arrays using the MPI_GET_ADDRESS command. What I
> > have noticed is that all the processors are giving the same address for a
> > particular variable as long as the address is less than 2 GB. When the
> > address of a variable/array is more than 2 GB, different processors are
> > giving different addresses for the same variable. (I am working on a 64 bit
> > system and am using the new MPI functions and MPI_ADDRESS_KIND integers for
> > getting the addresses.)
> >
> > My question is: should all the processors give the same address for the
> > same variables? If so, then why is this not happening for variables with
> > larger addresses?
> >
> >
> > thanks
> > priyesh
> 
> --
> Hristo Iliev, Ph.D. -- High Performance Computing
> RWTH Aachen University, Center for Computing and Communication
> Rechen- und Kommunikationszentrum der RWTH Aachen
> Seffenter Weg 23,  D 52074  Aachen (Germany)
> Tel: +49 241 80 24367 -- Fax/UMS: +49 241 80 624367
> 
> --
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> End of users Digest, Vol 2304, Issue 1
> **
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users

--
Hristo Iliev, Ph.D. -- High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23,  D 52074  Aachen (Germany)
Tel: +49 241 80 24367 -- Fax/UMS: +49 241 80 624367






Re: [OMPI users] issue with addresses

2012-07-23 Thread Priyesh Srivastava
> request completion using only one thread?  As far as I know, busy
> waiting with alternate MPI_Iprobe and MPI_Testsome calls is the only
> way to do this.  Is that approach dangerous performance-wise?
>
> Background: my application is memory constrained, so when requests
> complete I may suddenly be able to schedule new computation.  At the
> same time, I need to be responding to a variety of asynchronous
> messages from unknown processors with unknown message sizes, which as
> far as I know I can't turn into a request to poll on.
>
> Thanks,
> Geoffrey
>
>
> --
>
> Message: 2
> Date: Mon, 23 Jul 2012 16:02:03 +0800
> From: "=?gb2312?B?s8LLyQ==?=" 
> Subject: [OMPI users] checkpoint problem
> To: "Open MPI Users" 
> Message-ID: <4b55b3e5fc79bad3009c21962e848...@nscc-tj.gov.cn>
> Content-Type: text/plain; charset="gb2312"
>
> Hi all,
>
> How can I create ckpt files regularly? I mean, do a checkpoint every 100
> seconds. Is there any option to do this, or do I have to write a script
> myself?
>
> THANKS,
> ---
> CHEN Song
> R&D Department
> National Supercomputer Center in Tianjin
> Binhai New Area, Tianjin, China
> -- next part --
> HTML attachment scrubbed and removed
>
> --
>
> Message: 3
> Date: Mon, 23 Jul 2012 12:15:49 +0200
> From: Reuti 
> Subject: Re: [OMPI users] checkpoint problem
> To: CHEN Song, Open MPI Users 
> Message-ID:
> <623c01f7-8d8c-4dcf-aa47-2c3eded28...@staff.uni-marburg.de>
> Content-Type: text/plain; charset=GB2312
>
> On 23.07.2012, at 10:02, CHEN Song wrote:
>
> > How can I create ckpt files regularly? I mean, do a checkpoint every 100
> > seconds. Is there any option to do this, or do I have to write a script
> > myself?
>
> Yes, or use a queuing system which supports creating checkpoints at fixed
> time intervals.
>
> -- Reuti
>
>
> > THANKS,
> >
> >
> >
> > ---
> > CHEN Song
> > R&D Department
> > National Supercomputer Center in Tianjin
> > Binhai New Area, Tianjin, China
> > ___
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
>
>
> --
>
> Message: 4
> Date: Mon, 23 Jul 2012 12:26:24 +0200
> From: Paul Kapinos 
> Subject: Re: [OMPI users] Re :Re:  OpenMP and OpenMPI Issue
> To: Open MPI Users 
> Message-ID: <500d26d0.4070...@rz.rwth-aachen.de>
> Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
>
> Jack,
> note that support for THREAD_MULTIPLE is available in [newer] versions of
> Open MPI, but disabled by default. You have to enable it at configure time;
> in 1.6:
>
>--enable-mpi-thread-multiple
>Enable MPI_THREAD_MULTIPLE support (default:
>disabled)
>
> You may check the available threading support level by using the attached
> program.
>
>
> On 07/20/12 19:33, Jack Galloway wrote:
> > This is an old thread, and I'm curious if there is support now for this?
>  I have
> > a large code that I'm running, a hybrid MPI/OpenMP code, that is having
> trouble
> > over our infiniband network.  I'm running a fairly large problem (uses
> about
> > 18GB), and part way in, I get the following errors:
>
> You say "big footprint"? I hear a bell ringing...
> http://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem
>
>
>
>
>
>
>
>
> --
> Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
> RWTH Aachen University, Center for Computing and Communication
> Seffenter Weg 23,  D 52074  Aachen (Germany)
> Tel: +49 241/80-24915
> -- next part --
> A non-text attachment was scrubbed...
> Name: mpi_threading_support.f
> Type: text/x-fortran
> Size: 411 bytes
> Desc: not available
> URL: <
> http://www.open-mpi.org/MailArchives/users/attachments/20120723/1f30ae61/attachment.bin
> >
>
> --
>
> Message: 5
> Date: Mon, 23 Jul 2012 11:18:32 +
> From: "Iliev, Hristo" 
> Subject: Re: [OMPI users] issue with addresses
> To: Open MPI Users 
> Messag

Re: [OMPI users] issue with addresses

2012-07-23 Thread Iliev, Hristo
Hello,

Placement of data in memory is highly implementation dependent. I assume you
are running on Linux. This OS’ libc (glibc) provides two different methods
for dynamic allocation of memory – heap allocation and anonymous mappings.
Heap allocation is used for small data up to MMAP_THRESHOLD bytes in length
(128 KiB by default, controllable by calls to “mallopt(3)”). Such
allocations end up at predictable memory addresses as long as all processes
in your MPI job allocate memory following exactly the same pattern. For
larger memory blocks malloc() uses private anonymous mappings, which might
end up at different locations in the virtual address space depending on how it
is being used.

What does this have to do with your Fortran code? Fortran runtimes use
malloc() behind the scenes to allocate automatic heap arrays as well as
ALLOCATABLE ones. Small arrays are usually allocated on the stack and will
mostly have the same addresses unless some stack placement randomisation is
in effect.
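
For example, a quick (untested) way to observe this from Fortran is to
allocate one array below and one above the threshold and print the addresses
each rank sees:

   program alloc_addresses
      include 'mpif.h'
      integer ierr, id
      integer(kind=MPI_ADDRESS_KIND) addr_small, addr_large
      real, allocatable :: small_arr(:), large_arr(:)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)

      allocate(small_arr(1024))           ! ~4 KiB, served from the heap
      allocate(large_arr(64*1024*1024))   ! ~256 MiB, served by mmap()

      call MPI_GET_ADDRESS(small_arr(1), addr_small, ierr)
      call MPI_GET_ADDRESS(large_arr(1), addr_large, ierr)
      ! the small address tends to agree across ranks, the large one may not
      write(*,*) 'rank', id, ': small at', addr_small, ', large at', addr_large

      deallocate(small_arr, large_arr)
      call MPI_FINALIZE(ierr)
   end program alloc_addresses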

Hope that helps.

Kind regards,
Hristo

> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Priyesh Srivastava
> Sent: Saturday, July 21, 2012 10:00 PM
> To: us...@open-mpi.org
> Subject: [OMPI users] issue with addresses
>
> Hello
>
> I am working on an MPI program. I have been printing the addresses of
> different variables and arrays using the MPI_GET_ADDRESS command. What I
> have noticed is that all the processors are giving the same address for a
> particular variable as long as the address is less than 2 GB. When the
> address of a variable/array is more than 2 GB, different processors are
> giving different addresses for the same variable. (I am working on a 64 bit
> system and am using the new MPI functions and MPI_ADDRESS_KIND integers for
> getting the addresses.)
>
> My question is: should all the processors give the same address for the
> same variables? If so, then why is this not happening for variables with
> larger addresses?
>
>
> thanks
> priyesh

--
Hristo Iliev, Ph.D. -- High Performance Computing
RWTH Aachen University, Center for Computing and Communication
Rechen- und Kommunikationszentrum der RWTH Aachen
Seffenter Weg 23,  D 52074  Aachen (Germany)
Tel: +49 241 80 24367 -- Fax/UMS: +49 241 80 624367




[OMPI users] issue with addresses

2012-07-21 Thread Priyesh Srivastava
Hello

I am working on an MPI program. I have been printing the addresses of
different variables and arrays using the MPI_GET_ADDRESS command. What I
have noticed is that all the processors are giving the same address for a
particular variable as long as the address is less than 2 GB. When the
address of a variable/array is more than 2 GB, different processors are
giving different addresses for the same variable. (I am working on a 64 bit
system and am using the new MPI functions and MPI_ADDRESS_KIND integers for
getting the addresses.)

My question is: should all the processors give the same address for the
same variables? If so, then why is this not happening for variables with
larger addresses?


thanks
priyesh