Hello Hristo,

Thank you for your reply. I was able to understand parts of your response,
but I still have some doubts because of my limited knowledge of how memory
is allocated.

I have created a small sample program whose output should help me pinpoint
my question. The program is:

       program test
       include 'mpif.h'
       integer a, b, c(10), ierr, id, datatype, size(3), type(3), i
       integer status(MPI_STATUS_SIZE)
       integer(kind=MPI_ADDRESS_KIND) add(3)

       call MPI_INIT(ierr)
       call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)
       call MPI_GET_ADDRESS(a, add(1), ierr)
       write(*,*) 'address of a, id ', add(1), id
       call MPI_GET_ADDRESS(b, add(2), ierr)
       write(*,*) 'address of b, id ', add(2), id
       call MPI_GET_ADDRESS(c, add(3), ierr)
       write(*,*) 'address of c, id ', add(3), id

       ! displacements relative to the address of a
       add(3) = add(3) - add(1)
       add(2) = add(2) - add(1)
       add(1) = add(1) - add(1)

       size(1) = 1;  size(2) = 1;  size(3) = 10
       type(1) = MPI_INTEGER;  type(2) = MPI_INTEGER;  type(3) = MPI_INTEGER

       call MPI_TYPE_CREATE_STRUCT(3, size, add, type, datatype, ierr)
       call MPI_TYPE_COMMIT(datatype, ierr)
       write(*,*) 'datatype, id', datatype, id
       write(*,*) ' relative add1 ', add(1), 'id', id
       write(*,*) ' relative add2 ', add(2), 'id', id
       write(*,*) ' relative add3 ', add(3), 'id', id

       ! rank 0 fills the data
       if (id == 0) then
          a = 1000
          b = 2000
          do i = 1, 10
             c(i) = i
          end do
          c(10) = 700
          c(1) = 600
       end if

       ! rank 0 sends, rank 1 receives and prints
       if (id == 0) then
          call MPI_SEND(a, 1, datatype, 1, 8, MPI_COMM_WORLD, ierr)
       end if
       if (id == 1) then
          call MPI_RECV(a, 1, datatype, 0, 8, MPI_COMM_WORLD, status, ierr)
          write(*,*) 'id =', id
          write(*,*) 'a=', a
          write(*,*) 'b=', b
          do i = 1, 10
             write(*,*) 'c(', i, ')=', c(i)
          end do
       end if

       call MPI_FINALIZE(ierr)
       end
the output is:

  address of a, id        140736841025492           0
  address of b, id        140736841025496           0
  address of c, id                6994640           0
  datatype, id                         58           0
   relative add1                        0   id      0
   relative add2                        4   id      0
   relative add3        -140736834030852   id      0
  address of a, id        140736078234324           1
  address of b, id        140736078234328           1
  address of c, id                6994640           1
  datatype, id                         58           1
   relative add1                        0   id      1
   relative add2                        4   id      1
   relative add3        -140736071239684   id      1
  id =           1
  a=        1000
  b=        2000
  c(  1 )=         600
  c(  2 )=           2
  c(  3 )=           3
  c(  4 )=           4
  c(  5 )=           5
  c(  6 )=           6
  c(  7 )=           7
  c(  8 )=           8
  c(  9 )=           9
  c( 10 )=         700

As I had mentioned, the smaller address (that of array 'c') is the same on
both processors, while the larger ones (those of 'a' and 'b') are different.
This is explained by what you described.

So the relative address of the array 'c' with respect to 'a' is different on
the two processors. The way I am passing the data should not work
(specifically the passing of array 'c'), yet everything is sent correctly
from processor 0 to processor 1. I have noticed that this way of sending
non-contiguous data is common, but I am confused about why it works.
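
In case it helps to make the question concrete, below is a minimal sketch of
a separate check I have in mind (I am assuming MPI_TYPE_GET_EXTENT is the
right call for reporting the lower bound and extent of a committed type; the
program and variable names here are just placeholders, not part of the run
above):

       ! minimal sketch: each rank builds the same struct type from its
       ! own addresses and prints how it ends up describing the data
       program extent_check
       include 'mpif.h'
       integer a, b, c(10), ierr, id, newtype, blen(3), types(3)
       integer(kind=MPI_ADDRESS_KIND) disp(3), lb, extent

       call MPI_INIT(ierr)
       call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)

       call MPI_GET_ADDRESS(a, disp(1), ierr)
       call MPI_GET_ADDRESS(b, disp(2), ierr)
       call MPI_GET_ADDRESS(c, disp(3), ierr)
       ! displacements relative to a, as in the program above
       disp(3) = disp(3) - disp(1)
       disp(2) = disp(2) - disp(1)
       disp(1) = 0

       blen(1) = 1;  blen(2) = 1;  blen(3) = 10
       types(1) = MPI_INTEGER
       types(2) = MPI_INTEGER
       types(3) = MPI_INTEGER

       call MPI_TYPE_CREATE_STRUCT(3, blen, disp, types, newtype, ierr)
       call MPI_TYPE_COMMIT(newtype, ierr)

       ! each rank reports its own lower bound and extent
       call MPI_TYPE_GET_EXTENT(newtype, lb, extent, ierr)
       write(*,*) 'rank', id, ': lb =', lb, ', extent =', extent

       call MPI_TYPE_FREE(newtype, ierr)
       call MPI_FINALIZE(ierr)
       end program extent_check

If the two ranks report different extents here, that would line up with the
different relative addresses of 'c' shown above, which is exactly the part
that confuses me.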

thanks
priyesh
On Mon, Jul 23, 2012 at 12:00 PM, <users-requ...@open-mpi.org> wrote:

> Send users mailing list submissions to
>         us...@open-mpi.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://www.open-mpi.org/mailman/listinfo.cgi/users
> or, via email, send a message with subject or body 'help' to
>         users-requ...@open-mpi.org
>
> You can reach the person managing the list at
>         users-ow...@open-mpi.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of users digest..."
>
>
> Today's Topics:
>
>    1. Efficient polling for both incoming messages and  request
>       completion (Geoffrey Irving)
>    2. checkpoint problem (CHEN Song)
>    3. Re: checkpoint problem (Reuti)
>    4. Re: Re :Re:  OpenMP and OpenMPI Issue (Paul Kapinos)
>    5. Re: issue with addresses (Iliev, Hristo)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 22 Jul 2012 15:01:09 -0700
> From: Geoffrey Irving <irv...@naml.us>
> Subject: [OMPI users] Efficient polling for both incoming messages and
>         request completion
> To: users <us...@open-mpi.org>
> Message-ID:
>         <CAJ1ofpdNxSVD=_
> ffn1j3kn9ktzjgjehb0xjf3eyl76ajwvd...@mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hello,
>
> Is it possible to efficiently poll for both incoming messages and
> request completion using only one thread?  As far as I know, busy
> waiting with alternate MPI_Iprobe and MPI_Testsome calls is the only
> way to do this.  Is that approach dangerous to do performance-wise?
>
> Background: my application is memory constrained, so when requests
> complete I may suddenly be able to schedule new computation.  At the
> same time, I need to be responding to a variety of asynchronous
> messages from unknown processors with unknown message sizes, which as
> far as I know I can't turn into a request to poll on.
>
> Thanks,
> Geoffrey
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 23 Jul 2012 16:02:03 +0800
> From: "=?gb2312?B?s8LLyQ==?=" <chens...@nscc-tj.gov.cn>
> Subject: [OMPI users] checkpoint problem
> To: "Open MPI Users" <us...@open-mpi.org>
> Message-ID: <4b55b3e5fc79bad3009c21962e848...@nscc-tj.gov.cn>
> Content-Type: text/plain; charset="gb2312"
>
> Hi all,
>
> How can I create ckpt files regularly? I mean, do a checkpoint every 100
> seconds. Are there any options to do this, or do I have to write a script
> myself?
>
> THANKS,
> ---------------
> CHEN Song
> R&D Department
> National Supercomputer Center in Tianjin
> Binhai New Area, Tianjin, China
>
> ------------------------------
>
> Message: 3
> Date: Mon, 23 Jul 2012 12:15:49 +0200
> From: Reuti <re...@staff.uni-marburg.de>
> Subject: Re: [OMPI users] checkpoint problem
> To: CHEN Song <chens...@nscc-tj.gov.cn>, Open MPI Users <us...@open-mpi.org>
> Message-ID:
>         <623c01f7-8d8c-4dcf-aa47-2c3eded28...@staff.uni-marburg.de>
> Content-Type: text/plain; charset=GB2312
>
> On 23.07.2012, at 10:02, CHEN Song wrote:
>
> > How can I create ckpt files regularly? I mean, do a checkpoint every 100
> > seconds. Are there any options to do this, or do I have to write a script
> > myself?
>
> Yes, or use a queuing system which supports creation of a checkpoint in
> fixed time intervals.
>
> -- Reuti
>
>
> > THANKS,
> >
> >
> >
> > ---------------
> > CHEN Song
> > R&D Department
> > National Supercomputer Center in Tianjin
> > Binhai New Area, Tianjin, China
> > _______________________________________________
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>
> ------------------------------
>
> Message: 4
> Date: Mon, 23 Jul 2012 12:26:24 +0200
> From: Paul Kapinos <kapi...@rz.rwth-aachen.de>
> Subject: Re: [OMPI users] Re :Re:  OpenMP and OpenMPI Issue
> To: Open MPI Users <us...@open-mpi.org>
> Message-ID: <500d26d0.4070...@rz.rwth-aachen.de>
> Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
>
> Jack,
> note that support for THREAD_MULTIPLE is available in [newer] versions of
> Open MPI, but it is disabled by default. You have to enable it when
> configuring; in 1.6:
>
>    --enable-mpi-thread-multiple
>                            Enable MPI_THREAD_MULTIPLE support (default:
>                            disabled)
>
> You may check the available threading support level by using the attached
> program.
>
>
> On 07/20/12 19:33, Jack Galloway wrote:
> > This is an old thread, and I'm curious if there is support now for this?
> > I have a large code that I'm running, a hybrid MPI/OpenMP code, that is
> > having trouble over our InfiniBand network.  I'm running a fairly large
> > problem (uses about 18GB), and part way in, I get the following errors:
>
> You say "big footprint"? I hear a bell ringing...
> http://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem
>
> --
> Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
> RWTH Aachen University, Center for Computing and Communication
> Seffenter Weg 23,  D 52074  Aachen (Germany)
> Tel: +49 241/80-24915
> [Attachment scrubbed: mpi_threading_support.f (text/x-fortran, 411 bytes),
> http://www.open-mpi.org/MailArchives/users/attachments/20120723/1f30ae61/attachment.bin ]
>
> ------------------------------
>
> Message: 5
> Date: Mon, 23 Jul 2012 11:18:32 +0000
> From: "Iliev, Hristo" <il...@rz.rwth-aachen.de>
> Subject: Re: [OMPI users] issue with addresses
> To: Open MPI Users <us...@open-mpi.org>
> Message-ID:
>         <
> fdaa43115faf4a4f88865097fc2c3cc9030e2...@rz-mbx2.win.rz.rwth-aachen.de>
>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hello,
>
> Placement of data in memory is highly implementation dependent. I assume
> you are running on Linux. This OS's libc (glibc) provides two different
> methods for dynamic allocation of memory: heap allocation and anonymous
> mappings. Heap allocation is used for small data up to MMAP_THRESHOLD bytes
> in length (128 KiB by default, controllable by calls to mallopt(3)). Such
> allocations end up at predictable memory addresses as long as all processes
> in your MPI job allocate memory following exactly the same pattern. For
> larger memory blocks malloc() uses private anonymous mappings, which might
> end up at different locations in the virtual address space depending on how
> it is being used.
>
> What does this have to do with your Fortran code? Fortran runtimes use
> malloc() behind the scenes to allocate automatic heap arrays as well as
> ALLOCATABLE ones. Small arrays are usually allocated on the stack and will
> mostly have the same addresses unless some stack placement randomisation is
> in effect.
>
> Hope that helps.
>
> Kind regards,
> Hristo
>
> > From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
> > On Behalf Of Priyesh Srivastava
> > Sent: Saturday, July 21, 2012 10:00 PM
> > To: us...@open-mpi.org
> > Subject: [OMPI users] issue with addresses
> >
> > Hello,
> >
> > I am working on an MPI program. I have been printing the addresses of
> > different variables and arrays using the MPI_GET_ADDRESS command. What I
> > have noticed is that all the processors give the same address for a
> > particular variable as long as the address is less than 2 GB. When the
> > address of a variable/array is more than 2 GB, different processors give
> > different addresses for the same variable. (I am working on a 64-bit
> > system and am using the new MPI functions and MPI_ADDRESS_KIND integers
> > for getting the addresses.)
> >
> > My question is: should all the processors give the same address for the
> > same variables? If so, then why is this not happening for variables with
> > larger addresses?
> >
> >
> > thanks
> > priyesh
>
> --
> Hristo Iliev, Ph.D. -- High Performance Computing
> RWTH Aachen University, Center for Computing and Communication
> Rechen- und Kommunikationszentrum der RWTH Aachen
> Seffenter Weg 23,  D 52074  Aachen (Germany)
> Tel: +49 241 80 24367 -- Fax/UMS: +49 241 80 624367
>
> ------------------------------
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
> End of users Digest, Vol 2304, Issue 1
> **************************************
>
