Hello Hristo,

Thank you for taking a look at the program and the output.
The detailed explanation was very helpful. I also found out that the
signature of a derived datatype is just the sequence of its primitive
datatypes and is independent of the displacements, so the differences in
the relative addresses will not cause a problem.
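
For example, here is a minimal sketch along those lines (illustrative
only, not the program from the thread below): the sender ships twelve
contiguous integers, and the receiver unpacks them with a struct type
built from its own locally measured displacements. The transfer matches
because both sides describe the same signature of twelve MPI_INTEGERs,
regardless of the displacements.

      program signature_demo
c     sketch only: sender uses a contiguous buffer of 12 integers,
c     receiver uses a struct type with locally measured displacements;
c     only the signature (12 x MPI_INTEGER) has to match
      include 'mpif.h'
      integer buf(12), a, b, c(10), ierr, id, newtype, i
      integer blen(3), types(3), status(MPI_STATUS_SIZE)
      integer(kind=MPI_ADDRESS_KIND) disp(3)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)

      if (id .eq. 0) then
         do i = 1, 12
            buf(i) = i
         end do
         call MPI_SEND(buf, 12, MPI_INTEGER, 1, 99,
     &                 MPI_COMM_WORLD, ierr)
      else if (id .eq. 1) then
         blen(1) = 1
         blen(2) = 1
         blen(3) = 10
         types(1) = MPI_INTEGER
         types(2) = MPI_INTEGER
         types(3) = MPI_INTEGER
         call MPI_GET_ADDRESS(a, disp(1), ierr)
         call MPI_GET_ADDRESS(b, disp(2), ierr)
         call MPI_GET_ADDRESS(c, disp(3), ierr)
         disp(3) = disp(3) - disp(1)
         disp(2) = disp(2) - disp(1)
         disp(1) = 0
         call MPI_TYPE_CREATE_STRUCT(3, blen, disp, types,
     &                               newtype, ierr)
         call MPI_TYPE_COMMIT(newtype, ierr)
         call MPI_RECV(a, 1, newtype, 0, 99, MPI_COMM_WORLD,
     &                 status, ierr)
         write(*,*) 'a=', a, ' b=', b, ' c=', c
         call MPI_TYPE_FREE(newtype, ierr)
      end if

      call MPI_FINALIZE(ierr)
      end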

thanks again  :)
priyesh

On Wed, Jul 25, 2012 at 12:00 PM, <users-requ...@open-mpi.org> wrote:

> Send users mailing list submissions to
>         us...@open-mpi.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://www.open-mpi.org/mailman/listinfo.cgi/users
> or, via email, send a message with subject or body 'help' to
>         users-requ...@open-mpi.org
>
> You can reach the person managing the list at
>         users-ow...@open-mpi.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of users digest..."
>
>
> Today's Topics:
>
>    1. Re: issue with addresses (Iliev, Hristo)
>    2. Re: Extent of Distributed Array Type? (George Bosilca)
>    3. Re: Extent of Distributed Array Type? (Jeff Squyres)
>    4. Re: Extent of Distributed Array Type? (Richard Shaw)
>    5. Mpi_leave_pinned=1 is thread safe? (tmish...@jcity.maeda.co.jp)
>    6. Re: Fortran90 Bindings (Kumar, Sudhir)
>    7. Re: Fortran90 Bindings (Damien)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 24 Jul 2012 17:10:33 +0000
> From: "Iliev, Hristo" <il...@rz.rwth-aachen.de>
> Subject: Re: [OMPI users] issue with addresses
> To: Open MPI Users <us...@open-mpi.org>
> Message-ID: <18d6fe2f-7a68-4d1a-94fe-c14058ba4...@rz.rwth-aachen.de>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi, Priyesh,
>
> The output of your program is pretty much what one would expect.
> 140736841025492 is 0x7FFFD96A87D4, which corresponds to a location in
> the stack; that is to be expected, as a and b are scalar variables and
> most likely end up on the stack. As c is an array, its location is
> compiler-dependent. Some compilers put small arrays on the stack, while
> others make them global or allocate them on the heap. In your case
> 0x6ABAD0 could be either somewhere in the BSS (where uninitialised
> global variables reside) or in the heap, which starts right after the
> BSS (I would say it is the BSS). If the array is placed in the BSS, its
> location is fixed with respect to the image base.
>
> Linux by default implements partial Address Space Layout Randomisation
> (ASLR) by placing the program stack at a slightly different location on
> each run (this is to make remote stack-based exploits harder). That's
> why you see different addresses for variables on the stack. But things
> in the BSS will have pretty much the same addresses when the code is
> executed multiple times, or on different machines with the same
> architecture and a similar OS with similar settings, since executable
> images are still loaded at the same base virtual address.
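>
> As a small illustration (a sketch, separate from your program), the two
> cases can be made visible by printing one address that lives on the
> stack and one that should land in the BSS; across runs the first one
> typically moves with ASLR while the second one stays put:
>
>       program where_am_i
> c     sketch: an uninitialised COMMON block variable usually lands
> c     in the BSS, while the compiler is free to put the plain local
> c     variable on the stack
>       include 'mpif.h'
>       integer ierr, id, local_var, bss_var
>       common /static_data/ bss_var
>       integer(kind=MPI_ADDRESS_KIND) addr_local, addr_bss
>
>       call MPI_INIT(ierr)
>       call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)
>       call MPI_GET_ADDRESS(local_var, addr_local, ierr)
>       call MPI_GET_ADDRESS(bss_var, addr_bss, ierr)
>       write(*,*) 'rank', id, ' stack-ish address', addr_local
>       write(*,*) 'rank', id, ' BSS-ish   address', addr_bss
>       call MPI_FINALIZE(ierr)
>       end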
>
> Having different addresses is not an issue for MPI, as it only operates
> with pointers that are local to the process and with relative offsets.
> You pass MPI_Send or MPI_Recv the address of the data buffer in the
> current process, and that has nothing to do with where those buffers
> are located in the other processes. Note also that MPI supports
> heterogeneous computing, e.g. the sending process might be 32-bit and
> the receiving one 64-bit. In this scenario it is quite probable that
> the addresses will differ by a very large margin (e.g. the stack
> address from your 64-bit output is not even valid on a 32-bit system).
>
> Hope that helps more :)
>
> Kind regards,
> Hristo
>
> On 24.07.2012, at 02:02, Priyesh Srivastava wrote:
>
> > Hello Hristo,
> >
> > Thank you for your reply. I was able to understand some parts of your
> > response, but still had some doubts due to my lack of knowledge about
> > the way memory is allocated.
> >
> > I have created a small sample program and its resulting output, which
> > will help me pinpoint my question.
> > The program is :
> >
> >
> > program test
> >       include'mpif.h'
> >
> >       integer a,b,c(10),ierr,id,datatype,size(3),type(3),i
> >       integer status(MPI_STATUS_SIZE)
> >
> >       integer(kind=MPI_ADDRESS_KIND) add(3)
> >
> >
> >       call MPI_INIT(ierr)
> >       call MPI_COMM_RANK(MPI_COMM_WORLD,id,ierr)
> >       call MPI_GET_ADDRESS(a,add(1),ierr)
> >       write(*,*) 'address of a ,id ', add(1), id
> >       call MPI_GET_ADDRESS(b,add(2),ierr)
> >       write(*,*) 'address of b,id ', add(2), id
> >       call MPI_GET_ADDRESS(c,add(3),ierr)
> >       write(*,*) 'address of c,id ', add(3), id
> >
> >       add(3)=add(3)-add(1)
> >       add(2)=add(2)-add(1)
> >       add(1)=add(1)-add(1)
> >
> >       size(1)=1
> >       size(2)=1
> >       size(3)=10
> >       type(1)=MPI_INTEGER
> >       type(2)=MPI_INTEGER
> >       type(3)=MPI_INTEGER
> >       call MPI_TYPE_CREATE_STRUCT(3,size,add,type,datatype,ierr)
> >       call MPI_TYPE_COMMIT(datatype,ierr)
> >
> >       write(*,*) 'datatype ,id', datatype , id
> >       write(*,*) ' relative add1 ',add(1), 'id',id
> >       write(*,*) ' relative add2 ',add(2), 'id',id
> >       write(*,*) ' relative add3 ',add(3), 'id',id
> >       if(id==0) then
> >       a = 1000
> >       b=2000
> >       do i=1,10
> >       c(i)=i
> >       end do
> >       c(10)=700
> >       c(1)=600
> >       end if
> >
> >
> >         if(id==0) then
> >       call MPI_SEND(a,1,datatype,1,8,MPI_COMM_WORLD,ierr)
> >       end if
> >
> >       if(id==1) then
> >       call MPI_RECV(a,1,datatype,0,8,MPI_COMM_WORLD,status,ierr)
> >       write(*,*) 'id =',id
> >       write(*,*) 'a=' , a
> >       write(*,*) 'b=' , b
> >       do i=1,10
> >       write(*,*) 'c(',i,')=',c(i)
> >       end do
> >       end if
> >
> >       call MPI_FINALIZE(ierr)
> >       end
> >
> >
> >
> > the output is :
> >
> >
> >  address of a ,id        140736841025492           0
> >  address of b,id        140736841025496            0
> >  address of c,id                        6994640            0
> >  datatype ,id                                         58           0
> >   relative add1                                      0   id      0
> >   relative add2                                      4   id      0
> >   relative add3         -140736834030852   id      0
> >  address of a ,id        140736078234324           1
> >  address of b,id         140736078234328           1
> >  address of c,id                         6994640           1
> >  datatype ,id                                         58           1
> >   relative add1                                     0  id        1
> >   relative add2                                     4 id         1
> >   relative add3       -140736071239684 id          1
> >  id =           1
> >  a=        1000
> >  b=        2000
> >  c( 1 )=         600
> >  c( 2 )=           2
> >  c( 3 )=           3
> >  c( 4 )=           4
> >  c(5 )=            5
> >  c( 6 )=           6
> >  c( 7 )=           7
> >  c( 8 )=           8
> >  c(9 )=            9
> >  c(10 )=         700
> >
> >
> >
> > As I had mentioned, the smaller address (of array c) is the same for
> > both processes, while the larger ones (of 'a' and 'b') are different.
> > This is explained by what you mentioned.
> >
> > So the relative address of the array 'c' with respect to 'a' is
> > different on the two processes. The way I am passing data should not
> > work (specifically the passing of array 'c'), but still everything is
> > correctly sent from process 0 to 1. I have noticed that this way of
> > sending non-contiguous data is common, but I am confused about why it
> > works.
> >
> > thanks
> > priyesh
> > On Mon, Jul 23, 2012 at 12:00 PM, <users-requ...@open-mpi.org> wrote:
> > Send users mailing list submissions to
> >         us...@open-mpi.org
> >
> > To subscribe or unsubscribe via the World Wide Web, visit
> >         http://www.open-mpi.org/mailman/listinfo.cgi/users
> > or, via email, send a message with subject or body 'help' to
> >         users-requ...@open-mpi.org
> >
> > You can reach the person managing the list at
> >         users-ow...@open-mpi.org
> >
> > When replying, please edit your Subject line so it is more specific
> > than "Re: Contents of users digest..."
> >
> >
> > Today's Topics:
> >
> >    1. Efficient polling for both incoming messages and  request
> >       completion (Geoffrey Irving)
> >    2. checkpoint problem (CHEN Song)
> >    3. Re: checkpoint problem (Reuti)
> >    4. Re: Re :Re:  OpenMP and OpenMPI Issue (Paul Kapinos)
> >    5. Re: issue with addresses (Iliev, Hristo)
> >
> >
> > ----------------------------------------------------------------------
> >
> > Message: 1
> > Date: Sun, 22 Jul 2012 15:01:09 -0700
> > From: Geoffrey Irving <irv...@naml.us>
> > Subject: [OMPI users] Efficient polling for both incoming messages and
> >         request completion
> > To: users <us...@open-mpi.org>
> > Message-ID:
> >         <CAJ1ofpdNxSVD=_
> ffn1j3kn9ktzjgjehb0xjf3eyl76ajwvd...@mail.gmail.com>
> > Content-Type: text/plain; charset=ISO-8859-1
> >
> > Hello,
> >
> > Is it possible to efficiently poll for both incoming messages and
> > request completion using only one thread?  As far as I know, busy
> > waiting with alternate MPI_Iprobe and MPI_Testsome calls is the only
> > way to do this.  Is that approach dangerous to do performance-wise?
> >
> > Background: my application is memory constrained, so when requests
> > complete I may suddenly be able to schedule new computation.  At the
> > same time, I need to be responding to a variety of asynchronous
> > messages from unknown processors with unknown message sizes, which as
> > far as I know I can't turn into a request to poll on.
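> >
> > For reference, a rough sketch of one pass of such a loop (written in
> > Fortran here just for illustration; the handling steps in the
> > comments are placeholders for whatever the application does):
> >
> >       subroutine poll_once(nreq, requests)
> > c     one pass of the busy-wait: probe for any incoming message,
> > c     then test whether any outstanding request has completed
> >       include 'mpif.h'
> >       integer nreq, requests(nreq)
> >       integer ierr, outcount, i, indices(nreq)
> >       integer status(MPI_STATUS_SIZE)
> >       integer statuses(MPI_STATUS_SIZE, nreq)
> >       logical flag
> >
> >       call MPI_IPROBE(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD,
> >      &                flag, status, ierr)
> >       if (flag) then
> > c        a message is waiting: its source, tag and (via
> > c        MPI_GET_COUNT) its size are in status, so a matching
> > c        receive can be posted here
> >       end if
> >
> >       call MPI_TESTSOME(nreq, requests, outcount, indices,
> >      &                  statuses, ierr)
> >       if (outcount .ne. MPI_UNDEFINED) then
> >          do i = 1, outcount
> > c           request indices(i) finished; its memory can be reused,
> > c           so new work can be scheduled here
> >          end do
> >       end if
> >
> >       end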
> >
> > Thanks,
> > Geoffrey
> >
> >
> > ------------------------------
> >
> > Message: 2
> > Date: Mon, 23 Jul 2012 16:02:03 +0800
> > From: CHEN Song <chens...@nscc-tj.gov.cn>
> > Subject: [OMPI users] checkpoint problem
> > To: "Open MPI Users" <us...@open-mpi.org>
> > Message-ID: <4b55b3e5fc79bad3009c21962e848...@nscc-tj.gov.cn>
> > Content-Type: text/plain; charset="gb2312"
> >
> > Hi all,
> >
> > How can I create ckpt files regularly? I mean, do a checkpoint every
> > 100 seconds. Are there any options to do this, or do I have to write
> > a script myself?
> >
> > THANKS,
> >
> > ---------------
> > CHEN Song
> > R&D Department
> > National Supercomputer Center in Tianjin
> > Binhai New Area, Tianjin, China
> >
> > ------------------------------
> >
> > Message: 3
> > Date: Mon, 23 Jul 2012 12:15:49 +0200
> > From: Reuti <re...@staff.uni-marburg.de>
> > Subject: Re: [OMPI users] checkpoint problem
> > To: CHEN Song <chens...@nscc-tj.gov.cn>,       Open MPI Users <
> us...@open-mpi.org>
> > Message-ID:
> >         <623c01f7-8d8c-4dcf-aa47-2c3eded28...@staff.uni-marburg.de>
> > Content-Type: text/plain; charset=GB2312
> >
> > On 23.07.2012, at 10:02, CHEN Song wrote:
> >
> > > How can I create ckpt files regularly? I mean, do checkpoint every 100
> seconds. Is there any options to do this? Or I have to write a script
> myself?
> >
> > Yes, or use a queuing system which supports creating a checkpoint at
> > fixed time intervals.
> >
> > -- Reuti
> >
> >
> > > THANKS,
> > >
> > >
> > >
> > > ---------------
> > > CHEN Song
> > > R&D Department
> > > National Supercomputer Center in Tianjin
> > > Binhai New Area, Tianjin, China
> > > _______________________________________________
> > > users mailing list
> > > us...@open-mpi.org
> > > http://www.open-mpi.org/mailman/listinfo.cgi/users
> >
> >
> >
> >
> > ------------------------------
> >
> > Message: 4
> > Date: Mon, 23 Jul 2012 12:26:24 +0200
> > From: Paul Kapinos <kapi...@rz.rwth-aachen.de>
> > Subject: Re: [OMPI users] Re :Re:  OpenMP and OpenMPI Issue
> > To: Open MPI Users <us...@open-mpi.org>
> > Message-ID: <500d26d0.4070...@rz.rwth-aachen.de>
> > Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
> >
> > Jack,
> > note that support for THREAD_MULTIPLE is available in [newer] versions
> > of Open MPI, but disabled by default. You have to enable it when
> > configuring; in 1.6:
> >
> >    --enable-mpi-thread-multiple
> >                            Enable MPI_THREAD_MULTIPLE support (default:
> >                            disabled)
> >
> > You may check the available threading support level by using the
> > attached program.
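> >
> > Roughly, such a check looks like the sketch below (the attached
> > mpi_threading_support.f may differ in detail):
> >
> >       program thread_support_check
> > c     request MPI_THREAD_MULTIPLE and report what the library grants
> >       include 'mpif.h'
> >       integer provided, ierr
> >
> >       call MPI_INIT_THREAD(MPI_THREAD_MULTIPLE, provided, ierr)
> >       write(*,*) 'requested level:', MPI_THREAD_MULTIPLE
> >       write(*,*) 'provided  level:', provided
> >       if (provided .lt. MPI_THREAD_MULTIPLE) then
> >          write(*,*) 'MPI_THREAD_MULTIPLE is NOT available'
> >       end if
> >       call MPI_FINALIZE(ierr)
> >       end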
> >
> >
> > On 07/20/12 19:33, Jack Galloway wrote:
> > > This is an old thread, and I'm curious if there is support now for
> this?  I have
> > > a large code that I'm running, a hybrid MPI/OpenMP code, that is
> having trouble
> > > over our infiniband network.  I'm running a fairly large problem (uses
> about
> > > 18GB), and part way in, I get the following errors:
> >
> > You say "big footprint"? I hear a bell ringing...
> > http://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem
> >
> >
> >
> >
> >
> >
> >
> >
> > --
> > Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
> > RWTH Aachen University, Center for Computing and Communication
> > Seffenter Weg 23,  D 52074  Aachen (Germany)
> > Tel: +49 241/80-24915
> > -------------- next part --------------
> > A non-text attachment was scrubbed...
> > Name: mpi_threading_support.f
> > Type: text/x-fortran
> > Size: 411 bytes
> > Desc: not available
> > URL: <
> http://www.open-mpi.org/MailArchives/users/attachments/20120723/1f30ae61/attachment.bin
> >
> >
> > ------------------------------
> >
> > Message: 5
> > Date: Mon, 23 Jul 2012 11:18:32 +0000
> > From: "Iliev, Hristo" <il...@rz.rwth-aachen.de>
> > Subject: Re: [OMPI users] issue with addresses
> > To: Open MPI Users <us...@open-mpi.org>
> > Message-ID:
> >         <
> fdaa43115faf4a4f88865097fc2c3cc9030e2...@rz-mbx2.win.rz.rwth-aachen.de>
> >
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > Hello,
> >
> > Placement of data in memory is highly implementation-dependent. I
> > assume you are running on Linux. This OS's libc (glibc) provides two
> > different methods for dynamic allocation of memory: heap allocation
> > and anonymous mappings. Heap allocation is used for small data up to
> > MMAP_THRESHOLD bytes in length (128 KiB by default, controllable by
> > calls to mallopt(3)). Such allocations end up at predictable memory
> > addresses as long as all processes in your MPI job allocate memory
> > following exactly the same pattern. For larger memory blocks, malloc()
> > uses private anonymous mappings, which might end up at different
> > locations in the virtual address space depending on how it is being
> > used.
> >
> > What does this have to do with your Fortran code? Fortran runtimes use
> > malloc() behind the scenes to allocate automatic heap arrays as well
> > as ALLOCATABLE ones. Small arrays are usually allocated on the stack
> > and will mostly have the same addresses unless some stack placement
> > randomisation is in effect.
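> >
> > A small sketch of the difference (illustrative only; where each
> > variable actually lands is, again, compiler- and size-dependent):
> >
> >       program alloc_demo
> > c     a small fixed-size local array may live on the stack or in
> > c     static storage, while a large ALLOCATABLE one is obtained via
> > c     malloc(), i.e. from the heap or an anonymous mapping
> >       include 'mpif.h'
> >       integer ierr, id, small(16)
> >       integer, allocatable :: big(:)
> >       integer(kind=MPI_ADDRESS_KIND) addr_small, addr_big
> >
> >       call MPI_INIT(ierr)
> >       call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)
> >       allocate(big(1000000))
> >       call MPI_GET_ADDRESS(small, addr_small, ierr)
> >       call MPI_GET_ADDRESS(big, addr_big, ierr)
> >       write(*,*) 'rank', id, ' small array ', addr_small
> >       write(*,*) 'rank', id, ' big array   ', addr_big
> >       deallocate(big)
> >       call MPI_FINALIZE(ierr)
> >       end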
> >
> > Hope that helps.
> >
> > Kind regards,
> > Hristo
> >
> > > From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
> On
> > Behalf Of Priyesh Srivastava
> > > Sent: Saturday, July 21, 2012 10:00 PM
> > > To: us...@open-mpi.org
> > > Subject: [OMPI users] issue with addresses
> > >
> > > Hello,
> > >
> > > I am working on an MPI program. I have been printing the addresses
> > > of different variables and arrays using MPI_GET_ADDRESS. What I have
> > > noticed is that all the processes give the same address for a
> > > particular variable as long as the address is less than 2 GB. When
> > > the address of a variable/array is more than 2 GB, different
> > > processes give different addresses for the same variable. (I am
> > > working on a 64-bit system and am using the new MPI functions and
> > > MPI_ADDRESS_KIND integers for getting the addresses.)
> > >
> > > My question is: should all the processes give the same address for
> > > the same variables? If so, why is this not happening for variables
> > > with larger addresses?
> > >
> > >
> > > thanks
> > > priyesh
> >
> > --
> > Hristo Iliev, Ph.D. -- High Performance Computing
> > RWTH Aachen University, Center for Computing and Communication
> > Rechen- und Kommunikationszentrum der RWTH Aachen
> > Seffenter Weg 23,  D 52074  Aachen (Germany)
> > Tel: +49 241 80 24367 -- Fax/UMS: +49 241 80 624367
> >
> > ------------------------------
> >
> > _______________________________________________
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> >
> > End of users Digest, Vol 2304, Issue 1
> > **************************************
> >
> > _______________________________________________
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>
> --
> Hristo Iliev, Ph.D. -- High Performance Computing,
> RWTH Aachen University, Center for Computing and Communication
> Seffenter Weg 23,  D 52074  Aachen (Germany)
> Tel: +49 241 80 24367 -- Fax/UMS: +49 241 80 624367
>
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 25 Jul 2012 00:28:19 +0200
> From: George Bosilca <bosi...@eecs.utk.edu>
> Subject: Re: [OMPI users] Extent of Distributed Array Type?
> To: Open MPI Users <us...@open-mpi.org>
> Message-ID: <5d76fa7f-e7a8-4d4e-a109-523d7b492...@eecs.utk.edu>
> Content-Type: text/plain; charset="us-ascii"
>
> Richard,
>
> Thanks for identifying this issue and for the short example. I can
> confirm that your original understanding was right: the upper bound
> should be identical on all ranks. I just pushed a patch (r26862); let
> me know if this fixes your issue.
>
>   Thanks,
>     george.
>
> On Jul 24, 2012, at 17:27 , Richard Shaw wrote:
>
> > I've been speaking offline to Jonathan Dursi about this problem, and
> > it does seem to be a bug.
> >
> > The same problem crops up in a simplified 1d-only case (test case
> > attached). In this instance the specification seems clear: looking at
> > the PDF copy of the MPI-2.2 spec, p. 92-93, the definition of cyclic
> > gives MPI_LB=0, MPI_UB=gsize*ex.
> >
> > The test case creates a datatype for an array of 10 doubles,
> > cyclically distributed across two processes with a block size of 1.
> > The expected extent is 10*extent(MPI_DOUBLE) = 80. Results for
> > Open MPI v1.4.4:
> >
> > $ mpirun -np 2 ./testextent1d
> > Rank 0, size=40, extent=80, lb=0
> > Rank 1, size=40, extent=88, lb=0
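> >
> > For reference, a rough Fortran analogue of the attached test (the C
> > file itself is not reproduced here) would be something like:
> >
> >       program testextent1d_f
> > c     10 doubles, cyclic with block size 1 over all processes,
> > c     then query the type size and extent on every rank
> >       include 'mpif.h'
> >       integer ierr, id, nprocs, dtype, tsize
> >       integer gsizes(1), distribs(1), dargs(1), psizes(1)
> >       integer(kind=MPI_ADDRESS_KIND) lb, extent
> >
> >       call MPI_INIT(ierr)
> >       call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)
> >       call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
> >
> >       gsizes(1)   = 10
> >       distribs(1) = MPI_DISTRIBUTE_CYCLIC
> >       dargs(1)    = 1
> >       psizes(1)   = nprocs
> >       call MPI_TYPE_CREATE_DARRAY(nprocs, id, 1, gsizes, distribs,
> >      &     dargs, psizes, MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION,
> >      &     dtype, ierr)
> >       call MPI_TYPE_COMMIT(dtype, ierr)
> >
> >       call MPI_TYPE_SIZE(dtype, tsize, ierr)
> >       call MPI_TYPE_GET_EXTENT(dtype, lb, extent, ierr)
> >       write(*,*) 'Rank', id, ' size=', tsize,
> >      &           ' extent=', extent, ' lb=', lb
> >
> >       call MPI_TYPE_FREE(dtype, ierr)
> >       call MPI_FINALIZE(ierr)
> >       end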
> >
> >
> > Can anyone else confirm this?
> >
> > Thanks
> > Richard
> >
> > On Sunday, 15 July, 2012 at 6:21 PM, Richard Shaw wrote:
> >
> >> Hello,
> >>
> >> I'm getting thoroughly confused trying to work out what is the
> >> correct extent of a block-cyclic distributed array type (created with
> >> MPI_Type_create_darray), and I'm hoping someone can clarify it for me.
> >>
> >> My expectation is that calling MPI_Get_extent on this type should
> >> return the size of the original, global array in bytes, whereas
> >> MPI_Type_size gives the size of the local section. This isn't really
> >> clear from the MPI 2.2 spec, but from reading around it sounds like
> >> that's the obvious thing to expect.
> >>
> >> I've attached a minimal C example which tests this behaviour; it
> >> creates a type which views a 10x10 array of doubles in 3x3 blocks
> >> with a 2x2 process grid. So my expectation is that the extent is
> >> 10*10*sizeof(double) = 800. I've attached the results from running
> >> this below.
> >>
> >> In practice, neither version of Open MPI I've tested (v1.4.4 and
> >> v1.6) gives the behaviour I expect. They give the correct type size
> >> on all processes, but only the rank 0 process gets the expected
> >> extent; all the others get a somewhat higher value. As a comparison,
> >> Intel MPI (v4.0.3) does give the expected value for the extent
> >> (included below).
> >>
> >> I'd be very grateful if someone could explain what the extent means
> >> for a darray type, and why it isn't the global array size?
> >>
> >> Thanks,
> >> Richard
> >>
> >>
> >>
> >> == OpenMPI (v1.4.4 and 1.6) ==
> >>
> >> $ mpirun -np 4 ./testextent
> >> Rank 0, size=288, extent=800, lb=0
> >> Rank 1, size=192, extent=824, lb=0
> >> Rank 2, size=192, extent=1040, lb=0
> >> Rank 3, size=128, extent=1064, lb=0
> >>
> >>
> >>
> >> == IntelMPI ==
> >>
> >> $ mpirun -np 4 ./testextent
> >> Rank 0, size=288, extent=800, lb=0
> >> Rank 1, size=192, extent=800, lb=0
> >> Rank 2, size=192, extent=800, lb=0
> >> Rank 3, size=128, extent=800, lb=0
> >>
> >> Attachments:
> >> - testextent.c
> >
> > <testextent1d.c>_______________________________________________
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
> ------------------------------
>
> Message: 3
> Date: Tue, 24 Jul 2012 18:31:36 -0400
> From: Jeff Squyres <jsquy...@cisco.com>
> Subject: Re: [OMPI users] Extent of Distributed Array Type?
> To: Open MPI Users <us...@open-mpi.org>
> Message-ID: <e5dd4476-970f-402a-b526-8e64029f0...@cisco.com>
> Content-Type: text/plain; charset=us-ascii
>
> On Jul 24, 2012, at 6:28 PM, George Bosilca wrote:
>
> > Thanks for identifying this issue and for the short example. I can
> confirm your original understanding was right, the upper bound should be
> identical on all ranks. I just pushed a patch (r26862), let me know if this
> fixes your issue.
>
> Note that this patch is on the OMPI SVN trunk.  You can either build
> directly from an SVN checkout or grab a nightly tarball here (get any r
> number >= 26862, obviously, which will be tonight around 10pm US Eastern
> time at the earliest):
>
>     http://www.open-mpi.org/nightly/trunk/
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
>
>
> ------------------------------
>
> Message: 4
> Date: Tue, 24 Jul 2012 19:02:34 -0400
> From: Richard Shaw <jr...@cita.utoronto.ca>
> Subject: Re: [OMPI users] Extent of Distributed Array Type?
> To: Open MPI Users <us...@open-mpi.org>
> Message-ID: <f1689c9ee55c49da87e63ffb2a425...@gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Thanks George, I'm glad it wasn't just me being crazy. I'll try and test
> that one soon.
>
> Cheers,
> Richard
>
>
> On Tuesday, 24 July, 2012 at 6:28 PM, George Bosilca wrote:
>
> > Richard,
> >
> > Thanks for identifying this issue and for the short example. I can
> confirm your original understanding was right, the upper bound should be
> identical on all ranks. I just pushed a patch (r26862), let me know if this
> fixes your issue.
> >
> >   Thanks,
> >     george.
> >
> >
>
>
>
> ------------------------------
>
> Message: 5
> Date: Wed, 25 Jul 2012 14:14:20 +0900
> From: tmish...@jcity.maeda.co.jp
> Subject: [OMPI users] Mpi_leave_pinned=1 is thread safe?
> To: us...@open-mpi.org
> Message-ID:
>         <
> of5312e466.cdfdabb2-on49257a46.001a10be-49257a46.001cf...@jcity.maeda.co.jp
> >
>
> Content-Type: text/plain; charset=ISO-2022-JP
>
>
> Dear openmpi developers,
> I have been developing our hybrid (MPI+OpenMP) application using
> Open MPI for five years.
>
> This time, I tried to add a new piece of functionality, a C++-based
> multi-threaded library that heavily allocates and frees objects
> (new/delete) in each thread.
>
> Our application is so-called "MPI_THREAD_FUNNELED", and openmpi-1.6
> is built using --with-tm --with-openib --disable-ipv6.
>
> My trouble is that it works very well with "--mca mpi_leave_pinned 0",
> but when mpi_leave_pinned is enabled it often causes a segfault like
> the one below.
>
> I note that it works fine on a Windows multi-threaded platform
> combined with MPICH2. Furthermore, the multi-threaded (non-MPI)
> version also works fine on Linux.
>
> #0  0x00002b36f1ab35fa in malloc_consolidate (av=0x2aaab0c00020)
> at ./malloc.c:4556
> #1  0x00002b36f1ab34d9 in opal_memory_ptmalloc2_int_free
> (av=0x2aaab0c00020, mem=0x2aaab0c00a70) at ./malloc.c:4453
> #2  0x00002b36f1ab1ce2 in opal_memory_ptmalloc2_free (mem=0x2aaab0c00a70)
> at ./malloc.c:3511
> #3  0x00002b36f1ab0ca9 in opal_memory_linux_free_hook
> (__ptr=0x2aaab0c00a70, caller=0xa075c8) at ./hooks.c:705
> #4  0x00000037b4a758a7 in free () from /lib64/libc.so.6
> #5  0x0000000000a075c8 in CErrorReporter<std::basic_ostringstream<char,
> std::char_traits<char>, std::allocator<char> > >
> ::Clear ()
> #6  0x0000000000a01eec in IPhreeqc::AccumulateLine ()
> #7  0x0000000000a01180 in AccumulateLine ()
> #8  0x0000000000a0078e in accumulatelinef_ ()
> #9  0x0000000000576ce6 in initial_conditions_ () at ./PHREEQC-model.f:307
> #10 0x0000000000577b3a in iphreeqc_main_ () at ./PHREEQC-model.f:505
> #11 0x0000000000577fa1 in basicphreeqc_ () at ./PHREEQC-model.f:944
> #12 0x00000000004b492a in phrqbl_ () at ./MULTI-COM.f:8371
> #13 0x00000000004aa6e9 in smxmknp:qois_ () at ./MULTI-COM.f:5112
> #14 0x00000000004a2c5e in solvenpois_ () at ./MULTI-COM.f:4276
> #15 0x000000000049e731 in solducom_ () at ./MULTI-COM.f:3782
> #16 0x000000000048b60c in MAIN () at ./MULTI-COM.f:1208
> #17 0x0000000000481350 in main ()
> #18 0x00000037b4a1d974 in __libc_start_main () from /lib64/libc.so.6
> #19 0x0000000000481259 in _start ()
>
> Best regards,
> Tetsuya Mishima
>
>
>
> ------------------------------
>
> Message: 6
> Date: Wed, 25 Jul 2012 14:55:03 +0000
> From: "Kumar, Sudhir" <k...@chevron.com>
> Subject: Re: [OMPI users] Fortran90 Bindings
> To: Open MPI Users <us...@open-mpi.org>
> Message-ID:
>         <
> 8a9547392e2eb443894af275470df5e31a329...@hou150w8xmbx02.hou150.chevrontexaco.net
> >
>
> Content-Type: text/plain; charset="us-ascii"
>
> Hi
> I have one more related question. Are the F77 bindings available for
> both 64-bit and 32-bit Windows environments, or just for the 32-bit
> environment?
> Thanks
>
>
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Damien
> Sent: Wednesday, July 18, 2012 10:11 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] Fortran90 Bindings
>
> Hmmm.  6 months ago there weren't F90 bindings in the Windows version (the
> F90 bindings are large and tricky).  It's an option you can select when you
> compile it yourself, but looking at the one I just did a month ago, there's
> still no mpif90.exe built, so I'd say that's still not supported on
> Windows.  :-(
>
> Damien
> On 18/07/2012 9:00 AM, Kumar, Sudhir wrote:
> Hi, I had meant to ask whether Fortran90 bindings are available for Windows
>
> Sudhir Kumar
>
>
> From: users-boun...@open-mpi.org<mailto:users-boun...@open-mpi.org>
> [mailto:users-boun...@open-mpi.org] On Behalf Of Damien
> Sent: Wednesday, July 18, 2012 9:56 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] Fortran90 Bindings
>
> Yep.
> On 18/07/2012 8:53 AM, Kumar, Sudhir wrote:
> Hi
> Just wondering if Fortran90 bindings are available for Open MPI 1.6
> Thanks
>
> Sudhir Kumar
>
>
>
>
>
>
> _______________________________________________
>
> users mailing list
>
> us...@open-mpi.org<mailto:us...@open-mpi.org>
>
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
>
>
>
> _______________________________________________
>
> users mailing list
>
> us...@open-mpi.org<mailto:us...@open-mpi.org>
>
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
> ------------------------------
>
> Message: 7
> Date: Wed, 25 Jul 2012 09:51:32 -0600
> From: Damien <dam...@khubla.com>
> Subject: Re: [OMPI users] Fortran90 Bindings
> To: Open MPI Users <us...@open-mpi.org>
> Message-ID: <50101604.5030...@khubla.com>
> Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
>
> Sudhir,
>
> F77 works on both.
>
> Damien
>
>
> On 25/07/2012 8:55 AM, Kumar, Sudhir wrote:
> >
> > Hi
> >
> > I have one more related question. Is the F77 bindings available for
> > both 64bit and 32 bit windows environments or just for the 32 bit
> > environment.
> >
> > Thanks
> >
> > *From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
> > *On Behalf Of *Damien
> > *Sent:* Wednesday, July 18, 2012 10:11 AM
> > *To:* Open MPI Users
> > *Subject:* Re: [OMPI users] Fortran90 Bindings
> >
> > Hmmm.  6 months ago there weren't F90 bindings in the Windows version
> > (the F90 bindings are large and tricky).  It's an option you can
> > select when you compile it yourself, but looking at the one I just did
> > a month ago, there's still no mpif90.exe built, so I'd say that's
> > still not supported on Windows.  :-(
> >
> > Damien
> >
> > On 18/07/2012 9:00 AM, Kumar, Sudhir wrote:
> >
> >     Hi had meant to say if Fortran90 bindings for Windows
> >
> >     *Sudhir Kumar*
> >
> >     *From:*users-boun...@open-mpi.org
> >     <mailto:users-boun...@open-mpi.org>
> >     [mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien
> >     *Sent:* Wednesday, July 18, 2012 9:56 AM
> >     *To:* Open MPI Users
> >     *Subject:* Re: [OMPI users] Fortran90 Bindings
> >
> >     Yep.
> >
> >     On 18/07/2012 8:53 AM, Kumar, Sudhir wrote:
> >
> >         Hi
> >
> >         Just wondering if Fortran90 bindings are available for OpemMPI
> 1.6
> >
> >         Thanks
> >
> >         *Sudhir Kumar*
> >
> >
> >
> >
> >
> >         _______________________________________________
> >
> >         users mailing list
> >
> >         us...@open-mpi.org  <mailto:us...@open-mpi.org>
> >
> >         http://www.open-mpi.org/mailman/listinfo.cgi/users
> >
> >
> >
> >
> >     _______________________________________________
> >
> >     users mailing list
> >
> >     us...@open-mpi.org  <mailto:us...@open-mpi.org>
> >
> >     http://www.open-mpi.org/mailman/listinfo.cgi/users
> >
> >
> >
> > _______________________________________________
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
> ------------------------------
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
> End of users Digest, Vol 2306, Issue 1
> **************************************
>
