Re: [OMPI users] Increasing OpenMPI RMA win attach region count.

2019-01-10 Thread Udayanga Wickramasinghe
I actually have a use case where my library attaches many non-overlapping
VM segments on demand to a single dynamic OMPI_Win_t object. With the
current static limit, I would either have to increase it optimistically
before startup or maintain a pool of dynamic window objects. However, the
other MPI implementations I tested (Cray MPI 2.2, MVAPICH2) do not appear
to have this constraint.
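
For concreteness, the pattern looks roughly like the sketch below (NSEG and
the segment size are made up for illustration; error handling is omitted):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Win win;
    MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    enum { NSEG = 64 };           /* more than osc/rdma's default of 32 */
    const size_t seg_size = 4096;
    void *seg[NSEG];

    for (int i = 0; i < NSEG; i++) {
        seg[i] = malloc(seg_size);              /* non-overlapping region */
        MPI_Win_attach(win, seg[i], seg_size);  /* fails past the limit   */
    }

    for (int i = 0; i < NSEG; i++) {
        MPI_Win_detach(win, seg[i]);
        free(seg[i]);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

With the default limit of 32, the 33rd MPI_Win_attach above is what aborts
with MPI_ERR_RMA_ATTACH.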

Regards,
Udayanga


Re: [OMPI users] Increasing OpenMPI RMA win attach region count.

2019-01-10 Thread Gilles Gouaillardet
Jeff,

At first glance, a comment in the code suggests the rationale is to minimize 
the number of allocations and hence the time spent registering the memory.

Cheers,

Gilles


Re: [OMPI users] Increasing OpenMPI RMA win attach region count.

2019-01-09 Thread Jeff Hammond
Why is this allocated statically? I don't understand the difficulty of a
dynamically allocated, and thus unrestricted, implementation. Is there some
performance advantage to a bounded static allocation? Or is it that you
use O(n) lookups and need to keep n small to avoid exposing that cost to users?

I have usage models with thousands of attached segments, hence I need to
understand how bad this will be with Open MPI (yes, I can amortize the
overhead, but it's a pain).
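
To spell out the amortization I mean: reserve one big slab, attach it once,
and hand out segments from it. A minimal sketch (the slab size, alignment,
and bump allocator are illustrative assumptions, not a real API):

#include <mpi.h>
#include <stdlib.h>

#define SLAB_SIZE (64UL * 1024 * 1024)

static char  *slab;
static size_t slab_used;

static void slab_init(MPI_Win win)
{
    slab = malloc(SLAB_SIZE);
    slab_used = 0;
    MPI_Win_attach(win, slab, SLAB_SIZE);  /* one attach covers all segments */
}

static void *slab_alloc(size_t bytes)
{
    bytes = (bytes + 63) & ~(size_t)63;    /* keep segments 64-byte aligned */
    if (slab_used + bytes > SLAB_SIZE)
        return NULL;                       /* slab exhausted */
    void *p = slab + slab_used;
    slab_used += bytes;                    /* no further MPI_Win_attach */
    return p;
}

The pain: peers now need the slab base to compute displacements, and
reclaiming or growing segments becomes my problem rather than MPI's.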

Thanks,

Jeff


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/

Re: [OMPI users] Increasing OpenMPI RMA win attach region count.

2019-01-09 Thread Udayanga Wickramasinghe
Thanks, I think that will be very useful.

Best,
Udayanga



Re: [OMPI users] Increasing OpenMPI RMA win attach region count.

2019-01-09 Thread Jeff Squyres (jsquyres) via users
You can set this MCA var on a site-wide basis in a file:

https://www.open-mpi.org/faq/?category=tuning#setting-mca-params
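
For example, a per-user file (the path below is Open MPI's usual per-user
default; the value 128 is just an illustration):

# $HOME/.openmpi/mca-params.conf
osc_rdma_max_attach = 128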




-- 
Jeff Squyres
jsquy...@cisco.com


Re: [OMPI users] Increasing OpenMPI RMA win attach region count.

2019-01-09 Thread Udayanga Wickramasinghe
Thanks. Yes, I am aware of that; however, I currently have a requirement to
increase the default.

Best,
Udayanga


Re: [OMPI users] Increasing OpenMPI RMA win attach region count.

2019-01-09 Thread Nathan Hjelm via users
If you need to support more attachments, you can set the value of that
variable in either of two ways.

Environment variable:

OMPI_MCA_osc_rdma_max_attach

mpirun command line:

--mca osc_rdma_max_attach

Keep in mind that each attachment may consume an underlying hardware resource
that can be easy to exhaust (hence the low default limit). It is recommended
to keep the total number as small as possible.
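
For example (128 is an arbitrary value and my_rma_app a placeholder):

# via the environment
export OMPI_MCA_osc_rdma_max_attach=128
mpirun -np 4 ./my_rma_app

# or on the mpirun command line
mpirun --mca osc_rdma_max_attach 128 -np 4 ./my_rma_app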

-Nathan


Re: [OMPI users] Increasing OpenMPI RMA win attach region count.

2019-01-08 Thread Udayanga Wickramasinghe
Sorry, that should be corrected to the MPI 3.0 spec [1].

[1] https://www.mpi-forum.org/docs/mpi-3.0/mpi30-report.pdf (page 443)

Best Regards,
Udayanga

On Tue, Jan 8, 2019 at 11:36 PM Udayanga Wickramasinghe wrote:

> Hi,
> I am running into an issue in open-mpi where it crashes abruptly
> during MPI_WIN_ATTACH.
>
> [nid00307:25463] *** An error occurred in MPI_Win_attach
> [nid00307:25463] *** reported by process [140736284524545,140728898420736]
> [nid00307:25463] *** on win rdma window 3
> [nid00307:25463] *** MPI_ERR_RMA_ATTACH: Could not attach RMA segment
> [nid00307:25463] *** MPI_ERRORS_ARE_FATAL (processes in this win will now abort,
> [nid00307:25463] ***    and potentially your MPI job)
>
> Looking more into this issue, it seems like open-mpi restricts the maximum
> number of attached segments to 32. (The OpenMpi3.0 spec doesn't say a lot
> about this scenario -- "The argument win must be a window that was created
> with MPI_WIN_CREATE_DYNAMIC. Multiple (but nonoverlapping) memory regions
> may be attached to the same window.")
>
> To work around this, I have temporarily modified the variable
> mca_osc_rdma_component.max_attach. Is there any way to configure this in
> open-mpi?
>
> Thanks
> Udayanga
>