I actually have a use case where my library will attach many
non-overlapping vm segments on demand to a single dynamic OMPI_Win_t
object. With the current static limit, I would either have to increase it
optimistically before startup or maintain a pool of dynamic win objects.
However, other MPI imp
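For concreteness, a minimal sketch of the pattern I mean; the names (lib_init,
lib_expose, lib_withdraw) are hypothetical, not from any real library:

    #include <mpi.h>

    /* One dynamic window owned by the library; segments come and go. */
    static MPI_Win lib_win;

    void lib_init(MPI_Comm comm)
    {
        MPI_Win_create_dynamic(MPI_INFO_NULL, comm, &lib_win);
    }

    /* Attach a caller-provided, non-overlapping buffer on demand and
     * return its address so peers can target it with MPI_Put/MPI_Get. */
    MPI_Aint lib_expose(void *buf, MPI_Aint bytes)
    {
        MPI_Aint addr;
        MPI_Win_attach(lib_win, buf, bytes);  /* counts against the limit */
        MPI_Get_address(buf, &addr);
        return addr;
    }

    void lib_withdraw(void *buf)
    {
        MPI_Win_detach(lib_win, buf);
    }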
Jeff,
At first glance, a comment in the code suggests the rationale is to minimize
the number of allocations and hence the time spent registering the memory.
Cheers,
Gilles
Jeff Hammond wrote:
Why is this allocated statically? I don't understand the difficulty of a
dynamically allocated and thus unrestricted implementation. Is there some
performance advantage to a bounded static allocation? Or is it that you
use O(n) lookups and need to keep n small to avoid exposing that to users?
I ha
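To illustrate what the question is getting at, here is a sketch of the
"dynamically allocated and thus unrestricted" bookkeeping. This is not Open
MPI's actual code, just the shape of the alternative being asked about:

    #include <stdlib.h>

    typedef struct { void *base; size_t len; } region_t;

    typedef struct {
        region_t *regions;   /* grows on demand, no fixed cap */
        size_t    count, capacity;
    } attach_table_t;

    /* Geometric growth keeps attach amortized O(1); a linear scan over an
     * unsorted table is O(n) per lookup, which is the trade-off raised above. */
    int table_attach(attach_table_t *t, void *base, size_t len)
    {
        if (t->count == t->capacity) {
            size_t cap = t->capacity ? 2 * t->capacity : 8;
            region_t *r = realloc(t->regions, cap * sizeof *r);
            if (r == NULL) return -1;
            t->regions  = r;
            t->capacity = cap;
        }
        t->regions[t->count++] = (region_t){ base, len };
        return 0;
    }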
Thanks, I think that will be very useful.
Best,
Udayanga
On Wed, Jan 9, 2019 at 1:39 PM Jeff Squyres (jsquyres) via users <
users@lists.open-mpi.org> wrote:
You can set this MCA var on a site-wide basis in a file:
https://www.open-mpi.org/faq/?category=tuning#setting-mca-params
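For example, in one of the params files described in that FAQ entry (64 is an
arbitrary value):

    # $HOME/.openmpi/mca-params.conf or <prefix>/etc/openmpi-mca-params.conf
    osc_rdma_max_attach = 64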
On Jan 9, 2019, at 1:18 PM, Udayanga Wickramasinghe wrote:
Thanks. Yes, I am aware of that; however, I currently have a requirement to
increase the default.
Best,
Udayanga
On Wed, Jan 9, 2019 at 9:10 AM Nathan Hjelm via users <
users@lists.open-mpi.org> wrote:
If you need to support more attachments, you can set the value of that
variable either via the environment:
    OMPI_MCA_osc_rdma_max_attach
or on the mpirun command line:
    --mca osc_rdma_max_attach
Keep in mind that each attachment may use an underlying hardware resource that
may be easy to exhaust (h
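Concretely, either of these should work (64 and ./my_app are placeholders):

    # via the environment
    export OMPI_MCA_osc_rdma_max_attach=64
    mpirun -n 4 ./my_app

    # or on the mpirun command line
    mpirun --mca osc_rdma_max_attach 64 -n 4 ./my_app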
Sorry, that should be corrected to the MPI 3.0 spec [1].
[1] https://www.mpi-forum.org/docs/mpi-3.0/mpi30-report.pdf, page 443
Best Regards,
Udayanga
On Tue, Jan 8, 2019 at 11:36 PM Udayanga Wickramasinghe
wrote:
Hi,
I am running into an issue in Open MPI where it crashes abruptly
during MPI_WIN_ATTACH
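A minimal reproducer sketch for the failure described here, assuming the
osc/rdma attach limit defaults to a small value (32 per window, if I recall
correctly, in the Open MPI releases of that time):

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Win win;
        void *bufs[64];

        MPI_Init(&argc, &argv);
        MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* Attaching more regions than osc_rdma_max_attach allows should
         * trigger the abort; 64 comfortably exceeds an assumed default of 32. */
        for (int i = 0; i < 64; i++) {
            bufs[i] = malloc(4096);
            MPI_Win_attach(win, bufs[i], 4096);
        }

        for (int i = 0; i < 64; i++) {
            MPI_Win_detach(win, bufs[i]);
            free(bufs[i]);
        }
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }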