I'm allocating some large matrices, ranging from 10k x 10k up to 40k x 40k 
elements per node.
I'm also using membind to place the pages of the matrix memory across NUMA 
nodes, so that the matrix is bound according to the kind of pattern at the end 
of this email, where each 1 or 0 corresponds to a 256x256 block of memory.

The way I'm doing this is by calling hwloc_set_area_membind_nodeset many 
thousands of times after allocation. I've found that as the matrices get 
bigger, after some number of calls to area_membind the call fails and 
returns -1. errno does not seem to be set to either ENOSYS or EXDEV, but 
strerror reports "Cannot allocate memory" (i.e. ENOMEM).

Question 1: by calling area_membind so many times, am I consuming some kernel 
resource in the memory-mapping tables that is being exhausted?

Question 2: Is there a better way of achieving the result I'm looking for 
(such as a membind call with a stride of some kind, to say "put N pages in a 
row on each domain in alternation")?

Many thanks

JB


0000000000000000111111111111111100000000
0000000000000000111111111111111100000000
0000000000000000111111111111111100000000
0000000000000000111111111111111100000000
0000000000000000111111111111111100000000
0000000000000000111111111111111100000000
0000000000000000111111111111111100000000
0000000000000000111111111111111100000000
0000000000000000111111111111111100000000
0000000000000000111111111111111100000000
0000000000000000111111111111111100000000
0000000000000000111111111111111100000000
0000000000000000111111111111111100000000
0000000000000000111111111111111100000000
0000000000000000111111111111111100000000
0000000000000000111111111111111100000000
1111111111111111000000000000000011111111
1111111111111111000000000000000011111111
1111111111111111000000000000000011111111
1111111111111111000000000000000011111111
1111111111111111000000000000000011111111
1111111111111111000000000000000011111111
1111111111111111000000000000000011111111
1111111111111111000000000000000011111111
1111111111111111000000000000000011111111
1111111111111111000000000000000011111111
1111111111111111000000000000000011111111
1111111111111111000000000000000011111111
1111111111111111000000000000000011111111
1111111111111111000000000000000011111111
1111111111111111000000000000000011111111
1111111111111111000000000000000011111111
... etc


_______________________________________________
hwloc-users mailing list
hwloc-users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/hwloc-users
