Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

test/validation/api/shmem/shmem.c
@@ -212,6 +214,55 @@ void shmem_test_basic(void)
        CU_ASSERT(0 == odp_shm_free(shm));
 }
 
+/*
+ * maximum size reservation
+ */
+static void shmem_test_max_reserve(void)
+{
+       odp_shm_capability_t capa;
+       odp_shm_t shm;
+       uint64_t size, align;
+       uint8_t *data;
+       uint64_t i;
+
+       memset(&capa, 0, sizeof(odp_shm_capability_t));
+       CU_ASSERT_FATAL(odp_shm_capability(&capa) == 0);
+
+       CU_ASSERT(capa.max_blocks > 0);
+
+       size  = capa.max_size;
+       align = capa.max_align;
+
+       /* Assuming that system has at least MAX_MEMORY_USED bytes available */
+       if (capa.max_size == 0 || capa.max_size > MAX_MEMORY_USED)
+               size = MAX_MEMORY_USED;
+
+       if (capa.max_align == 0 || capa.max_align > MEGA)
+               align = MEGA;


Comment:
Again, what does the application gain from having a non-zero `max_align` 
returned if it's unable to make use of it?

> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> The 0 value is intended to say that addressability / available memory is the 
> only limit, so again in this case the implementation should return 0, not 
> 4GB. If the implementation says 0 and the application's attempt to reserve 
> something huge fails, that's OK. The application needs to check return codes 
> in any event. But what's the point of having a non-zero limit if there's no 
> reasonable expectation that it means anything? At that point it's useless and 
> might as well be ignored.


>> Petri Savolainen(psavol) wrote:
>> Maybe with 1GB hugepages the max align is 1GB, with 16GB hugepages 16GB, 
>> and so on.


>>> Petri Savolainen(psavol) wrote:
>>> The implementation may have a limitation of e.g. 4GB due to the limit of a 
>>> 32-bit address space reservation, etc. It would be a waste to reserve 4GB 
>>> of system memory for every ODP instance, and the implementation could not 
>>> guarantee 4GB otherwise, as other applications allocate memory as well. So, 
>>> at init phase there could be 4.2GB available, but by the time the ODP 
>>> application starts calling shm_reserve() there would be less than 4GB left 
>>> and some reserves would fail.
>>> 
>>> So, the implementation may have a large upper limit which is not related 
>>> to the amount of available memory, but e.g. due to the implementation of 
>>> the address mapping (number of bits, hugepages, etc).


>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>> Same story as for `capa.max_size`. I'd expect most implementations to 
>>>> return `capa.max_align` as either 0 or some reasonable value like 4K or 
>>>> 1M. However, if they specify something else then they should be able to 
>>>> deliver it.


>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>> If the implementation doesn't have a specific predefined upper limit then 
>>>>> it should return `capa.max_size == 0`. If it says it has a non-zero upper 
>>>>> limit then if it's unable to provide that limit that's a failure. 
>>>>> Otherwise what's the point of having a specified limit?


>>>>>> Petri Savolainen(psavol) wrote:
>>>>>> I'll add a comment about the zero value. Although, I already changed 
>>>>>> the documentation to require a param_init() call and to say: don't 
>>>>>> change values that you are not going to use (init sets them to zero).


>>>>>>> Petri Savolainen(psavol) wrote:
>>>>>>> OK


>>>>>>>> Petri Savolainen(psavol) wrote:
>>>>>>>> A very large align could result in a very large allocation and thus, 
>>>>>>>> again, the system running out of memory (e.g. 1TB align => >1TB 
>>>>>>>> alloc).
>>>>>>>> 
>>>>>>>> OK. I'll change the align max to be a power of two.


>>>>>>>>> Petri Savolainen(psavol) wrote:
>>>>>>>>> The actual amount of available memory typically depends on system 
>>>>>>>>> load. The SHM implementation may not have a limit (max_size==0), or 
>>>>>>>>> the limit may be due to address space (e.g. 40-bit == 1TB). The 
>>>>>>>>> system might not always have the max amount (e.g. 1TB) available, so 
>>>>>>>>> I limit the validation test to assume that at least 100MB should 
>>>>>>>>> always be available.


>>>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>>>> @muvarov `odp_shm_capability()` already tells the application the 
>>>>>>>>>> largest contiguous size it can reserve (`max_size`) and the maximum 
>>>>>>>>>> number of reserves it can do (`max_blocks`). This is just hinting to 
>>>>>>>>>> the implementation the total size of all reserves the application 
>>>>>>>>>> will do.


>>>>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>>>>> An additional `printf()` giving a bit more detail (i.e., `i` value) 
>>>>>>>>>>> would be useful here.


>>>>>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>>>>>> Shouldn't `MEGA` be a power of 2 for alignment purposes? I.e., 
>>>>>>>>>>>> 1024 x 1024 rather than 1000 x 1000? And if the implementation 
>>>>>>>>>>>> supports an even higher `max_align` why not test that as well?


>>>>>>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>>>>>>> Why do you want to limit the size in a test named 
>>>>>>>>>>>>> `shmem_test_max_reserve()`? If `capa.max_size == 0` then you have 
>>>>>>>>>>>>> to pick a specific target, but if it's non-zero why wouldn't you 
>>>>>>>>>>>>> want to try to reserve that much to see if the limit is true?


>>>>>>>>>>>>>> muvarov wrote:
>>>>>>>>>>>>>> 0 means not specified. And what about contiguity of memory 
>>>>>>>>>>>>>> chunks? Requesting one big contiguous shared memory chunk is 
>>>>>>>>>>>>>> not a general solution.


https://github.com/Linaro/odp/pull/446#discussion_r165661756
updated_at 2018-02-02 14:41:42
