Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-07 Thread Github ODP bot
Petri Savolainen(psavol) replied on github web page:

platform/linux-generic/_ishm.c
line 23
@@ -1436,15 +1436,23 @@ int _odp_ishm_cleanup_files(const char *dirpath)
return 0;
 }
 
-int _odp_ishm_init_global(void)
+int _odp_ishm_init_global(const odp_init_t *init)
 {
void *addr;
void *spce_addr;
int i;
uid_t uid;
char *hp_dir = odp_global_data.hugepage_info.default_huge_page_dir;
uint64_t align;
+   uint64_t max_memory = ODP_CONFIG_ISHM_VA_PREALLOC_SZ;
+   uint64_t internal   = ODP_CONFIG_ISHM_VA_PREALLOC_SZ / 8;


Comment:
if (init && init->shm.max_memory)
max_memory = init->shm.max_memory + internal;

line 1455 does not underflow even with an application-requested 
init->shm.max_memory value:
shm_max_size = init->shm.max_memory + internal - internal;

> muvarov wrote
> 'internal' has to be also adjusted. Or you can get overflow at line 1455.


>> Petri Savolainen(psavol) wrote:
>> If the max size is 4GB, the max align is 1GB. A reservation may consume 
>> e.g. 4.5GB of memory. Again, it depends on other applications and ODP 
>> instances whether such an amount of system memory is still free when the 
>> application tries to reserve it. Maybe the first couple of instances 
>> succeeded, but then the system ran out of memory.


>>> Petri Savolainen(psavol) wrote:
>>> In the example, system is 64 bit, has e.g. 32 GB of memory, ODP SHM 
>>> implementation is limited e.g. by address space pre-reservation etc (e.g. 
>>> 32 bits / 4GB). So, ODP implementation limits max SHM size to 4GB, 0 cannot 
>>> be used as capability. A 4 GB reservation succeeds as long as system has 
>>> memory free. When other applications or ODP instances have reserved all 
>>> 32GB memory, yet another 4GB reservation will fail.
>>> 
>>> So, 0 says that there's no other limit than amount of currently free 
>>> memory. E.g. odp-linux implementation lied by returning 0 and at the same 
>>> time limiting max reservation to 512MB (due to fixed size address space 
>>> allocation). 


 Bill Fischofer(Bill-Fischofer-Linaro) wrote:
 Again, what does the application gain from having a non-zero `max_align` 
 returned if it's unable to make use of it?


> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> The 0 value is intended to say that addressability / available memory is 
> the only limit, so again in this case the implementation should return 0, 
> not 4GB. If the implementation says 0 and the application tries to 
> reserve something huge and that fails that's OK. The application needs to 
> check RCs in any event. But what's the point of having a non-zero limit 
> if there's no reasonable expectation that it means anything? At that 
> point it's useless and might as well be ignored.


>> Petri Savolainen(psavol) wrote:
>> Maybe with 1GB hugepages the max align is 1GB, and 16GB hugepages 16GB, 
>> ...


>>> Petri Savolainen(psavol) wrote:
>>> The implementation may have a limitation of e.g. 4GB due to the limit of 
>>> using a 32-bit address space reservation, etc. It would be a waste to 
>>> reserve 4GB of system memory for every ODP instance, as the implementation 
>>> could not guarantee 4GB otherwise, since other applications allocate 
>>> memory as well. So, at init phase there could be 4.2GB available, but by 
>>> the time the ODP application starts calling shm_reserve() there would be 
>>> less than 4GB left and some reserves would fail.
>>> 
>>> So, the implementation may have a large upper limit which is not related 
>>> to the amount of available memory, but e.g. is due to the implementation 
>>> of the address mapping (number of bits, hugepages, etc).


 Bill Fischofer(Bill-Fischofer-Linaro) wrote:
 Same story as for `capa.max_size`. I'd expect most implementations to 
 return `capa.max_align` to be either 0 or some reasonable value like 
 4K or 1M. However, if they specify something else then they should be 
 able to deliver that.


> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> If the implementation doesn't have a specific predefined upper limit 
> then it should return `capa.max_size == 0`. If it says it has a 
> non-zero upper limit then if it's unable to provide that limit that's 
> a failure. Otherwise what's the point of having a specified limit?


>> Petri Savolainen(psavol) wrote:
>> I'll add a comment about the zero value. Although, I already changed the 
>> documentation to require a param_init() call and to say: don't change 
>> values that you are not going to use (init sets them to zero).


>>> Petri Savolainen(psavol) wrote:
>>> OK


 Petri Savolainen(psavol) wrote:
 A very large align could result in a very large allocation and thus 
 again the system running out of memory (e.g. 1TB align => >1TB alloc).
 
 OK. I'll change align max to be a power of two.

Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-07 Thread Github ODP bot
Petri Savolainen(psavol) replied on github web page:

platform/linux-generic/_ishm.c
line 23
@@ -1436,15 +1436,23 @@ int _odp_ishm_cleanup_files(const char *dirpath)
return 0;
 }
 
-int _odp_ishm_init_global(void)
+int _odp_ishm_init_global(const odp_init_t *init)
 {
void *addr;
void *spce_addr;
int i;
uid_t uid;
char *hp_dir = odp_global_data.hugepage_info.default_huge_page_dir;
uint64_t align;
+   uint64_t max_memory = ODP_CONFIG_ISHM_VA_PREALLOC_SZ;
+   uint64_t internal   = ODP_CONFIG_ISHM_VA_PREALLOC_SZ / 8;


Comment:
Internal is for ODP implementation internal allocations. E.g. if application 
passes 1MB as init->shm.max_memory, we need to reserve much more as ODP 
internally needs SHM for various tables, etc. Application's max_memory just 
says how much it will itself request.

> muvarov wrote
> Yes, agree. But do we need to link to ODP_CONFIG_ISHM_VA_PREALLOC_SZ if the 
> size was passed to the init function?


>> Petri Savolainen(psavol) wrote:
>> if (init && init->shm.max_memory)
>>  max_memory = init->shm.max_memory + internal;
>> 
>> line 1455 does not underflow even with an application-requested 
>> init->shm.max_memory value:
>> shm_max_size = init->shm.max_memory + internal - internal;


>>> muvarov wrote
>>> 'internal' has to be also adjusted. Or you can get overflow at line 1455.


 Petri Savolainen(psavol) wrote:
 If the max size is 4GB, the max align is 1GB. A reservation may consume 
 e.g. 4.5GB of memory. Again, it depends on other applications and ODP 
 instances whether such an amount of system memory is still free when the 
 application tries to reserve it. Maybe the first couple of instances 
 succeeded, but then the system ran out of memory.


> Petri Savolainen(psavol) wrote:
> In the example, system is 64 bit, has e.g. 32 GB of memory, ODP SHM 
> implementation is limited e.g. by address space pre-reservation etc (e.g. 
> 32 bits / 4GB). So, ODP implementation limits max SHM size to 4GB, 0 
> cannot be used as capability. A 4 GB reservation succeeds as long as 
> system has memory free. When other applications or ODP instances have 
> reserved all 32GB memory, yet another 4GB reservation will fail.
> 
> So, 0 says that there's no other limit than amount of currently free 
> memory. E.g. odp-linux implementation lied by returning 0 and at the same 
> time limiting max reservation to 512MB (due to fixed size address space 
> allocation). 


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> Again, what does the application gain from having a non-zero `max_align` 
>> returned if it's unable to make use of it?


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> The 0 value is intended to say that addressability / available memory 
>>> is the only limit, so again in this case the implementation should 
>>> return 0, not 4GB. If the implementation says 0 and the application 
>>> tries to reserve something huge and that fails that's OK. The 
>>> application needs to check RCs in any event. But what's the point of 
>>> having a non-zero limit if there's no reasonable expectation that it 
>>> means anything? At that point it's useless and might as well be ignored.


 Petri Savolainen(psavol) wrote:
 Maybe with 1GB hugepages the max align is 1GB, and 16GB hugepages 
 16GB, ...


> Petri Savolainen(psavol) wrote:
> The implementation may have a limitation of e.g. 4GB due to the limit of 
> using a 32-bit address space reservation, etc. It would be a waste to 
> reserve 4GB of system memory for every ODP instance, as the implementation 
> could not guarantee 4GB otherwise, since other applications allocate memory 
> as well. So, at init phase there could be 4.2GB available, but by the time 
> the ODP application starts calling shm_reserve() there would be less than 
> 4GB left and some reserves would fail.
> 
> So, the implementation may have a large upper limit which is not related 
> to the amount of available memory, but e.g. is due to the implementation 
> of the address mapping (number of bits, hugepages, etc).


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> Same story as for `capa.max_size`. I'd expect most implementations 
>> to return `capa.max_align` to be either 0 or some reasonable value 
>> like 4K or 1M. However, if they specify something else then they 
>> should be able to deliver that.


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> If the implementation doesn't have a specific predefined upper 
>>> limit then it should return `capa.max_size == 0`. If it says it has 
>>> a non-zero upper limit then if it's unable to provide that limit 
>>> that's a failure. Otherwise what's the point of having a specified 
>>> limit?


 Petri Savolainen(psavol) wrote:
 I'll add a comment about the zero value. Although, I already changed the 
 documentation to require a param_init() call and to say: don't change 
 values that you are not going to use (init sets them to zero).

Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-07 Thread Github ODP bot
muvarov replied on github web page:

platform/linux-generic/_ishm.c
line 23
@@ -1436,15 +1436,23 @@ int _odp_ishm_cleanup_files(const char *dirpath)
return 0;
 }
 
-int _odp_ishm_init_global(void)
+int _odp_ishm_init_global(const odp_init_t *init)
 {
void *addr;
void *spce_addr;
int i;
uid_t uid;
char *hp_dir = odp_global_data.hugepage_info.default_huge_page_dir;
uint64_t align;
+   uint64_t max_memory = ODP_CONFIG_ISHM_VA_PREALLOC_SZ;
+   uint64_t internal   = ODP_CONFIG_ISHM_VA_PREALLOC_SZ / 8;


Comment:
Yes, agree. But do we need to link to ODP_CONFIG_ISHM_VA_PREALLOC_SZ if the 
size was passed to the init function?

> Petri Savolainen(psavol) wrote:
> if (init && init->shm.max_memory)
>   max_memory = init->shm.max_memory + internal;
> 
> line 1455 does not underflow even with an application-requested 
> init->shm.max_memory value:
> shm_max_size = init->shm.max_memory + internal - internal;


>> muvarov wrote
>> 'internal' has to be also adjusted. Or you can get overflow at line 1455.


>>> Petri Savolainen(psavol) wrote:
>>> If the max size is 4GB, the max align is 1GB. A reservation may consume 
>>> e.g. 4.5GB of memory. Again, it depends on other applications and ODP 
>>> instances whether such an amount of system memory is still free when the 
>>> application tries to reserve it. Maybe the first couple of instances 
>>> succeeded, but then the system ran out of memory.


 Petri Savolainen(psavol) wrote:
 In the example, system is 64 bit, has e.g. 32 GB of memory, ODP SHM 
 implementation is limited e.g. by address space pre-reservation etc (e.g. 
 32 bits / 4GB). So, ODP implementation limits max SHM size to 4GB, 0 
 cannot be used as capability. A 4 GB reservation succeeds as long as 
 system has memory free. When other applications or ODP instances have 
 reserved all 32GB memory, yet another 4GB reservation will fail.
 
 So, 0 says that there's no other limit than amount of currently free 
 memory. E.g. odp-linux implementation lied by returning 0 and at the same 
 time limiting max reservation to 512MB (due to fixed size address space 
 allocation). 


> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> Again, what does the application gain from having a non-zero `max_align` 
> returned if it's unable to make use of it?


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> The 0 value is intended to say that addressability / available memory is 
>> the only limit, so again in this case the implementation should return 
>> 0, not 4GB. If the implementation says 0 and the application tries to 
>> reserve something huge and that fails that's OK. The application needs 
>> to check RCs in any event. But what's the point of having a non-zero 
>> limit if there's no reasonable expectation that it means anything? At 
>> that point it's useless and might as well be ignored.


>>> Petri Savolainen(psavol) wrote:
>>> Maybe with 1GB hugepages the max align is 1GB, and 16GB hugepages 16GB, 
>>> ...


 Petri Savolainen(psavol) wrote:
 The implementation may have a limitation of e.g. 4GB due to the limit of 
 using a 32-bit address space reservation, etc. It would be a waste to 
 reserve 4GB of system memory for every ODP instance, as the implementation 
 could not guarantee 4GB otherwise, since other applications allocate 
 memory as well. So, at init phase there could be 4.2GB available, but by 
 the time the ODP application starts calling shm_reserve() there would be 
 less than 4GB left and some reserves would fail.
 
 So, the implementation may have a large upper limit which is not related 
 to the amount of available memory, but e.g. is due to the implementation 
 of the address mapping (number of bits, hugepages, etc).


> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> Same story as for `capa.max_size`. I'd expect most implementations to 
> return `capa.max_align` to be either 0 or some reasonable value like 
> 4K or 1M. However, if they specify something else then they should be 
> able to deliver that.


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> If the implementation doesn't have a specific predefined upper limit 
>> then it should return `capa.max_size == 0`. If it says it has a 
>> non-zero upper limit then if it's unable to provide that limit 
>> that's a failure. Otherwise what's the point of having a specified 
>> limit?


>>> Petri Savolainen(psavol) wrote:
>>> I'll add a comment about the zero value. Although, I already changed the 
>>> documentation to require a param_init() call and to say: don't change 
>>> values that you are not going to use (init sets them to zero).


 Petri Savolainen(psavol) wrote:
 OK


> Petri Savolainen(psavol) wrote:
> A very large align could result in a very large allocation and thus again 
> the system running out of memory (e.g. 1TB align => >1TB alloc).

Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-06 Thread Github ODP bot
muvarov replied on github web page:

platform/linux-generic/_ishm.c
line 23
@@ -1436,15 +1436,23 @@ int _odp_ishm_cleanup_files(const char *dirpath)
return 0;
 }
 
-int _odp_ishm_init_global(void)
+int _odp_ishm_init_global(const odp_init_t *init)
 {
void *addr;
void *spce_addr;
int i;
uid_t uid;
char *hp_dir = odp_global_data.hugepage_info.default_huge_page_dir;
uint64_t align;
+   uint64_t max_memory = ODP_CONFIG_ISHM_VA_PREALLOC_SZ;
+   uint64_t internal   = ODP_CONFIG_ISHM_VA_PREALLOC_SZ / 8;


Comment:
'internal' has to be also adjusted. Or you can get overflow at line 1455.

> Petri Savolainen(psavol) wrote:
> If the max size is 4GB, the max align is 1GB. A reservation may consume 
> e.g. 4.5GB of memory. Again, it depends on other applications and ODP 
> instances whether such an amount of system memory is still free when the 
> application tries to reserve it. Maybe the first couple of instances 
> succeeded, but then the system ran out of memory.


>> Petri Savolainen(psavol) wrote:
>> In the example, system is 64 bit, has e.g. 32 GB of memory, ODP SHM 
>> implementation is limited e.g. by address space pre-reservation etc (e.g. 32 
>> bits / 4GB). So, ODP implementation limits max SHM size to 4GB, 0 cannot be 
>> used as capability. A 4 GB reservation succeeds as long as system has memory 
>> free. When other applications or ODP instances have reserved all 32GB 
>> memory, yet another 4GB reservation will fail.
>> 
>> So, 0 says that there's no other limit than amount of currently free memory. 
>> E.g. odp-linux implementation lied by returning 0 and at the same time 
>> limiting max reservation to 512MB (due to fixed size address space 
>> allocation). 


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> Again, what does the application gain from having a non-zero `max_align` 
>>> returned if it's unable to make use of it?


 Bill Fischofer(Bill-Fischofer-Linaro) wrote:
 The 0 value is intended to say that addressability / available memory is 
 the only limit, so again in this case the implementation should return 0, 
 not 4GB. If the implementation says 0 and the application tries to reserve 
 something huge and that fails that's OK. The application needs to check 
 RCs in any event. But what's the point of having a non-zero limit if 
 there's no reasonable expectation that it means anything? At that point 
 it's useless and might as well be ignored.


> Petri Savolainen(psavol) wrote:
> Maybe with 1GB hugepages the max align is 1GB, and 16GB hugepages 16GB, 
> ...


>> Petri Savolainen(psavol) wrote:
>> The implementation may have a limitation of e.g. 4GB due to the limit of 
>> using a 32-bit address space reservation, etc. It would be a waste to 
>> reserve 4GB of system memory for every ODP instance, as the implementation 
>> could not guarantee 4GB otherwise, since other applications allocate 
>> memory as well. So, at init phase there could be 4.2GB available, but by 
>> the time the ODP application starts calling shm_reserve() there would be 
>> less than 4GB left and some reserves would fail.
>> 
>> So, the implementation may have a large upper limit which is not related 
>> to the amount of available memory, but e.g. is due to the implementation 
>> of the address mapping (number of bits, hugepages, etc).


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> Same story as for `capa.max_size`. I'd expect most implementations to 
>>> return `capa.max_align` to be either 0 or some reasonable value like 4K 
>>> or 1M. However, if they specify something else then they should be able 
>>> to deliver that.


 Bill Fischofer(Bill-Fischofer-Linaro) wrote:
 If the implementation doesn't have a specific predefined upper limit 
 then it should return `capa.max_size == 0`. If it says it has a 
 non-zero upper limit then if it's unable to provide that limit that's 
 a failure. Otherwise what's the point of having a specified limit?


> Petri Savolainen(psavol) wrote:
> I'll add a comment about the zero value. Although, I already changed the 
> documentation to require a param_init() call and to say: don't change 
> values that you are not going to use (init sets them to zero).


>> Petri Savolainen(psavol) wrote:
>> OK


>>> Petri Savolainen(psavol) wrote:
>>> A very large align could result in a very large allocation and thus again 
>>> the system running out of memory (e.g. 1TB align => >1TB alloc).
>>> 
>>> OK. I'll change align max to be a power of two.


 Petri Savolainen(psavol) wrote:
 Since the actual amount of available memory typically depends on 
 system load, the SHM implementation may not have a limit 
 (max_size==0), or the limit may be due to address space (e.g. 40 
 bits == 1TB). The system might not always have the max amount (e.g. 
 1TB) available. I limit the validation test to assume that at least 
 100MB should always be available.

Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-05 Thread Github ODP bot
Petri Savolainen(psavol) replied on github web page:

test/validation/api/shmem/shmem.c
@@ -212,6 +214,55 @@ void shmem_test_basic(void)
CU_ASSERT(0 == odp_shm_free(shm));
 }
 
+/*
+ * maximum size reservation
+ */
+static void shmem_test_max_reserve(void)
+{
+   odp_shm_capability_t capa;
+   odp_shm_t shm;
+   uint64_t size, align;
+   uint8_t *data;
+   uint64_t i;
+
+   memset(&capa, 0, sizeof(odp_shm_capability_t));
+   CU_ASSERT_FATAL(odp_shm_capability(&capa) == 0);
+
+   CU_ASSERT(capa.max_blocks > 0);
+
+   size  = capa.max_size;
+   align = capa.max_align;
+
+   /* Assuming that system has at least MAX_MEMORY_USED bytes available */
+   if (capa.max_size == 0 || capa.max_size > MAX_MEMORY_USED)
+   size = MAX_MEMORY_USED;
+
+   if (capa.max_align == 0 || capa.max_align > MEGA)
+   align = MEGA;


Comment:
If the max size is 4GB, the max align is 1GB. A reservation may consume e.g. 
4.5GB of memory. Again, it depends on other applications and ODP instances 
whether such an amount of system memory is still free when the application 
tries to reserve it. Maybe the first couple of instances succeeded, but then 
the system ran out of memory.

> Petri Savolainen(psavol) wrote:
> In the example, system is 64 bit, has e.g. 32 GB of memory, ODP SHM 
> implementation is limited e.g. by address space pre-reservation etc (e.g. 32 
> bits / 4GB). So, ODP implementation limits max SHM size to 4GB, 0 cannot be 
> used as capability. A 4 GB reservation succeeds as long as system has memory 
> free. When other applications or ODP instances have reserved all 32GB memory, 
> yet another 4GB reservation will fail.
> 
> So, 0 says that there's no other limit than amount of currently free memory. 
> E.g. odp-linux implementation lied by returning 0 and at the same time 
> limiting max reservation to 512MB (due to fixed size address space 
> allocation). 


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> Again, what does the application gain from having a non-zero `max_align` 
>> returned if it's unable to make use of it?


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> The 0 value is intended to say that addressability / available memory is 
>>> the only limit, so again in this case the implementation should return 0, 
>>> not 4GB. If the implementation says 0 and the application tries to reserve 
>>> something huge and that fails that's OK. The application needs to check RCs 
>>> in any event. But what's the point of having a non-zero limit if there's no 
>>> reasonable expectation that it means anything? At that point it's useless 
>>> and might as well be ignored.


 Petri Savolainen(psavol) wrote:
 Maybe with 1GB hugepages the max align is 1GB, and 16GB hugepages 16GB, ...


> Petri Savolainen(psavol) wrote:
> The implementation may have a limitation of e.g. 4GB due to the limit of 
> using a 32-bit address space reservation, etc. It would be a waste to 
> reserve 4GB of system memory for every ODP instance, as the implementation 
> could not guarantee 4GB otherwise, since other applications allocate 
> memory as well. So, at init phase there could be 4.2GB available, but by 
> the time the ODP application starts calling shm_reserve() there would be 
> less than 4GB left and some reserves would fail.
> 
> So, the implementation may have a large upper limit which is not related 
> to the amount of available memory, but e.g. is due to the implementation 
> of the address mapping (number of bits, hugepages, etc).


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> Same story as for `capa.max_size`. I'd expect most implementations to 
>> return `capa.max_align` to be either 0 or some reasonable value like 4K 
>> or 1M. However, if they specify something else then they should be able 
>> to deliver that.


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> If the implementation doesn't have a specific predefined upper limit 
>>> then it should return `capa.max_size == 0`. If it says it has a 
>>> non-zero upper limit then if it's unable to provide that limit that's a 
>>> failure. Otherwise what's the point of having a specified limit?


 Petri Savolainen(psavol) wrote:
 I'll add a comment about the zero value. Although, I already changed the 
 documentation to require a param_init() call and to say: don't change 
 values that you are not going to use (init sets them to zero).


> Petri Savolainen(psavol) wrote:
> OK


>> Petri Savolainen(psavol) wrote:
>> A very large align could result in a very large allocation and thus again 
>> the system running out of memory (e.g. 1TB align => >1TB alloc).
>> 
>> OK. I'll change align max to be a power of two.


>>> Petri Savolainen(psavol) wrote:
>>> Since the actual amount of available memory typically depends on system 
>>> load, the SHM implementation may not have a limit (max_size==0), or the 
>>> limit may be due to address space (e.g. 40 bits == 1TB). The system might 
>>> not always have the max amount (e.g. 1TB) available. I limit the 
>>> validation test to assume that at least 100MB should always be available.

Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-05 Thread Github ODP bot
Petri Savolainen(psavol) replied on github web page:

test/validation/api/shmem/shmem.c
@@ -212,6 +214,55 @@ void shmem_test_basic(void)
CU_ASSERT(0 == odp_shm_free(shm));
 }
 
+/*
+ * maximum size reservation
+ */
+static void shmem_test_max_reserve(void)
+{
+   odp_shm_capability_t capa;
+   odp_shm_t shm;
+   uint64_t size, align;
+   uint8_t *data;
+   uint64_t i;
+
+   memset(&capa, 0, sizeof(odp_shm_capability_t));
+   CU_ASSERT_FATAL(odp_shm_capability(&capa) == 0);
+
+   CU_ASSERT(capa.max_blocks > 0);
+
+   size  = capa.max_size;
+   align = capa.max_align;
+
+   /* Assuming that system has at least MAX_MEMORY_USED bytes available */
+   if (capa.max_size == 0 || capa.max_size > MAX_MEMORY_USED)
+   size = MAX_MEMORY_USED;


Comment:
In the example, system is 64 bit, has e.g. 32 GB of memory, ODP SHM 
implementation is limited e.g. by address space pre-reservation etc (e.g. 32 
bits / 4GB). So, ODP implementation limits max SHM size to 4GB, 0 cannot be 
used as capability. A 4 GB reservation succeeds as long as system has memory 
free. When other applications or ODP instances have reserved all 32GB memory, 
yet another 4GB reservation will fail.

So, 0 says that there's no other limit than amount of currently free memory. 
E.g. odp-linux implementation lied by returning 0 and at the same time limiting 
max reservation to 512MB (due to fixed size address space allocation). 

> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> Again, what does the application gain from having a non-zero `max_align` 
> returned if it's unable to make use of it?


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> The 0 value is intended to say that addressability / available memory is the 
>> only limit, so again in this case the implementation should return 0, not 
>> 4GB. If the implementation says 0 and the application tries to reserve 
>> something huge and that fails that's OK. The application needs to check RCs 
>> in any event. But what's the point of having a non-zero limit if there's no 
>> reasonable expectation that it means anything? At that point it's useless 
>> and might as well be ignored.


>>> Petri Savolainen(psavol) wrote:
>>> Maybe with 1GB hugepages the max align is 1GB, and 16GB hugepages 16GB, ...


 Petri Savolainen(psavol) wrote:
 The implementation may have a limitation of e.g. 4GB due to the limit of 
 using a 32-bit address space reservation, etc. It would be a waste to 
 reserve 4GB of system memory for every ODP instance, as the implementation 
 could not guarantee 4GB otherwise, since other applications allocate 
 memory as well. So, at init phase there could be 4.2GB available, but by 
 the time the ODP application starts calling shm_reserve() there would be 
 less than 4GB left and some reserves would fail.
 
 So, the implementation may have a large upper limit which is not related 
 to the amount of available memory, but e.g. is due to the implementation 
 of the address mapping (number of bits, hugepages, etc).


> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> Same story as for `capa.max_size`. I'd expect most implementations to 
> return `capa.max_align` to be either 0 or some reasonable value like 4K 
> or 1M. However, if they specify something else then they should be able 
> to deliver that.


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> If the implementation doesn't have a specific predefined upper limit 
>> then it should return `capa.max_size == 0`. If it says it has a non-zero 
>> upper limit then if it's unable to provide that limit that's a failure. 
>> Otherwise what's the point of having a specified limit?


>>> Petri Savolainen(psavol) wrote:
>>> I'll add a comment about the zero value. Although, I already changed the 
>>> documentation to require a param_init() call and to say: don't change 
>>> values that you are not going to use (init sets them to zero).


 Petri Savolainen(psavol) wrote:
 OK


> Petri Savolainen(psavol) wrote:
> A very large align could result in a very large allocation and thus again 
> the system running out of memory (e.g. 1TB align => >1TB alloc).
> 
> OK. I'll change align max to be a power of two.


>> Petri Savolainen(psavol) wrote:
>> Since the actual amount of available memory typically depends on system 
>> load, the SHM implementation may not have a limit (max_size==0), or the 
>> limit may be due to address space (e.g. 40 bits == 1TB). The system 
>> might not always have the max amount (e.g. 1TB) available. I limit the 
>> validation test to assume that at least 100MB should always be 
>> available.


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> @muvarov `odp_shm_capability()` already tells the application the 
>>> largest contiguous size it can reserve (`max_size`) and the maximum 
>>> number of reserves it can do (`max_blocks`). This is just hinting to the 
>>> implementation the total size of all reserves the application will do.

Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-02 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

test/validation/api/shmem/shmem.c
@@ -212,6 +214,55 @@ void shmem_test_basic(void)
CU_ASSERT(0 == odp_shm_free(shm));
 }
 
+/*
+ * maximum size reservation
+ */
+static void shmem_test_max_reserve(void)
+{
+   odp_shm_capability_t capa;
+   odp_shm_t shm;
+   uint64_t size, align;
+   uint8_t *data;
+   uint64_t i;
+
+   memset(&capa, 0, sizeof(odp_shm_capability_t));
+   CU_ASSERT_FATAL(odp_shm_capability(&capa) == 0);
+
+   CU_ASSERT(capa.max_blocks > 0);
+
+   size  = capa.max_size;
+   align = capa.max_align;
+
+   /* Assuming that system has at least MAX_MEMORY_USED bytes available */
+   if (capa.max_size == 0 || capa.max_size > MAX_MEMORY_USED)
+   size = MAX_MEMORY_USED;
+
+   if (capa.max_align == 0 || capa.max_align > MEGA)
+   align = MEGA;


Comment:
Again, what does the application gain from having a non-zero `max_align` 
returned if it's unable to make use of it?

> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> The 0 value is intended to say that addressability / available memory is the 
> only limit, so again in this case the implementation should return 0, not 
> 4GB. If the implementation says 0 and the application tries to reserve 
> something huge and that fails that's OK. The application needs to check RCs 
> in any event. But what's the point of having a non-zero limit if there's no 
> reasonable expectation that it means anything? At that point it's useless and 
> might as well be ignored.


>> Petri Savolainen(psavol) wrote:
>> Maybe with 1GB hugepages the max align is 1GB, and 16GB hugepages 16GB, ...


>>> Petri Savolainen(psavol) wrote:
>>> The implementation may have a limitation of e.g. 4GB due to the limit of 
>>> using a 32-bit address space reservation, etc. It would be a waste to 
>>> reserve 4GB of system memory for every ODP instance, as the 
>>> implementation could not guarantee 4GB otherwise, since other 
>>> applications allocate memory as well. So, at init phase there could be 
>>> 4.2GB available, but by the time the ODP application starts calling 
>>> shm_reserve() there would be less than 4GB left and some reserves would 
>>> fail.
>>> 
>>> So, the implementation may have a large upper limit which is not related 
>>> to the amount of available memory, but e.g. is due to the implementation 
>>> of the address mapping (number of bits, hugepages, etc).


 Bill Fischofer(Bill-Fischofer-Linaro) wrote:
 Same story as for `capa.max_size`. I'd expect most implementations to 
 return `capa.max_align` to be either 0 or some reasonable value like 4K or 
 1M. However, if they specify something else then they should be able to 
 deliver that.


> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> If the implementation doesn't have a specific predefined upper limit then 
> it should return `capa.max_size == 0`. If it says it has a non-zero upper 
> limit then if it's unable to provide that limit that's a failure. 
> Otherwise what's the point of having a specified limit?


>> Petri Savolainen(psavol) wrote:
>> I'll add a comment about the zero value. Although, I already changed the 
>> documentation to require a param_init() call and to say: don't change 
>> values that you are not going to use (init sets them to zero).


>>> Petri Savolainen(psavol) wrote:
>>> OK


 Petri Savolainen(psavol) wrote:
 A very large align could result in a very large allocation and thus 
 again the system running out of memory (e.g. 1TB align => >1TB alloc).
 
 OK. I'll change align max to be a power of two.


> Petri Savolainen(psavol) wrote:
> Since actual amount of available memory typically depends on system 
> load. SHM implementation may not have a limit (max_size==0), or limit 
> may be due to address space (e.g. 40bit == 1TB). System might not 
> have always the max amount (e.g. 1TB) available. I limit validation 
> test to assume that at least 100MB should be always available.


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> @muvarov `odp_shm_capability()` already tells the application the 
>> largest contiguous size it can reserve (`max_size`) and the maximum 
>> number of reserves it can do (`max_blocks`). This is just hinting to 
>> the implementation the total size of all reserves the application 
>> will do.


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> An additional `printf()` giving a bit more detail (i.e., `i` value) 
>>> would be useful here.


 Bill Fischofer(Bill-Fischofer-Linaro) wrote:
 Shouldn't `MEGA` be a power of 2 for alignment purposes? I.e., 
 1024 x 1024 rather than 1000 x 1000? And if the implementation 
 supports an even higher `max_align` why not test that as well?


Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-02 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

test/validation/api/shmem/shmem.c
@@ -212,6 +214,55 @@ void shmem_test_basic(void)
CU_ASSERT(0 == odp_shm_free(shm));
 }
 
+/*
+ * maximum size reservation
+ */
+static void shmem_test_max_reserve(void)
+{
+   odp_shm_capability_t capa;
+   odp_shm_t shm;
+   uint64_t size, align;
+   uint8_t *data;
+   uint64_t i;
+
+   memset(&capa, 0, sizeof(odp_shm_capability_t));
+   CU_ASSERT_FATAL(odp_shm_capability(&capa) == 0);
+
+   CU_ASSERT(capa.max_blocks > 0);
+
+   size  = capa.max_size;
+   align = capa.max_align;
+
+   /* Assuming that system has at least MAX_MEMORY_USED bytes available */
+   if (capa.max_size == 0 || capa.max_size > MAX_MEMORY_USED)
+   size = MAX_MEMORY_USED;


Comment:
The 0 value is intended to say that addressability / available memory is the 
only limit, so again in this case the implementation should return 0, not 4GB. 
If the implementation says 0 and the application tries to reserve something 
huge and that fails, that's OK; the application needs to check return codes in 
any event. But what's the point of having a non-zero limit if there's no 
reasonable expectation that it means anything? At that point it's useless and 
might as well be ignored.
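The distinction can be captured in a small helper. This is a hypothetical sketch (not part of the ODP API; `usable_reserve_size` and `app_budget` are made-up names) of how an application might interpret `capa.max_size` under the semantics argued above: 0 means the application falls back to its own budget, while a non-zero value is an honest hard cap.

```c
#include <stdint.h>

/* Hypothetical helper: interpret a reported max_size capability.
 * 0 means "limited only by addressability / free memory", so the
 * caller falls back to its own budget; a non-zero value is a cap. */
static uint64_t usable_reserve_size(uint64_t capa_max_size, uint64_t app_budget)
{
	if (capa_max_size == 0)
		return app_budget;              /* no predefined limit */
	return capa_max_size < app_budget ?     /* honor the reported cap */
	       capa_max_size : app_budget;
}
```

The reserve call itself can still fail at runtime, of course; the helper only decides what is worth asking for.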

> Petri Savolainen(psavol) wrote:
> Maybe with 1GB hugepages the max align is 1GB, and 16GB hugepages 16GB, ...


>> Petri Savolainen(psavol) wrote:
>> Implementation may have a limitation of e.g. 4GB due to limit of using a 32 
>> bit address space reservation, etc. It would be waste to reserve 4GB of 
>> system memory for every ODP instance, as  implementation could not guarantee 
>> 4GB otherwise, as other applications allocate memory as well. So, init  
>> phase there could be 4.2GB available, but by the time ODP application starts 
>> calling shm_reserve() there would be less than 4GB left and some reserves 
>> would fail.
>> 
>> So, implementation may have large upper limit, which is not related to 
>> amount of available memory but e.g. due implementation of the address 
>> mapping (number of bits, hugepages, etc).


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> Same story as for `capa.max_size`. I'd expect most implementations to 
>>> return `capa.max_align` to be either 0 or some reasonable value like 4K or 
>>> 1M. However, if they specify something else then they should be able to 
>>> deliver that.


 Bill Fischofer(Bill-Fischofer-Linaro) wrote:
 If the implementation doesn't have a specific predefined upper limit then 
 it should return `capa.max_size == 0`. If it says it has a non-zero upper 
 limit then if it's unable to provide that limit that's a failure. 
 Otherwise what's the point of having a specified limit?


> Petri Savolainen(psavol) wrote:
> I'll add comment about zero value. Although, I already changed 
> documentation to require param_init() call and say that don't change 
> values that you are not going to use (init sets it to zero).


>> Petri Savolainen(psavol) wrote:
>> OK


>>> Petri Savolainen(psavol) wrote:
>>> Very large align could result very large allocation and thus again 
>>> system run out of memory (e.g. 1TB align => >1TB alloc).
>>> 
>>> OK. I'll change align max to be a power of two. 


 Petri Savolainen(psavol) wrote:
 Since actual amount of available memory typically depends on system 
 load. SHM implementation may not have a limit (max_size==0), or limit 
 may be due to address space (e.g. 40bit == 1TB). System might not have 
 always the max amount (e.g. 1TB) available. I limit validation test to 
 assume that at least 100MB should be always available.


> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> @muvarov `odp_shm_capability()` already tells the application the 
> largest contiguous size it can reserve (`max_size`) and the maximum 
> number of reserves it can do (`max_blocks`). This is just hinting to 
> the implementation the total size of all reserves the application 
> will do.


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> An additional `printf()` giving a bit more detail (i.e., `i` value) 
>> would be useful here.


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> Shouldn't `MEGA` be a power of 2 for alignment purposes? I.e., 1024 
>>> x 1024 rather than 1000 x 1000? And if the implementation supports 
>>> an even higher `max_align` why not test that as well?


 Bill Fischofer(Bill-Fischofer-Linaro) wrote:
 Why do you want to limit the size in a test that named 
 `shmem_test_max_reserve()`? If `capa.max_size == 0` then you have 
 to pick a specific target, but if it's non-zero why wouldn't you 
 want to try to reserve that much to see if the limit is true?



Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-02 Thread Github ODP bot
Petri Savolainen(psavol) replied on github web page:

test/validation/api/shmem/shmem.c
@@ -212,6 +214,55 @@ void shmem_test_basic(void)
CU_ASSERT(0 == odp_shm_free(shm));
 }
 
+/*
+ * maximum size reservation
+ */
+static void shmem_test_max_reserve(void)
+{
+   odp_shm_capability_t capa;
+   odp_shm_t shm;
+   uint64_t size, align;
+   uint8_t *data;
+   uint64_t i;
+
+   memset(&capa, 0, sizeof(odp_shm_capability_t));
+   CU_ASSERT_FATAL(odp_shm_capability(&capa) == 0);
+
+   CU_ASSERT(capa.max_blocks > 0);
+
+   size  = capa.max_size;
+   align = capa.max_align;
+
+   /* Assuming that system has at least MAX_MEMORY_USED bytes available */
+   if (capa.max_size == 0 || capa.max_size > MAX_MEMORY_USED)
+   size = MAX_MEMORY_USED;
+
+   if (capa.max_align == 0 || capa.max_align > MEGA)
+   align = MEGA;


Comment:
Maybe with 1GB hugepages the max align is 1GB, with 16GB hugepages 16GB, and so on.

> Petri Savolainen(psavol) wrote:
> Implementation may have a limitation of e.g. 4GB due to limit of using a 32 
> bit address space reservation, etc. It would be waste to reserve 4GB of 
> system memory for every ODP instance, as  implementation could not guarantee 
> 4GB otherwise, as other applications allocate memory as well. So, init  phase 
> there could be 4.2GB available, but by the time ODP application starts 
> calling shm_reserve() there would be less than 4GB left and some reserves 
> would fail.
> 
> So, implementation may have large upper limit, which is not related to amount 
> of available memory but e.g. due implementation of the address mapping 
> (number of bits, hugepages, etc).


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> Same story as for `capa.max_size`. I'd expect most implementations to return 
>> `capa.max_align` to be either 0 or some reasonable value like 4K or 1M. 
>> However, if they specify something else then they should be able to deliver 
>> that.


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> If the implementation doesn't have a specific predefined upper limit then 
>>> it should return `capa.max_size == 0`. If it says it has a non-zero upper 
>>> limit then if it's unable to provide that limit that's a failure. Otherwise 
>>> what's the point of having a specified limit?


 Petri Savolainen(psavol) wrote:
 I'll add comment about zero value. Although, I already changed 
 documentation to require param_init() call and say that don't change 
 values that you are not going to use (init sets it to zero).


> Petri Savolainen(psavol) wrote:
> OK


>> Petri Savolainen(psavol) wrote:
>> Very large align could result very large allocation and thus again 
>> system run out of memory (e.g. 1TB align => >1TB alloc).
>> 
>> OK. I'll change align max to be a power of two. 


>>> Petri Savolainen(psavol) wrote:
>>> Since actual amount of available memory typically depends on system 
>>> load. SHM implementation may not have a limit (max_size==0), or limit 
>>> may be due to address space (e.g. 40bit == 1TB). System might not have 
>>> always the max amount (e.g. 1TB) available. I limit validation test to 
>>> assume that at least 100MB should be always available.


 Bill Fischofer(Bill-Fischofer-Linaro) wrote:
 @muvarov `odp_shm_capability()` already tells the application the 
 largest contiguous size it can reserve (`max_size`) and the maximum 
 number of reserves it can do (`max_blocks`). This is just hinting to 
 the implementation the total size of all reserves the application will 
 do.


> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> An additional `printf()` giving a bit more detail (i.e., `i` value) 
> would be useful here.


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> Shouldn't `MEGA` be a power of 2 for alignment purposes? I.e., 1024 
>> x 1024 rather than 1000 x 1000? And if the implementation supports 
>> an even higher `max_align` why not test that as well?


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> Why do you want to limit the size in a test that named 
>>> `shmem_test_max_reserve()`? If `capa.max_size == 0` then you have 
>>> to pick a specific target, but if it's non-zero why wouldn't you 
>>> want to try to reserve that much to see if the limit is true?


 muvarov wrote
 0 means not specified. And what about contiguity of memory chunks? Requesting one big contiguous shared memory chunk is not a general solution.


https://github.com/Linaro/odp/pull/446#discussion_r165648771
updated_at 2018-02-02 13:50:47


Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-02 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

test/validation/api/shmem/shmem.c
@@ -212,6 +214,55 @@ void shmem_test_basic(void)
CU_ASSERT(0 == odp_shm_free(shm));
 }
 
+/*
+ * maximum size reservation
+ */
+static void shmem_test_max_reserve(void)
+{
+   odp_shm_capability_t capa;
+   odp_shm_t shm;
+   uint64_t size, align;
+   uint8_t *data;
+   uint64_t i;
+
+   memset(&capa, 0, sizeof(odp_shm_capability_t));
+   CU_ASSERT_FATAL(odp_shm_capability(&capa) == 0);
+
+   CU_ASSERT(capa.max_blocks > 0);
+
+   size  = capa.max_size;
+   align = capa.max_align;
+
+   /* Assuming that system has at least MAX_MEMORY_USED bytes available */
+   if (capa.max_size == 0 || capa.max_size > MAX_MEMORY_USED)
+   size = MAX_MEMORY_USED;
+
+   if (capa.max_align == 0 || capa.max_align > MEGA)
+   align = MEGA;


Comment:
Same story as for `capa.max_size`. I'd expect most implementations to return 
`capa.max_align` to be either 0 or some reasonable value like 4K or 1M. 
However, if they specify something else then they should be able to deliver 
that.

> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> If the implementation doesn't have a specific predefined upper limit then it 
> should return `capa.max_size == 0`. If it says it has a non-zero upper limit 
> then if it's unable to provide that limit that's a failure. Otherwise what's 
> the point of having a specified limit?


>> Petri Savolainen(psavol) wrote:
>> I'll add comment about zero value. Although, I already changed documentation 
>> to require param_init() call and say that don't change values that you are 
>> not going to use (init sets it to zero).


>>> Petri Savolainen(psavol) wrote:
>>> OK


 Petri Savolainen(psavol) wrote:
 Very large align could result very large allocation and thus again system 
 run out of memory (e.g. 1TB align => >1TB alloc).
 
 OK. I'll change align max to be a power of two. 


> Petri Savolainen(psavol) wrote:
> Since actual amount of available memory typically depends on system load. 
> SHM implementation may not have a limit (max_size==0), or limit may be 
> due to address space (e.g. 40bit == 1TB). System might not have always 
> the max amount (e.g. 1TB) available. I limit validation test to assume 
> that at least 100MB should be always available.


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> @muvarov `odp_shm_capability()` already tells the application the 
>> largest contiguous size it can reserve (`max_size`) and the maximum 
>> number of reserves it can do (`max_blocks`). This is just hinting to the 
>> implementation the total size of all reserves the application will do.


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> An additional `printf()` giving a bit more detail (i.e., `i` value) 
>>> would be useful here.


 Bill Fischofer(Bill-Fischofer-Linaro) wrote:
 Shouldn't `MEGA` be a power of 2 for alignment purposes? I.e., 1024 x 
 1024 rather than 1000 x 1000? And if the implementation supports an 
 even higher `max_align` why not test that as well?


> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> Why do you want to limit the size in a test that named 
> `shmem_test_max_reserve()`? If `capa.max_size == 0` then you have to 
> pick a specific target, but if it's non-zero why wouldn't you want to 
> try to reserve that much to see if the limit is true?


>> muvarov wrote
>> 0 means not specified. And what about contiguity of memory chunks? 
>> Requesting one big contiguous shared memory chunk is not a general 
>> solution.


https://github.com/Linaro/odp/pull/446#discussion_r165635663
updated_at 2018-02-02 12:46:33


Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-02 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

test/validation/api/shmem/shmem.c
@@ -212,6 +214,55 @@ void shmem_test_basic(void)
CU_ASSERT(0 == odp_shm_free(shm));
 }
 
+/*
+ * maximum size reservation
+ */
+static void shmem_test_max_reserve(void)
+{
+   odp_shm_capability_t capa;
+   odp_shm_t shm;
+   uint64_t size, align;
+   uint8_t *data;
+   uint64_t i;
+
+   memset(&capa, 0, sizeof(odp_shm_capability_t));
+   CU_ASSERT_FATAL(odp_shm_capability(&capa) == 0);
+
+   CU_ASSERT(capa.max_blocks > 0);
+
+   size  = capa.max_size;
+   align = capa.max_align;
+
+   /* Assuming that system has at least MAX_MEMORY_USED bytes available */
+   if (capa.max_size == 0 || capa.max_size > MAX_MEMORY_USED)
+   size = MAX_MEMORY_USED;


Comment:
If the implementation doesn't have a specific predefined upper limit then it 
should return `capa.max_size == 0`. If it says it has a non-zero upper limit 
then if it's unable to provide that limit that's a failure. Otherwise what's 
the point of having a specified limit?

> Petri Savolainen(psavol) wrote:
> I'll add comment about zero value. Although, I already changed documentation 
> to require param_init() call and say that don't change values that you are 
> not going to use (init sets it to zero).


>> Petri Savolainen(psavol) wrote:
>> OK


>>> Petri Savolainen(psavol) wrote:
>>> Very large align could result very large allocation and thus again system 
>>> run out of memory (e.g. 1TB align => >1TB alloc).
>>> 
>>> OK. I'll change align max to be a power of two. 


 Petri Savolainen(psavol) wrote:
 Since actual amount of available memory typically depends on system load. 
 SHM implementation may not have a limit (max_size==0), or limit may be due 
 to address space (e.g. 40bit == 1TB). System might not have always the max 
 amount (e.g. 1TB) available. I limit validation test to assume that at 
 least 100MB should be always available.


> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> @muvarov `odp_shm_capability()` already tells the application the largest 
> contiguous size it can reserve (`max_size`) and the maximum number of 
> reserves it can do (`max_blocks`). This is just hinting to the 
> implementation the total size of all reserves the application will do.


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> An additional `printf()` giving a bit more detail (i.e., `i` value) 
>> would be useful here.


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> Shouldn't `MEGA` be a power of 2 for alignment purposes? I.e., 1024 x 
>>> 1024 rather than 1000 x 1000? And if the implementation supports an 
>>> even higher `max_align` why not test that as well?


 Bill Fischofer(Bill-Fischofer-Linaro) wrote:
 Why do you want to limit the size in a test that named 
 `shmem_test_max_reserve()`? If `capa.max_size == 0` then you have to 
 pick a specific target, but if it's non-zero why wouldn't you want to 
 try to reserve that much to see if the limit is true?


> muvarov wrote
> 0 means not specified. And what about contiguity of memory chunks? 
> Requesting one big contiguous shared memory chunk is not a general 
> solution.


https://github.com/Linaro/odp/pull/446#discussion_r165635061
updated_at 2018-02-02 12:43:25


Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-02 Thread Github ODP bot
Petri Savolainen(psavol) replied on github web page:

test/validation/api/shmem/shmem.c
@@ -212,6 +214,55 @@ void shmem_test_basic(void)
CU_ASSERT(0 == odp_shm_free(shm));
 }
 
+/*
+ * maximum size reservation
+ */
+static void shmem_test_max_reserve(void)
+{
+   odp_shm_capability_t capa;
+   odp_shm_t shm;
+   uint64_t size, align;
+   uint8_t *data;
+   uint64_t i;
+
+   memset(&capa, 0, sizeof(odp_shm_capability_t));
+   CU_ASSERT_FATAL(odp_shm_capability(&capa) == 0);
+
+   CU_ASSERT(capa.max_blocks > 0);
+
+   size  = capa.max_size;
+   align = capa.max_align;
+
+   /* Assuming that system has at least MAX_MEMORY_USED bytes available */
+   if (capa.max_size == 0 || capa.max_size > MAX_MEMORY_USED)
+   size = MAX_MEMORY_USED;


Comment:
An implementation may have a limitation of e.g. 4GB due to, say, using a 32-bit 
address space reservation. It would be wasteful to reserve 4GB of system 
memory for every ODP instance, yet the implementation could not guarantee 4GB 
otherwise, since other applications allocate memory as well. So, at init phase 
there could be 4.2GB available, but by the time the ODP application starts 
calling shm_reserve() there could be less than 4GB left and some reserves would 
fail.

So, an implementation may have a large upper limit that is not related to the 
amount of available memory but comes e.g. from the implementation of the 
address mapping (number of bits, hugepages, etc).
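The kind of mapping-derived limit described here can be illustrated with a trivial calculation. This is illustrative only; the bit widths are the examples used in the discussion (32 bits == 4GB, 40 bits == 1TB), not real odp-linux values, and `addr_space_limit` is a made-up name.

```c
#include <stdint.h>

/* Upper limit implied by the width of an implementation's address
 * mapping, independent of how much memory is actually free:
 * 32 bits caps a reservation at 4GB, 40 bits at 1TB. */
static uint64_t addr_space_limit(unsigned int bits)
{
	if (bits >= 64)
		return UINT64_MAX; /* avoid undefined 64-bit shift */
	return UINT64_C(1) << bits;
}
```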

> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> Same story as for `capa.max_size`. I'd expect most implementations to return 
> `capa.max_align` to be either 0 or some reasonable value like 4K or 1M. 
> However, if they specify something else then they should be able to deliver 
> that.


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> If the implementation doesn't have a specific predefined upper limit then it 
>> should return `capa.max_size == 0`. If it says it has a non-zero upper limit 
>> then if it's unable to provide that limit that's a failure. Otherwise what's 
>> the point of having a specified limit?


>>> Petri Savolainen(psavol) wrote:
>>> I'll add comment about zero value. Although, I already changed 
>>> documentation to require param_init() call and say that don't change values 
>>> that you are not going to use (init sets it to zero).


 Petri Savolainen(psavol) wrote:
 OK


> Petri Savolainen(psavol) wrote:
> Very large align could result very large allocation and thus again system 
> run out of memory (e.g. 1TB align => >1TB alloc).
> 
> OK. I'll change align max to be a power of two. 


>> Petri Savolainen(psavol) wrote:
>> Since actual amount of available memory typically depends on system 
>> load. SHM implementation may not have a limit (max_size==0), or limit 
>> may be due to address space (e.g. 40bit == 1TB). System might not have 
>> always the max amount (e.g. 1TB) available. I limit validation test to 
>> assume that at least 100MB should be always available.


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> @muvarov `odp_shm_capability()` already tells the application the 
>>> largest contiguous size it can reserve (`max_size`) and the maximum 
>>> number of reserves it can do (`max_blocks`). This is just hinting to 
>>> the implementation the total size of all reserves the application will 
>>> do.


 Bill Fischofer(Bill-Fischofer-Linaro) wrote:
 An additional `printf()` giving a bit more detail (i.e., `i` value) 
 would be useful here.


> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> Shouldn't `MEGA` be a power of 2 for alignment purposes? I.e., 1024 x 
> 1024 rather than 1000 x 1000? And if the implementation supports an 
> even higher `max_align` why not test that as well?


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> Why do you want to limit the size in a test that named 
>> `shmem_test_max_reserve()`? If `capa.max_size == 0` then you have to 
>> pick a specific target, but if it's non-zero why wouldn't you want 
>> to try to reserve that much to see if the limit is true?


>>> muvarov wrote
>>> 0 means not specified. And what about contiguity of memory chunks? 
>>> Requesting one big contiguous shared memory chunk is not a general 
>>> solution.


https://github.com/Linaro/odp/pull/446#discussion_r165648491
updated_at 2018-02-02 13:49:37


Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-02 Thread Github ODP bot
Petri Savolainen(psavol) replied on github web page:

include/odp/api/spec/init.h
@@ -150,17 +157,29 @@ typedef struct odp_init_t {
worker and control masks do not overlap.
 */
const odp_cpumask_t *control_cpus;
+
/** Replacement for the default log fn */
odp_log_func_t log_fn;
+
/** Replacement for the default abort fn */
odp_abort_func_t abort_fn;
+
/** Unused features. These are hints to the ODP implementation that
 * the application will not use any APIs associated with these
 * features. Implementations may use this information to provide
 * optimized behavior. Results are undefined if applications assert
 * that a feature will not be used and it is used anyway.
 */
odp_feature_t not_used;
+
+   /** Shared memory parameters */
+   struct {
+   /** Maximum memory usage in bytes. This is the maximum
+*  amount of shared memory that application will reserve
+*  concurrently. */
+   uint64_t max_memory;


Comment:
I'll add a comment about the zero value. Although, I already changed the 
documentation to require a param_init() call, and to say: don't change values 
that you are not going to use (init sets them to zero).
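The convention described here — param_init() zeroes the whole struct, and zero means "no hint" — can be sketched with simplified stand-in types. These are hypothetical names (`init_params_t`, `params_init`), not the real `odp_init_t` / ODP API, shown only to make the zero-default contract concrete.

```c
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for an init-parameter struct with an SHM hint. */
typedef struct {
	struct {
		uint64_t max_memory; /* 0 = application gives no hint */
	} shm;
} init_params_t;

/* The documented convention: init zeroes everything, so untouched
 * fields automatically mean "not used / no hint". */
static void params_init(init_params_t *p)
{
	memset(p, 0, sizeof(*p));
}

/* Helper used below to demonstrate the default. */
static uint64_t default_shm_hint(void)
{
	init_params_t p;

	params_init(&p);
	return p.shm.max_memory;
}
```

An application would then set only `p.shm.max_memory` (if it uses the hint) and leave every other field at its zeroed default.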

> Petri Savolainen(psavol) wrote:
> OK


>> Petri Savolainen(psavol) wrote:
>> Very large align could result very large allocation and thus again system 
>> run out of memory (e.g. 1TB align => >1TB alloc).
>> 
>> OK. I'll change align max to be a power of two. 


>>> Petri Savolainen(psavol) wrote:
>>> Since actual amount of available memory typically depends on system load. 
>>> SHM implementation may not have a limit (max_size==0), or limit may be due 
>>> to address space (e.g. 40bit == 1TB). System might not have always the max 
>>> amount (e.g. 1TB) available. I limit validation test to assume that at 
>>> least 100MB should be always available.


 Bill Fischofer(Bill-Fischofer-Linaro) wrote:
 @muvarov `odp_shm_capability()` already tells the application the largest 
 contiguous size it can reserve (`max_size`) and the maximum number of 
 reserves it can do (`max_blocks`). This is just hinting to the 
 implementation the total size of all reserves the application will do.


> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> An additional `printf()` giving a bit more detail (i.e., `i` value) would 
> be useful here.


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> Shouldn't `MEGA` be a power of 2 for alignment purposes? I.e., 1024 x 
>> 1024 rather than 1000 x 1000? And if the implementation supports an even 
>> higher `max_align` why not test that as well?


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> Why do you want to limit the size in a test that named 
>>> `shmem_test_max_reserve()`? If `capa.max_size == 0` then you have to 
>>> pick a specific target, but if it's non-zero why wouldn't you want to 
>>> try to reserve that much to see if the limit is true?


 muvarov wrote
 0 means not specified. And what about contiguity of memory chunks? Requesting one big contiguous shared memory chunk is not a general solution.


https://github.com/Linaro/odp/pull/446#discussion_r165579527
updated_at 2018-02-02 09:00:36


Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-02 Thread Github ODP bot
Petri Savolainen(psavol) replied on github web page:

test/validation/api/shmem/shmem.c
@@ -212,6 +214,55 @@ void shmem_test_basic(void)
CU_ASSERT(0 == odp_shm_free(shm));
 }
 
+/*
+ * maximum size reservation
+ */
+static void shmem_test_max_reserve(void)
+{
+   odp_shm_capability_t capa;
+   odp_shm_t shm;
+   uint64_t size, align;
+   uint8_t *data;
+   uint64_t i;
+
+   memset(&capa, 0, sizeof(odp_shm_capability_t));
+   CU_ASSERT_FATAL(odp_shm_capability(&capa) == 0);
+
+   CU_ASSERT(capa.max_blocks > 0);
+
+   size  = capa.max_size;
+   align = capa.max_align;
+
+   /* Assuming that system has at least MAX_MEMORY_USED bytes available */
+   if (capa.max_size == 0 || capa.max_size > MAX_MEMORY_USED)
+   size = MAX_MEMORY_USED;
+
+   if (capa.max_align == 0 || capa.max_align > MEGA)
+   align = MEGA;
+
+   printf("\nsize:  %" PRIu64 "\n", size);
+   printf("align: %" PRIu64 "\n", align);
+
+   shm = odp_shm_reserve("test_max_reserve", size, align, 0);
+   CU_ASSERT(shm != ODP_SHM_INVALID);
+
+   data = odp_shm_addr(shm);
+   CU_ASSERT(data != NULL);
+
+   if (data) {
+   memset(data, 0xde, size);
+   for (i = 0; i < size; i++) {
+   if (data[i] != 0xde) {
+   CU_FAIL("Data error");


Comment:
OK
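Bill's request — report the failing `i` value instead of a bare data-error failure — can be factored as a pure helper, sketched here outside of CUnit (`first_mismatch` is a hypothetical name, not part of the test suite):

```c
#include <stdint.h>

/* Return the index of the first byte that does not match the fill
 * pattern, or -1 if the whole buffer verifies cleanly. The caller
 * can then print the offending index before failing the test. */
static int64_t first_mismatch(const uint8_t *data, uint64_t size,
			      uint8_t pattern)
{
	for (uint64_t i = 0; i < size; i++)
		if (data[i] != pattern)
			return (int64_t)i;
	return -1;
}

/* Demo: byte 2 breaks the 0xde pattern. */
static int64_t check_demo(void)
{
	uint8_t buf[4] = {0xde, 0xde, 0xaa, 0xde};

	return first_mismatch(buf, 4, 0xde);
}
```

In the validation test this would replace the inline loop, e.g. printing the returned index in the failure message before calling CU_FAIL.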

> Petri Savolainen(psavol) wrote:
> Very large align could result very large allocation and thus again system run 
> out of memory (e.g. 1TB align => >1TB alloc).
> 
> OK. I'll change align max to be a power of two. 


>> Petri Savolainen(psavol) wrote:
>> Since actual amount of available memory typically depends on system load. 
>> SHM implementation may not have a limit (max_size==0), or limit may be due 
>> to address space (e.g. 40bit == 1TB). System might not have always the max 
>> amount (e.g. 1TB) available. I limit validation test to assume that at least 
>> 100MB should be always available.


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> @muvarov `odp_shm_capability()` already tells the application the largest 
>>> contiguous size it can reserve (`max_size`) and the maximum number of 
>>> reserves it can do (`max_blocks`). This is just hinting to the 
>>> implementation the total size of all reserves the application will do.


 Bill Fischofer(Bill-Fischofer-Linaro) wrote:
 An additional `printf()` giving a bit more detail (i.e., `i` value) would 
 be useful here.


> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> Shouldn't `MEGA` be a power of 2 for alignment purposes? I.e., 1024 x 
> 1024 rather than 1000 x 1000? And if the implementation supports an even 
> higher `max_align` why not test that as well?


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> Why do you want to limit the size in a test that named 
>> `shmem_test_max_reserve()`? If `capa.max_size == 0` then you have to 
>> pick a specific target, but if it's non-zero why wouldn't you want to 
>> try to reserve that much to see if the limit is true?


>>> muvarov wrote
>>> 0 means not specified. And what about contiguity of memory chunks? 
>>> Requesting one big contiguous shared memory chunk is not a general 
>>> solution.


https://github.com/Linaro/odp/pull/446#discussion_r165579028
updated_at 2018-02-02 09:00:36


Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-02 Thread Github ODP bot
Petri Savolainen(psavol) replied on github web page:

test/validation/api/shmem/shmem.c
@@ -212,6 +214,55 @@ void shmem_test_basic(void)
CU_ASSERT(0 == odp_shm_free(shm));
 }
 
+/*
+ * maximum size reservation
+ */
+static void shmem_test_max_reserve(void)
+{
+   odp_shm_capability_t capa;
+   odp_shm_t shm;
+   uint64_t size, align;
+   uint8_t *data;
+   uint64_t i;
+
+   memset(&capa, 0, sizeof(odp_shm_capability_t));
+   CU_ASSERT_FATAL(odp_shm_capability(&capa) == 0);
+
+   CU_ASSERT(capa.max_blocks > 0);
+
+   size  = capa.max_size;
+   align = capa.max_align;
+
+   /* Assuming that system has at least MAX_MEMORY_USED bytes available */
+   if (capa.max_size == 0 || capa.max_size > MAX_MEMORY_USED)
+   size = MAX_MEMORY_USED;
+
+   if (capa.max_align == 0 || capa.max_align > MEGA)
+   align = MEGA;


Comment:
A very large align could result in a very large allocation, and thus again the 
system could run out of memory (e.g. 1TB align => >1TB alloc).

OK. I'll change the max align to be a power of two. 
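Making the max align a power of two implies rounding (or validating) requested alignments. The classic bit-smearing round-up is one way to do that; this is a generic sketch, not ODP code, requiring `v > 0` and a result that fits in 64 bits.

```c
#include <stdint.h>

/* Round v up to the next power of two, e.g. 1000000 -> 1048576 (1 MiB).
 * Works by smearing the highest set bit of v-1 downwards, then adding 1.
 * Precondition: v > 0 and v <= 2^63. */
static uint64_t round_up_pow2(uint64_t v)
{
	v--;
	v |= v >> 1;
	v |= v >> 2;
	v |= v >> 4;
	v |= v >> 8;
	v |= v >> 16;
	v |= v >> 32;
	return v + 1;
}
```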

> Petri Savolainen(psavol) wrote:
> Since actual amount of available memory typically depends on system load. SHM 
> implementation may not have a limit (max_size==0), or limit may be due to 
> address space (e.g. 40bit == 1TB). System might not have always the max 
> amount (e.g. 1TB) available. I limit validation test to assume that at least 
> 100MB should be always available.


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> @muvarov `odp_shm_capability()` already tells the application the largest 
>> contiguous size it can reserve (`max_size`) and the maximum number of 
>> reserves it can do (`max_blocks`). This is just hinting to the 
>> implementation the total size of all reserves the application will do.


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> An additional `printf()` giving a bit more detail (i.e., `i` value) would 
>>> be useful here.


 Bill Fischofer(Bill-Fischofer-Linaro) wrote:
 Shouldn't `MEGA` be a power of 2 for alignment purposes? I.e., 1024 x 1024 
 rather than 1000 x 1000? And if the implementation supports an even higher 
 `max_align` why not test that as well?


> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> Why do you want to limit the size in a test that named 
> `shmem_test_max_reserve()`? If `capa.max_size == 0` then you have to pick 
> a specific target, but if it's non-zero why wouldn't you want to try to 
> reserve that much to see if the limit is true?


>> muvarov wrote
>> 0 means not specified. And what about contiguity of memory chunks? 
>> Requesting one big contiguous shared memory chunk is not a general 
>> solution.


https://github.com/Linaro/odp/pull/446#discussion_r165578969
updated_at 2018-02-02 09:00:36


Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-02 Thread Github ODP bot
Petri Savolainen(psavol) replied on github web page:

test/validation/api/shmem/shmem.c
@@ -212,6 +214,55 @@ void shmem_test_basic(void)
CU_ASSERT(0 == odp_shm_free(shm));
 }
 
+/*
+ * maximum size reservation
+ */
+static void shmem_test_max_reserve(void)
+{
+   odp_shm_capability_t capa;
+   odp_shm_t shm;
+   uint64_t size, align;
+   uint8_t *data;
+   uint64_t i;
+
+   memset(&capa, 0, sizeof(odp_shm_capability_t));
+   CU_ASSERT_FATAL(odp_shm_capability(&capa) == 0);
+
+   CU_ASSERT(capa.max_blocks > 0);
+
+   size  = capa.max_size;
+   align = capa.max_align;
+
+   /* Assuming that system has at least MAX_MEMORY_USED bytes available */
+   if (capa.max_size == 0 || capa.max_size > MAX_MEMORY_USED)
+   size = MAX_MEMORY_USED;


Comment:
The actual amount of available memory typically depends on system load. The 
SHM implementation may not have a limit (max_size==0), or the limit may come 
from the address space (e.g. 40 bits == 1TB). The system might not always have 
the max amount (e.g. 1TB) available, so I limit the validation test to assume 
that at least 100MB should always be available.

> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> @muvarov `odp_shm_capability()` already tells the application the largest 
> contiguous size it can reserve (`max_size`) and the maximum number of 
> reserves it can do (`max_blocks`). This is just hinting to the implementation 
> the total size of all reserves the application will do.


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> An additional `printf()` giving a bit more detail (i.e., `i` value) would be 
>> useful here.


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> Shouldn't `MEGA` be a power of 2 for alignment purposes? I.e., 1024 x 1024 
>>> rather than 1000 x 1000? And if the implementation supports an even higher 
>>> `max_align` why not test that as well?


>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>> Why do you want to limit the size in a test named 
>>>> `shmem_test_max_reserve()`? If `capa.max_size == 0` then you have to pick 
>>>> a specific target, but if it's non-zero why wouldn't you want to try to 
>>>> reserve that much to see if the limit is true?


>>>>> muvarov wrote:
>>>>> 0 means not specified. And what about contiguity of memory chunks? 
>>>>> Requesting one big contiguous shared memory chunk is not a general 
>>>>> solution.


https://github.com/Linaro/odp/pull/446#discussion_r165573419
updated_at 2018-02-02 09:00:36


Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-02 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

include/odp/api/spec/init.h
@@ -150,17 +157,29 @@ typedef struct odp_init_t {
worker and control masks do not overlap.
 */
const odp_cpumask_t *control_cpus;
+
/** Replacement for the default log fn */
odp_log_func_t log_fn;
+
/** Replacement for the default abort fn */
odp_abort_func_t abort_fn;
+
/** Unused features. These are hints to the ODP implementation that
 * the application will not use any APIs associated with these
 * features. Implementations may use this information to provide
 * optimized behavior. Results are undefined if applications assert
 * that a feature will not be used and it is used anyway.
 */
odp_feature_t not_used;
+
+   /** Shared memory parameters */
+   struct {
+   /** Maximum memory usage in bytes. This is the maximum
+*  amount of shared memory that application will reserve
+*  concurrently. */
+   uint64_t max_memory;


Comment:
@muvarov `odp_shm_capability()` already tells the application the largest 
contiguous size it can reserve (`max_size`) and the maximum number of reserves 
it can do (`max_blocks`). This is just hinting to the implementation the total 
size of all reserves the application will do.

> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> An additional `printf()` giving a bit more detail (i.e., `i` value) would be 
> useful here.


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> Shouldn't `MEGA` be a power of 2 for alignment purposes? I.e., 1024 x 1024 
>> rather than 1000 x 1000? And if the implementation supports an even higher 
>> `max_align` why not test that as well?


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> Why do you want to limit the size in a test named 
>>> `shmem_test_max_reserve()`? If `capa.max_size == 0` then you have to pick a 
>>> specific target, but if it's non-zero why wouldn't you want to try to 
>>> reserve that much to see if the limit is true?


>>>> muvarov wrote:
>>>> 0 means not specified. And what about contiguity of memory chunks? 
>>>> Requesting one big contiguous shared memory chunk is not a general 
>>>> solution.


https://github.com/Linaro/odp/pull/446#discussion_r165527701
updated_at 2018-02-02 09:00:36


Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-02 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

test/validation/api/shmem/shmem.c
@@ -212,6 +214,55 @@ void shmem_test_basic(void)
CU_ASSERT(0 == odp_shm_free(shm));
 }
 
+/*
+ * maximum size reservation
+ */
+static void shmem_test_max_reserve(void)
+{
+   odp_shm_capability_t capa;
+   odp_shm_t shm;
+   uint64_t size, align;
+   uint8_t *data;
+   uint64_t i;
+
+   memset(&capa, 0, sizeof(odp_shm_capability_t));
+   CU_ASSERT_FATAL(odp_shm_capability(&capa) == 0);
+
+   CU_ASSERT(capa.max_blocks > 0);
+
+   size  = capa.max_size;
+   align = capa.max_align;
+
+   /* Assuming that system has at least MAX_MEMORY_USED bytes available */
+   if (capa.max_size == 0 || capa.max_size > MAX_MEMORY_USED)
+   size = MAX_MEMORY_USED;
+
+   if (capa.max_align == 0 || capa.max_align > MEGA)
+   align = MEGA;
+
+   printf("\nsize:  %" PRIu64 "\n", size);
+   printf("align: %" PRIu64 "\n", align);
+
+   shm = odp_shm_reserve("test_max_reserve", size, align, 0);
+   CU_ASSERT(shm != ODP_SHM_INVALID);
+
+   data = odp_shm_addr(shm);
+   CU_ASSERT(data != NULL);
+
+   if (data) {
+   memset(data, 0xde, size);
+   for (i = 0; i < size; i++) {
+   if (data[i] != 0xde) {
+   CU_FAIL("Data error");


Comment:
An additional `printf()` giving a bit more detail (i.e., `i` value) would be 
useful here.

> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> Shouldn't `MEGA` be a power of 2 for alignment purposes? I.e., 1024 x 1024 
> rather than 1000 x 1000? And if the implementation supports an even higher 
> `max_align` why not test that as well?


>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>> Why do you want to limit the size in a test named 
>> `shmem_test_max_reserve()`? If `capa.max_size == 0` then you have to pick a 
>> specific target, but if it's non-zero why wouldn't you want to try to 
>> reserve that much to see if the limit is true?


>>> muvarov wrote:
>>> 0 means not specified. And what about contiguity of memory chunks? 
>>> Requesting one big contiguous shared memory chunk is not a general 
>>> solution.


https://github.com/Linaro/odp/pull/446#discussion_r165526484
updated_at 2018-02-02 09:00:36


Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-02 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

test/validation/api/shmem/shmem.c
@@ -212,6 +214,55 @@ void shmem_test_basic(void)
CU_ASSERT(0 == odp_shm_free(shm));
 }
 
+/*
+ * maximum size reservation
+ */
+static void shmem_test_max_reserve(void)
+{
+   odp_shm_capability_t capa;
+   odp_shm_t shm;
+   uint64_t size, align;
+   uint8_t *data;
+   uint64_t i;
+
+   memset(&capa, 0, sizeof(odp_shm_capability_t));
+   CU_ASSERT_FATAL(odp_shm_capability(&capa) == 0);
+
+   CU_ASSERT(capa.max_blocks > 0);
+
+   size  = capa.max_size;
+   align = capa.max_align;
+
+   /* Assuming that system has at least MAX_MEMORY_USED bytes available */
+   if (capa.max_size == 0 || capa.max_size > MAX_MEMORY_USED)
+   size = MAX_MEMORY_USED;
+
+   if (capa.max_align == 0 || capa.max_align > MEGA)
+   align = MEGA;


Comment:
Shouldn't `MEGA` be a power of 2 for alignment purposes? I.e., 1024 x 1024 
rather than 1000 x 1000? And if the implementation supports an even higher 
`max_align` why not test that as well?

> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
> Why do you want to limit the size in a test named 
> `shmem_test_max_reserve()`? If `capa.max_size == 0` then you have to pick a 
> specific target, but if it's non-zero why wouldn't you want to try to reserve 
> that much to see if the limit is true?


>> muvarov wrote:
>> 0 means not specified. And what about contiguity of memory chunks? 
>> Requesting one big contiguous shared memory chunk is not a general 
>> solution.


https://github.com/Linaro/odp/pull/446#discussion_r165525942
updated_at 2018-02-02 09:00:36


Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-02 Thread Github ODP bot
Bill Fischofer(Bill-Fischofer-Linaro) replied on github web page:

test/validation/api/shmem/shmem.c
@@ -212,6 +214,55 @@ void shmem_test_basic(void)
CU_ASSERT(0 == odp_shm_free(shm));
 }
 
+/*
+ * maximum size reservation
+ */
+static void shmem_test_max_reserve(void)
+{
+   odp_shm_capability_t capa;
+   odp_shm_t shm;
+   uint64_t size, align;
+   uint8_t *data;
+   uint64_t i;
+
+   memset(&capa, 0, sizeof(odp_shm_capability_t));
+   CU_ASSERT_FATAL(odp_shm_capability(&capa) == 0);
+
+   CU_ASSERT(capa.max_blocks > 0);
+
+   size  = capa.max_size;
+   align = capa.max_align;
+
+   /* Assuming that system has at least MAX_MEMORY_USED bytes available */
+   if (capa.max_size == 0 || capa.max_size > MAX_MEMORY_USED)
+   size = MAX_MEMORY_USED;


Comment:
Why do you want to limit the size in a test named 
`shmem_test_max_reserve()`? If `capa.max_size == 0` then you have to pick a 
specific target, but if it's non-zero why wouldn't you want to try to reserve 
that much to see if the limit is true?

> muvarov wrote:
> 0 means not specified. And what about contiguity of memory chunks? 
> Requesting one big contiguous shared memory chunk is not a general solution.


https://github.com/Linaro/odp/pull/446#discussion_r165525673
updated_at 2018-02-02 09:00:36


Re: [lng-odp] [PATCH API-NEXT v2] api: init: global shm memory size

2018-02-02 Thread Github ODP bot
muvarov replied on github web page:

include/odp/api/spec/init.h
@@ -150,17 +157,29 @@ typedef struct odp_init_t {
worker and control masks do not overlap.
 */
const odp_cpumask_t *control_cpus;
+
/** Replacement for the default log fn */
odp_log_func_t log_fn;
+
/** Replacement for the default abort fn */
odp_abort_func_t abort_fn;
+
/** Unused features. These are hints to the ODP implementation that
 * the application will not use any APIs associated with these
 * features. Implementations may use this information to provide
 * optimized behavior. Results are undefined if applications assert
 * that a feature will not be used and it is used anyway.
 */
odp_feature_t not_used;
+
+   /** Shared memory parameters */
+   struct {
+   /** Maximum memory usage in bytes. This is the maximum
+*  amount of shared memory that application will reserve
+*  concurrently. */
+   uint64_t max_memory;


Comment:
0 means not specified. And what about contiguity of memory chunks? Requesting 
one big contiguous shared memory chunk is not a general solution.

https://github.com/Linaro/odp/pull/446#discussion_r165412946
updated_at 2018-02-02 09:00:36