https://bugs.linaro.org/show_bug.cgi?id=3954

            Bug ID: 3954
           Summary: shm allocator considered wasteful
           Product: OpenDataPlane - linux-generic reference
           Version: v1.15.0.0
          Hardware: Other
                OS: Linux
            Status: UNCONFIRMED
          Severity: enhancement
          Priority: ---
         Component: Shared Memory
          Assignee: christophe.mil...@linaro.org
          Reporter: josep.puigdem...@linaro.org
                CC: lng-odp@lists.linaro.org
  Target Milestone: ---

Shared memory objects in ODP can be reserved from "normal" memory or from huge
pages. If the requested size fits within a kernel page frame, normal pages are
used; otherwise huge pages are preferred. See _odp_ishm_reserve in odp_ishm.c:
https://github.com/Linaro/odp/blob/6d91fe717d2e62e048fb8837a67cc1118a3113d1/platform/linux-generic/odp_ishm.c#L922

When huge pages are used, the amount of memory actually reserved is rounded up
to a multiple of the huge page size. See:
https://github.com/Linaro/odp/blob/6d91fe717d2e62e048fb8837a67cc1118a3113d1/platform/linux-generic/odp_ishm.c#L929

ODP does not seem to keep track of the extra memory allocated. This means that
on a system with 2MB huge pages, a request for a 3MB shared memory object will
consume 2 huge pages, i.e. 4MB of RAM, of which 25% is never used. On a system
configured with 1GB huge pages, the same request reserves a full 1GB page,
wasting more than 99% of the memory.

The following table is an extract of the output of odp_shm_print_all(),
showing memory usage on a system with 1GB huge pages:

ishm blocks allocated at: Memory allocation status:
    name                      flag len        user_len seq ref start        fd 
file
 0  odp_thread_globals        ..N  0x1000     3472     1   1  7f5626258000 3  
 1  _odp_pool_table           ..H  0x40000000 17850432 1   1  7f5580000000 4  
 2  _odp_queue_gbl            ..H  0x40000000 262272   1   1  7f5540000000 5  
 3  _odp_queue_rings          ..H  0x40000000 33554432 1   1  7f5500000000 6  
 4  odp_scheduler             ..H  0x40000000 8730624  1   1  7f54c0000000 7  
 5  odp_pktio_entries         ..H  0x40000000 360512   1   1  7f5480000000 8  
 6  crypto_pool               ..H  0x40000000 19800    1   1  7f5440000000 9  
 7  shm_odp_cos_tbl           ..H  0x40000000 20480    1   1  7f5400000000 10 
 8  shm_odp_pmr_tbl           ..H  0x40000000 114688   1   1  7f53c0000000 11 
 9  shm_odp_cls_queue_grp_tbl ..H  0x40000000 16384    1   1  7f5380000000 12 
10  pool_ring_0               ..H  0x40000000 4194432  1   1  7f5340000000 13 
11  ipsec_status_pool         ..H  0x40000000 786432   1   1  7f5300000000 14 
12  ipsec_sa_table            ..N  0x1000     2112     1   1  7f5626257000 15 
13  test_shmem                ..H  0x40000000 4120     7   1  7f52c0000000 16 

Apart from "len" being printed in hex and "user_len" in decimal, which
confuses the reader a bit, it will not escape the trained eye that ODP
reserved 1GB of memory for "crypto_pool" while only about 19KB is actually
used. In fact, all the shared memory areas in this example would fit in a
single 1GB huge page (not considering proper alignment), yet apparently 12GB
have been reserved (roughly 90% wasted).
