Hi Maxim,

Yes, it is working now.

Sorry, my suggestion was wrong.

On Thu, May 12, 2016 at 6:32 PM, Maxim Uvarov <[email protected]> wrote:

> Hi nousi,
>
> can you please confirm that this patch solves the issue:
>
> http://patches.opendataplane.org/patch/5947/
>
> Thank you,
> Maxim.
>
> On 05/11/16 15:24, nousi wrote:
>
>> Hi Maxim,
>>
>> please correct the patch: check sscanf with "!= 1" instead of "== 1".
>>
>> if (pos) {
>>         *(pos - 1) = '\0';
>> +       if (sscanf(pos, "@ %ld kB", &sz) != 1) {
>>                 printf("%s %ld\n", __func__, sz);
>>                 fclose(file);
>>                 return sz * 1024;
>>         }
>> }
>>
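For reference (not part of either patch): sscanf() returns the number of conversions performed, so with a one-field format such as "@ %ld kB" a successful parse returns 1 and a literal mismatch before any conversion returns 0. A minimal standalone sketch of that behaviour, with a hypothetical helper name:

```c
#include <stdio.h>

/* Standalone illustration of the sscanf() return value: with one
 * conversion specifier, 1 means the field was parsed, and 0 means
 * the input failed to match the format before any conversion. */
static int parse_kb(const char *line, long *kb)
{
    return sscanf(line, "@ %ld kB", kb);
}
```

For example, parse_kb("@ 2048 kB", &kb) returns 1 and sets kb to 2048, while parse_kb("no size here", &kb) returns 0.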
>>
>> On Wed, May 11, 2016 at 5:40 PM, nousi <[email protected]> wrote:
>>
>>     please have a look at the log messages of classification example.
>>
>>     I am seeing huge_sz as zero; is that correct?
>>     In "/proc/meminfo", Hugepagesize is 2048 kB.
>>
>>     root@ubuntu-15-10:/home/linaro/linaro/odp/example/classifier#
>>     ./odp_classifier -i eno1 -m 0 -p
>>     "ODP_PMR_SIP_ADDR:192.168.10.11:FFFFFFFF:queue1" -p
>>     "ODP_PMR_SIP_ADDR:10.130.69.0:000000FF:queue2" -p
>>     "ODP_PMR_SIP_ADDR:10.130.68.0:FFFFFE00:queue3"
>>     odp_shm_reserve : page_sz 4096 alloc_size 34384 huge_sze 0
>>
>>     On Wed, May 11, 2016 at 1:59 PM, Maxim Uvarov <[email protected]> wrote:
>>
>>
>>
>>         On 11 May 2016 at 09:04, nousi <[email protected]> wrote:
>>
>>             Hi Maxim,
>>
>>             To return 2 MB we need to change the logic in the
>>             huge_page_size() function (odp_system_info.c) as below.
>>             I am not sure about the intent of the old logic.
>>
>>             I do not see the huge page allocation error if I change
>>             the logic as below.
>>
>>             diff --git a/platform/linux-generic/odp_system_info.c b/platform/linux-generic/odp_system_info.c
>>             index 0f1f3c7..2173763 100644
>>             --- a/platform/linux-generic/odp_system_info.c
>>             +++ b/platform/linux-generic/odp_system_info.c
>>             @@ -95,7 +95,7 @@ static int huge_page_size(void)
>>                             if (sscanf(dirent->d_name, "hugepages-%i", &temp) != 1)
>>                                     continue;
>>
>>             -               if (temp > size)
>>             +               if (temp < size)
>>                                     size = temp;
>>                     }
>>
>>
>>
>>
>>         No, that is not correct. You request a huge page, and if the
>>         huge page is smaller than the requested size you end up
>>         limiting the requested size. The problem is solved by using
>>         the default system huge page size instead of the first one
>>         found in sysfs. Try this patch:
>>         http://patches.opendataplane.org/patch/5934/
>>
>>         Maxim.
>>
>>             On Tue, May 10, 2016 at 6:38 PM, Maxim Uvarov <[email protected]> wrote:
>>
>>                 On 05/10/16 15:53, nousi wrote:
>>
>>                     value is zero in all the huge page parameters.
>>
>>
>>                 That is not the problem. The problem is that we have this call:
>>
>>                 /**
>>                  * Huge page size in bytes
>>                  *
>>                  * @return Huge page size in bytes
>>                  */
>>                 uint64_t odp_sys_huge_page_size(void);
>>
>>                 which worked well only when there was a single huge page size.
>>
>>
>>                 platform/linux-generic/odp_system_info.c
>>                 static int huge_page_size(void)
>>                 {
>>                         if (sscanf(dirent->d_name, "hugepages-%i", &temp) != 1)
>>                                 continue;
>>
>>
>>                 So because the 1 GB page directory is now listed first
>>                 in sysfs, odp_sys_huge_page_size() returns 1 GB, not 2 MB.
>>
>>                 In general we need to think about how to fix this
>>                 cleanly, probably by modifying the API of
>>                 odp_sys_huge_page_size() to return all available huge
>>                 page sizes.
>>
>>                 So it looks like a bug; thank you for drawing attention
>>                 to it.
>>
>>                 Maxim.
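To make the sysfs scan concrete: each entry under /sys/kernel/mm/hugepages is named "hugepages-<size>kB", and the code extracts the size with sscanf. A standalone sketch (a hypothetical helper, not the actual ODP code) of that name parsing:

```c
#include <stdio.h>

/* Hypothetical helper (not the actual ODP code): extract the page
 * size in kB from a sysfs directory name like "hugepages-2048kB".
 * Returns -1 for names that are not hugepage entries. Which of the
 * parsed sizes ends up being reported is exactly the policy question
 * under discussion: taking whichever entry the directory scan yields
 * first can return 1 GB instead of the 2 MB default. */
static int hugepage_kb_from_name(const char *name)
{
    int kb;

    if (sscanf(name, "hugepages-%i", &kb) != 1)
        return -1;
    return kb;
}
```

On the system shown below, "hugepages-1048576kB" parses to 1048576 (1 GB) and "hugepages-2048kB" to 2048 (2 MB).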
>>
>>
>>
>>                     root@ubuntu-15-10:/sys/kernel/mm/hugepages/hugepages-1048576kB# ls -l
>>                     total 0
>>                     -r--r--r-- 1 root root 4096 May 10 18:15 free_hugepages
>>                     -rw-r--r-- 1 root root 4096 May 10 18:15 nr_hugepages
>>                     -rw-r--r-- 1 root root 4096 May 10 18:15 nr_hugepages_mempolicy
>>                     -rw-r--r-- 1 root root 4096 May 10 18:15 nr_overcommit_hugepages
>>                     -r--r--r-- 1 root root 4096 May 10 18:15 resv_hugepages
>>                     -r--r--r-- 1 root root 4096 May 10 18:15 surplus_hugepages
>>                     root@ubuntu-15-10:/sys/kernel/mm/hugepages/hugepages-1048576kB# cat free_hugepages
>>                     0
>>                     root@ubuntu-15-10:/sys/kernel/mm/hugepages/hugepages-1048576kB# cat nr_hugepages
>>                     0
>>                     root@ubuntu-15-10:/sys/kernel/mm/hugepages/hugepages-1048576kB# cat nr_hugepages_mempolicy
>>                     0
>>                     root@ubuntu-15-10:/sys/kernel/mm/hugepages/hugepages-1048576kB# cat nr_overcommit_hugepages
>>                     0
>>                     root@ubuntu-15-10:/sys/kernel/mm/hugepages/hugepages-1048576kB# cat resv_hugepages
>>                     0
>>                     root@ubuntu-15-10:/sys/kernel/mm/hugepages/hugepages-1048576kB# cat surplus_hugepages
>>                     0
>>                     root@ubuntu-15-10:/sys/kernel/mm/hugepages/hugepages-1048576kB#
>>
>>                     On Tue, May 10, 2016 at 6:19 PM, nousi
>>                     <[email protected] <mailto:[email protected]>
>>                     <mailto:[email protected]
>>                     <mailto:[email protected]>>> wrote:
>>
>>                         yes, there are two directories under
>>                     "/sys/kernel/mm/hugepages"
>>
>>                     root@ubuntu-15-10:/sys/kernel/mm/hugepages# ls -l
>>                         total 0
>>                         drwxr-xr-x 2 root root 0 May 10 18:15 hugepages-1048576kB
>>                         drwxr-xr-x 2 root root 0 May 10 18:15 hugepages-2048kB
>>
>>
>>                         On Tue, May 10, 2016 at 6:13 PM, Maxim Uvarov <[email protected]> wrote:
>>
>>                             In my case there is a 1 GB huge page in
>>                     /sys/kernel/mm/hugepages
>>
>>                             and odp_shm_reserve() just rounds all
>>                     allocations to 1 GB:
>>
>>                             #ifdef MAP_HUGETLB
>>                                     huge_sz = odp_sys_huge_page_size(); /* <-- 1 GB here */
>>                                     need_huge_page = (huge_sz && alloc_size > page_sz);
>>                                     /* munmap for huge pages requires sizes rounded up by page */
>>                                     alloc_hp_size = (size + align + (huge_sz - 1)) & (-huge_sz);
>>                             #endif
>>
>>                             Do you see the same thing?
>>
>>                             Maxim.
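The rounding line in that snippet assumes huge_sz is a power of two: adding huge_sz - 1 and masking with -huge_sz (equivalently ~(huge_sz - 1)) rounds the allocation up to the next multiple of huge_sz, so with a 1 GB huge page even a ~34 kB reservation becomes a full gigabyte. A small sketch of the arithmetic:

```c
#include <stdint.h>

/* Round x up to the next multiple of a power-of-two size sz.
 * For unsigned sz, ~(sz - 1) equals -sz, so this is the same
 * expression as (size + align + (huge_sz - 1)) & (-huge_sz)
 * in the snippet quoted above. */
static uint64_t round_up_pow2(uint64_t x, uint64_t sz)
{
    return (x + sz - 1) & ~(sz - 1);
}
```

For example, round_up_pow2(34384, 1ULL << 30) yields 1073741824 (1 GB), while round_up_pow2(34384, 2ULL << 20) yields 2097152 (2 MB).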
>>
>>                             On 05/10/16 15:07, nousi wrote:
>>
>>                                 Hi Maxim,
>>
>>                                 Thanks for your support.
>>
>>                                 mmap returns the error below:
>>                                 "mmap: Cannot allocate memory"
>>                                 "mount -t hugetlbfs none /mnt/hugetlbfs" also does not help.
>>
>>                                 Huge page allocation succeeds in the
>>                                 two calls below; after that it fails.
>>                                 1) odp_thread_globals
>>                                 2) odp_buffer_pools
>>
>>                                 Please have a look at the console logs below.
>>
>>                     odp_shared_memory.c:299:odp_shm_reserve(): odp_thread_globals: huge page allocation success !
>>                     odp_shared_memory.c:299:odp_shm_reserve(): odp_buffer_pools: huge page allocation success !
>>                     odp_pool.c:104:odp_pool_init_global(): Pool init global
>>                     odp_pool.c:105:odp_pool_init_global(): pool_entry_s size 8512
>>                     odp_pool.c:106:odp_pool_init_global(): pool_entry_t size 8512
>>                     odp_pool.c:107:odp_pool_init_global(): odp_buffer_hdr_t size 216
>>                     odp_pool.c:108:odp_pool_init_global():
>>                     mmap: Cannot allocate memory
>>                     odp_queue.c:130:odp_queue_init_global():Queue init ...
>>                     odp_shared_memory.c:297:odp_shm_reserve(): odp_queues: No huge pages, fall back to normal pages, check: /proc/sys/vm/nr_hugepages.
>>
>>
>>                                 Recently I moved from Ubuntu 14.04 to
>>                     15.10; on Ubuntu 14.04 I ran the classifier example
>>                     without any huge page allocation errors.
>>
>>                                 On Tue, May 10, 2016 at 3:50 PM, Maxim Uvarov <[email protected]> wrote:
>>
>>                                     Does:
>>                                     mount -t hugetlbfs none /mnt/hugetlbfs
>>
>>                                     help?
>>
>>                                     Maxim.
>>
>>
>>                                     On 10 May 2016 at 13:16, Maxim Uvarov <[email protected]> wrote:
>>
>>                     It looks like you have enough free huge pages.
>>                     Which error does mmap() return when trying with huge pages?
>>
>>                                         On 10 May 2016 at 11:57, nousi <[email protected]> wrote:
>>
>>                     linaro@ubuntu-15-10:~$ cat
>>                     /proc/sys/vm/nr_hugepages
>>                         1024
>>                     linaro@ubuntu-15-10:~$
>>
>>                     linaro@ubuntu-15-10:~$ cat /proc/meminfo
>>                         MemTotal: 8061836 kB
>>                         MemFree:   470516 kB
>>                         MemAvailable: 1901932 kB
>>                         Buffers:    92600 kB
>>                         Cached: 1939696 kB
>>                         SwapCached:    7516 kB
>>                         Active: 3238960 kB
>>                         Inactive: 1804492 kB
>>                         Active(anon): 2712440 kB
>>                         Inactive(anon): 1069492 kB
>>                         Active(file):  526520 kB
>>                         Inactive(file):  735000 kB
>>                         Unevictable:       76 kB
>>                         Mlocked:       76 kB
>>                         SwapTotal: 16547836 kB
>>                         SwapFree:  16370160 kB
>>                         Dirty:    19816 kB
>>                         Writeback:        0 kB
>>                         AnonPages:  3004784 kB
>>                         Mapped:  679960 kB
>>                         Shmem:   770776 kB
>>                         Slab:  264692 kB
>>                         SReclaimable:  212172 kB
>>                         SUnreclaim:   52520 kB
>>                         KernelStack:    11952 kB
>>                         PageTables:   65780 kB
>>                         NFS_Unstable:       0 kB
>>                         Bounce:       0 kB
>>                         WritebackTmp:       0 kB
>>                         CommitLimit: 19530176 kB
>>                         Committed_AS:  11165432 kB
>>                         VmallocTotal:  34359738367 kB
>>                         VmallocUsed:   410416 kB
>>                         VmallocChunk:  34358947836 kB
>>                     HardwareCorrupted:    0 kB
>>                         AnonHugePages:   583680 kB
>>                         CmaTotal:       0 kB
>>                         CmaFree:        0 kB
>>                     HugePages_Total: 1024
>>                         HugePages_Free:    1022
>>                         HugePages_Rsvd:    1022
>>                         HugePages_Surp:       0
>>                         Hugepagesize:    2048 kB
>>                         DirectMap4k:   234400 kB
>>                         DirectMap2M:  6991872 kB
>>                         DirectMap1G:  2097152 kB
>>                     linaro@ubuntu-15-10:~$
>>
>>
>>                         On Tue, May 10, 2016 at 12:38 PM, Maxim Uvarov <[email protected]> wrote:
>>
>>                           Please put output for:
>>                           cat /proc/meminfo
>>                           cat /proc/sys/vm/nr_hugepages
>>
>>                           Thank you,
>>                           Maxim.
>>
>>
>>
>>                           On 10 May 2016 at 08:36, nousi <[email protected]> wrote:
>>
>>                               mmap is failing in the "odp_shm_reserve"
>>                               function (odp_queue_init_global() --->
>>                               odp_shm_reserve() ---> odp_shm_reserve())
>>
>>
>>                               debug logs:
>>
>>                     root@ubuntu-15-10:/home/linaro/linaro/odp/example/classifier#
>>                     ./odp_classifier -i eno1 -m 0 -p
>>                     "ODP_PMR_SIP_ADDR:192.168.10.11:FFFFFFFF:queue1" -p
>>                     "ODP_PMR_SIP_ADDR:10.130.69.0:000000FF:queue2" -p
>>                     "ODP_PMR_SIP_ADDR:10.130.68.0:FFFFFE00:queue3"
>>
>>                     odp_pool.c:104:odp_pool_init_global(): Pool init global
>>                     odp_pool.c:105:odp_pool_init_global(): pool_entry_s size 8512
>>                     odp_pool.c:106:odp_pool_init_global(): pool_entry_t size 8512
>>                     odp_pool.c:107:odp_pool_init_global(): odp_buffer_hdr_t size 216
>>                     odp_pool.c:108:odp_pool_init_global():
>>                     odp_queue.c:130:odp_queue_init_global():Queue init ...
>>                     odp_shared_memory.c:296:odp_shm_reserve():odp_queues: :: No huge pages, fall back to normal pages, check: /proc/sys/vm/nr_hugepages.
>>                     odp_queue.c:154:odp_queue_init_global():done
>>                     odp_queue.c:155:odp_queue_init_global():Queue init global
>>                     odp_queue.c:157:odp_queue_init_global(): struct queue_entry_s size 320
>>                     odp_queue.c:159:odp_queue_init_global(): queue_entry_t size  320
>>                     odp_queue.c:160:odp_queue_init_global():
>>                     odp_schedule.c:145:odp_schedule_init_global():Schedule init ...
>>                     odp_shared_memory.c:296:odp_shm_reserve():odp_scheduler: :: No huge pages, fall back to normal pages, check: /proc/sys/vm/nr_hugepages.
>>                     odp_shared_memory.c:296:odp_shm_reserve():odp_sched_pool: :: No huge pages, fall back to normal pages, check: /proc/sys/vm/nr_hugepages.
>>                     odp_schedule.c:226:odp_schedule_init_global():done
>>                     odp_shared_memory.c:296:odp_shm_reserve():odp_pktio_entries: :: No huge pages, fall back to normal pages, check: /proc/sys/vm/nr_hugepages.
>>                     odp_shared_memory.c:296:odp_shm_reserve():crypto_pool: :: No huge pages, fall back to normal pages, check: /proc/sys/vm/nr_hugepages.
>>                     odp_shared_memory.c:296:odp_shm_reserve():shm_odp_cos_tbl: :: No huge pages, fall back to normal pages, check: /proc/sys/vm/nr_hugepages.
>>                     odp_shared_memory.c:296:odp_shm_reserve():shm_odp_pmr_tbl: :: No huge pages, fall back to normal pages, check: /proc/sys/vm/nr_hugepages.
>>                     main :: odp_init_global done!
>>                     odp_classifier.c:500:main():main :: odp_init_local done!
>>                     odp_classifier.c:505:main():main :: odp_shm_reserve done!
>>
>>                                 ODP system info
>>                     ---------------
>>                                 ODP API version: 1.10.0
>>                                 CPU model: Intel(R) Core(TM) i7-5600U CPU
>>                                 CPU freq (hz): 2600000000
>>                                 Cache line size: 64
>>                                 CPU count:       4
>>
>>                                 Running ODP appl: "odp_classifier"
>>                     -----------------
>>                                 Using IF:eno1
>>
>>                                 num worker threads: 2
>>                                 first CPU:          2
>>                                 cpu mask: 0xC
>>                     odp_shared_memory.c:296:odp_shm_reserve():packet_pool: :: No huge pages, fall back to normal pages, check: /proc/sys/vm/nr_hugepages.
>>                     odp_pool.c:759:odp_pool_print():Pool info
>>                     odp_pool.c:760:odp_pool_print():---------
>>                     odp_pool.c:762:odp_pool_print(): pool     1
>>                     odp_pool.c:764:odp_pool_print(): name packet_pool
>>                     odp_pool.c:769:odp_pool_print(): pool type packet
>>                     odp_pool.c:771:odp_pool_print(): pool storage ODP managed shm handle 11
>>                     odp_pool.c:773:odp_pool_print(): pool status active
>>                     odp_pool.c:777:odp_pool_print(): pool opts segmented, non-zeroized, created
>>                     odp_pool.c:778:odp_pool_print(): pool base 0x7f5091aab000
>>                     odp_pool.c:780:odp_pool_print(): pool size 1310720 (320 pages)
>>                     odp_pool.c:781:odp_pool_print(): pool mdata base 0x7f5091bb5940
>>                     odp_pool.c:782:odp_pool_print(): udata size     0
>>                     odp_pool.c:783:odp_pool_print(): headroom     66
>>                     odp_pool.c:784:odp_pool_print(): tailroom     0
>>                     odp_pool.c:791:odp_pool_print(): seg length 1856 requested, 1936 used
>>                     odp_pool.c:793:odp_pool_print(): pkt length 1856 requested, 1936 used
>>                     odp_pool.c:795:odp_pool_print(): num bufs 564
>>                     odp_pool.c:797:odp_pool_print(): bufs available 564
>>                     odp_pool.c:798:odp_pool_print(): bufs in use     0
>>                     odp_pool.c:799:odp_pool_print(): buf allocs     0
>>                     odp_pool.c:800:odp_pool_print(): buf frees      0
>>                     odp_pool.c:801:odp_pool_print(): buf empty      0
>>                     odp_pool.c:803:odp_pool_print(): blk size 1936
>>                     odp_pool.c:805:odp_pool_print(): blks available 564
>>                     odp_pool.c:806:odp_pool_print(): blk allocs     0
>>                     odp_pool.c:807:odp_pool_print(): blk frees      0
>>                     odp_pool.c:808:odp_pool_print(): blk empty      0
>>                     odp_pool.c:809:odp_pool_print(): buf high wm value  282
>>                     odp_pool.c:810:odp_pool_print(): buf high wm count  0
>>                     odp_pool.c:811:odp_pool_print(): buf low wm value 141
>>                     odp_pool.c:812:odp_pool_print(): buf low wm count 0
>>                     odp_pool.c:813:odp_pool_print(): blk high wm value  282
>>                     odp_pool.c:814:odp_pool_print(): blk high wm count  0
>>                     odp_pool.c:815:odp_pool_print(): blk low wm value 141
>>                     odp_pool.c:816:odp_pool_print(): blk low wm count 0
>>                                 main :: odp_pool_print done!
>>                     odp_packet_io.c:230:setup_pktio_entry():eno1 uses socket_mmap
>>                     created pktio:01, dev:eno1, queue mode (ATOMIC queues)
>>                     default pktio01
>>                     odp_shared_memory.c:296:odp_shm_reserve(): DefaultPool: :: No huge pages, fall back to normal pages, check: /proc/sys/vm/nr_hugepages.
>>                     odp_shared_memory.c:296:odp_shm_reserve(): queue1Pool0: :: No huge pages, fall back to normal pages, check: /proc/sys/vm/nr_hugepages.
>>                     odp_shared_memory.c:296:odp_shm_reserve(): queue2Pool1: :: No huge pages, fall back to normal pages, check: /proc/sys/vm/nr_hugepages.
>>                     odp_shared_memory.c:296:odp_shm_reserve(): queue3Pool2: :: No huge pages, fall back to normal pages, check: /proc/sys/vm/nr_hugepages.
>>
>>                     ----------------------------------------
>>                     CLASSIFIER EXAMPLE STATISTICS
>>                     ----------------------------------------
>>                     CONFIGURATION
>>
>>                                 COS    VALUE         MASK
>>                     ----------------------------------------
>>                                 queue1 192.168.10.11 FFFFFFFF
>>                                 queue2 10.130.69.0   000000FF
>>                                 queue3 10.130.68.0   FFFFFE00
>>
>>                                 RECEIVED PACKETS
>>                     ----------------------------------------
>>                                 queue1      |queue2      |queue3     |DefaultCos  |Total Packets
>>                                 queue  pool |queue  pool |queue pool |queue pool  |
>>                                 845    845  |0      0    |0     0    |38    38    |883
>>
>>
>>                               On Mon, May 9, 2016 at 9:00 PM, Bill Fischofer <[email protected]> wrote:
>>
>>
>>
>>                                   On Mon, May 9, 2016 at 6:57 AM, nousi <[email protected]> wrote:
>>
>>
>>                                       Hi All,
>>
>>                     Please help me run the ODP classifier example with
>>                     huge pages.
>>                     In Ubuntu 15.10 the default interface name is "eno1"
>>                     and the value in "/proc/sys/vm/nr_hugepages" is 1024.
>>                     The classifier example program is not able to use
>>                     huge pages even though the nr_hugepages value is
>>                     non-zero.
>>                     I am able to run the classifier example, but it is
>>                     not using huge pages.
>>                     Console log is pasted below for your reference.
>>
>>                     root@odp/example/classifier# ./odp_classifier -i eno1 -m 0 -p
>>                     "ODP_PMR_SIP_ADDR:192.168.10.11:FFFFFFFF:queue1" -p
>>                     "ODP_PMR_SIP_ADDR:10.130.69.0:000000FF:queue2" -p
>>                     "ODP_PMR_SIP_ADDR:10.130.68.0:FFFFFE00:queue3"
>>
>>                     odp_pool.c:104:odp_pool_init_global(): Pool init global
>>                     odp_pool.c:105:odp_pool_init_global(): pool_entry_s size  8512
>>                     odp_pool.c:106:odp_pool_init_global(): pool_entry_t size  8512
>>                     odp_pool.c:107:odp_pool_init_global(): odp_buffer_hdr_t size 216
>>                     odp_pool.c:108:odp_pool_init_global():
>>                     odp_queue.c:130:odp_queue_init_global():Queue init ...
>>                     odp_shared_memory.c:296:odp_shm_reserve(): odp_queues: No huge pages, fall back to normal pages, check: /proc/sys/vm/nr_hugepages.
>>
>>
>>                                     This is an informational message
>>                                 saying that the linux-generic
>>                                 implementation was unable to allocate
>>                                 huge pages, so it's falling back to
>>                                 normal pages. I'm not sure why you're
>>                                 seeing that, except that it seems some
>>                                 allocations may have been successful
>>                                 (those in odp_pool.c) while those for
>>                                 queue initialization were not.
>>
>>                                     I'll let others who are more expert
>>                                 in this area chime in with some
>>                                 additional thoughts.
>>
>>
>>
>>
>>                     Thanks & Regards,
>>                     B.Nousilal,
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
>


-- 


Thanks & Regards,
B.Nousilal
_______________________________________________
lng-odp mailing list
[email protected]
https://lists.linaro.org/mailman/listinfo/lng-odp
