Thanks Shreyansh. My comments are prepended with [P01].
2017-06-20 19:10 GMT+08:00 Shreyansh Jain <[email protected]>:

> Hello Paul,
>
> Some comments inline, prepended with [SJ01]:
> (Would it be possible for you to send text mails to this mailing list - it
> helps in keeping replies contextual)
>
> From: Paul Tsvika [mailto:[email protected]]
> Sent: Tuesday, June 20, 2017 2:27 PM
> To: Shreyansh Jain <[email protected]>
> Cc: [email protected]
> Subject: Re: [dpdk-users] Run testpmd application and encountered no free
> page issues
>
> Hi Shreyansh,
>
> Thanks for the reply.
>
> Using the setup script in DPDK, the hugepage memory looks to be allocated
> properly:
>
> cat /proc/meminfo
>
> HugePages_Total:     240
> HugePages_Free:      240
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
>
> I then run the commands below in sequence:
>
> $ sudo modprobe uio
> $ sudo insmod ./build/kmod/igb_uio.ko
> $ sudo ./usertools/dpdk-devbind.py -b igb_uio xxx:xx.0 xxx:xx.1
>   (xxx:xx.0 and xxx:xx.1 are the PCI addresses of the 10G ports)
> $ sudo ./build/app/testpmd -l 1,2,3 -n 2 -- -i
>
> EAL: Detected 16 lcore(s)
> EAL: No free hugepages reported in hugepages-1048576kB
>
> [SJ01] DPDK iterates over all directories in /sys/kernel/mm/hugepages,
> which contains one directory for each supported hugepage size. The log
> above appears because nothing was found for the directory named
> "hugepages-1048576kB". That is probably because you have 0 hugepages of
> size 1G on your system. This should not impact you unless you were
> expecting 1G hugepages.
>
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: PCI device 0000:03:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:15ad net_ixgbe
> EAL: PCI device 0000:03:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:15ad net_ixgbe
> EAL: PCI device 0000:05:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> EAL: PCI device 0000:05:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> Interactive-mode selected
> USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176,
> socket=0
> Configuring Port 0 (socket 0)
> PMD: ixgbe_dev_link_status_print(): Port 0: Link Down
> Port 0: 00:25:90:5C:E9:58
> Configuring Port 1 (socket 0)
> PMD: ixgbe_dev_link_status_print(): Port 1: Link Down
> Port 1: 00:25:90:5C:E9:59
> Checking link statuses...
> Done
> testpmd>
>
> Questions below:
>
> 1. It looks like the application can enter interactive mode. However, I
> have no idea why "EAL: No free hugepages reported in hugepages-1048576kB"
> keeps popping up.
>
> [SJ01] Let's ignore that for a while. You have about 240 pages of 2M =
> ~480 MB. The application is demanding ~339 MB (n=163456, size=2176).
>
> 2. Is there any reason why the ports went down when I ran the commands?
> The links were up before running them.
>
> [SJ01] This I am not sure about. If the links were up _before_, they
> should be up now as well. Just out of curiosity, how did you check that
> the links were up before? These links are not assigned to the Linux
> kernel and are not visible in the ifconfig list.

[P01] I don't know why you mentioned that these links are not assigned to
the Linux kernel and not visible in the ifconfig list. ixgbe is the in-box
driver of the kernel, and eno3 and eno4 (the 10G ports) do appear in the
ifconfig list. However, eno3 and eno4 went away (disappeared from ifconfig)
when running this command. I am still investigating the issue.
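[P01] (side note) One quick way to see where the two ports went is the
devbind status listing; after the bind step they should be listed under the
DPDK-compatible driver section rather than under a kernel driver, which
would also explain why they drop out of ifconfig:

$ sudo ./usertools/dpdk-devbind.py --status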
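[P01] (side note on the hugepage numbers in 1. above) The per-size counts
that EAL scans live under /sys/kernel/mm/hugepages, so a quick check looks
roughly like this (the 1G directory is only present if the CPU/kernel
support 1G pages):

$ ls /sys/kernel/mm/hugepages/
$ cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

And the arithmetic: 163456 mbufs x 2176 bytes is roughly 356 MB (~339 MiB),
which fits comfortably within the 240 x 2 MB = 480 MB reserved.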
> 3. The hugepages are all gone after running this application;
> /proc/meminfo became this:
>
> HugePages_Total:     240
> HugePages_Free:        0
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
>
> How can I free them again? I tried to umount and mount again, but it did
> not work.
>
> [SJ01] Just go and delete all the files created in the /mnt/hugepages/
> folder. You will have "HugePages_Free" available again - but only if you
> have stopped the application.
>
> Please advise if you have any ideas.
>
> [SJ01] The above is all that I know. I have no idea why your links are
> appearing down.
>
> -
> Shreyansh
>
> Paul

--
P.T
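P.S. For anyone finding this in the archives: the cleanup [SJ01] suggests
above would look roughly like the lines below. The rtemap_* names assume
the default EAL file prefix, the mount point may differ on your system
(check /proc/mounts for hugetlbfs), and testpmd must be stopped first.

$ grep hugetlbfs /proc/mounts
$ sudo rm /mnt/hugepages/rtemap_*
$ grep HugePages /proc/meminfo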
