Hi Anatoly,

In my test for DPDK 18.11, I notice the following:

1. Using the --legacy-mem switch, DPDK still opens 1 fd per huge page.  In 
essence, the behavior is the same with or without this switch.

2. Using --single-file-segments does reduce the open fd count to 1.  However, 
for each huge page that is in use, a .lock file is opened.  As a result, it 
still uses up a large number of fd's; a small diagnostic for checking the 
counts is sketched below.
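
In case it is useful for reproducing this, the counts can be watched by 
listing /proc/<pid>/fd.  A stand-alone sketch along those lines (purely 
illustrative, not part of DPDK or of our test code):

/* Hypothetical diagnostic: count how many fd's the calling process holds
 * by listing /proc/self/fd.  Running it (or checking /proc/<pid>/fd of the
 * DPDK process) before and after EAL init shows the per-page fd usage. */
#include <dirent.h>
#include <stdio.h>

static int count_open_fds(void)
{
    DIR *d = opendir("/proc/self/fd");
    struct dirent *e;
    int n = 0;

    if (d == NULL)
        return -1;
    while ((e = readdir(d)) != NULL)
        if (e->d_name[0] != '.')    /* skip "." and ".." */
            n++;
    closedir(d);
    return n - 1;                   /* opendir() itself holds one fd */
}

int main(void)
{
    printf("open fds: %d\n", count_open_fds());
    return 0;
}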

Thanks.
--  edwin

-----Original Message-----
From: Iain Barker 
Sent: Wednesday, February 27, 2019 8:57 AM
To: Burakov, Anatoly <anatoly.bura...@intel.com>; Wiles, Keith 
<keith.wi...@intel.com>
Cc: dev@dpdk.org; Edwin Leung <edwin.le...@oracle.com>
Subject: RE: [dpdk-dev] Question about DPDK hugepage fd change

Original Message from: Burakov, Anatoly [mailto:anatoly.bura...@intel.com] 

>I just realized that, unless you're using --legacy-mem switch, one 
>other way to alleviate the issue would be to use --single-file-segments 
>option. This will still store the fd's, however it will only do so per 
>memseg list, not per page. So, instead of 1000's of fd's with 2MB 
>pages, you'd end up with under 10. Hope this helps!

Hi Anatoly,

Thanks for the update and suggestion. We did try using --single-file-segments 
previously. Although it lowers the number of fd's allocated for tracking the 
segments, as you noted, there is still a problem.

It seems that a .lock file is created for each huge page, not for each segment. 
So with 2MB pages, the glibc limit of 1024 fd's is still exhausted quickly once 
there are ~2GB of 2MB huge pages (i.e. ~1024 pages, hence ~1024 .lock files).
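
For reference, a rough sketch of how the option gets passed at EAL init in a 
test like ours (the program name and core list are placeholders, not our real 
arguments):

/* Sketch only: enabling --single-file-segments when initializing the EAL. */
#include <rte_eal.h>

int main(void)
{
    char *eal_args[] = {
        "app",                      /* placeholder program name */
        "-l", "0-1",                /* placeholder core list */
        "--single-file-segments",   /* one segment file per memseg list */
    };

    if (rte_eal_init(4, eal_args) < 0)
        return -1;

    /* ... application setup; the per-page .lock files appear here as
     * 2MB pages get allocated ... */

    rte_eal_cleanup();
    return 0;
}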

Edwin can provide more details from his testing. In our case it happens much 
sooner: as we already use >500 fd's for the application itself, just 1GB of 
2MB huge pages (512 pages, so 512 .lock files) is enough to hit the fd limit.
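
For what it's worth, assuming the 1024 ceiling we hit is the soft 
RLIMIT_NOFILE (commonly 1024 by default; glibc's FD_SETSIZE used by select() 
is also 1024), a quick sketch to confirm it from inside the application:

/* Sketch (not from our application): print the soft/hard fd limits of
 * the calling process. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("RLIMIT_NOFILE soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);
    return 0;
}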

Thanks.
