I know you can increase the file handle limit in /etc/security/limits.conf,
but we're having a really weird issue where a CentOS 7.5 box can handle a
massive record set just fine and another running CentOS 7.6 cannot.

When I run *lsof | wc -l* on the 7.6 box after NiFi has been running for a
while, it reports anywhere from hundreds of thousands to a million open
files. Every jar, class file, etc. in the work folder is listed as an open
file, yet the content repository oddly enough holds maybe 10k-15k files at
most during ingestion of the largest pieces. So a limit of, say, 500k open
file handles feels like it should be **plenty**.
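One thing worth ruling out: on many Linux distributions, *lsof* with no
arguments emits one line per task (thread) per open file, so a heavily
multi-threaded JVM like NiFi gets counted many times over. Counting the
entries under /proc/<pid>/fd gives the real per-process descriptor count.
A minimal sketch (using the current shell's pid as a stand-in; substitute
the NiFi JVM's pid):

```shell
# Each entry in /proc/<pid>/fd is exactly one real open descriptor,
# whereas a bare `lsof | wc -l` can multiply that by the thread count.
pid=$$   # stand-in pid; replace with the NiFi JVM's pid
real=$(ls /proc/"$pid"/fd | wc -l)
echo "process $pid has $real open file descriptors"
```

If the /proc count is far below the lsof total, the limit probably isn't
actually being exhausted.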

There's a known bug in some releases of CentOS that causes PAM to kill a
session if the file handle limit is higher than 1M or unlimited.
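If that bug is in play, explicitly capping nofile below the 1M threshold in
/etc/security/limits.conf may be a workaround. An illustrative fragment
(the "nifi" user and the values are placeholders, not a recommendation):

```
# /etc/security/limits.conf -- illustrative values only
# "nifi" is a placeholder for whatever user runs the NiFi JVM
nifi  soft  nofile  50000
nifi  hard  nofile  500000
```

The limits take effect on the next login/session for that user, so the
NiFi service would need a restart after the change.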

Anyone have suggestions on what might be happening here?

Thanks,

Mike
