Oh sorry, missed one of the most important parts: we are using an 8-node 
cluster with NiFi 1.11.3 – so fully up to date.

Cheers Josef

From: Bryan Bende <bbe...@gmail.com>
Reply to: "users@nifi.apache.org" <users@nifi.apache.org>
Date: Wednesday, 4 March 2020 at 12:57
To: "users@nifi.apache.org" <users@nifi.apache.org>
Subject: Re: FetchSFTP keeps files open (max open file descriptors reached)

Hello,

What version of nifi are you using?


On Wed, Mar 4, 2020 at 5:41 AM 
<josef.zahn...@swisscom.com<mailto:josef.zahn...@swisscom.com>> wrote:
Hi guys,

We have an issue with the FetchSFTP processor and the maximum number of open 
file descriptors. In short, FetchSFTP seems to keep files open "forever" on 
our Synology NAS, so we always hit the NAS's default limit of 1024 open files 
when we try to fetch 500'000 small 1 MB files (in effect it's impossible to 
read the files, as everything is blocked after 1024 files).

We found no option to raise the max-open-files limit on the Synology NAS (but 
that's not NiFi's fault 😉). We also have other Linux machines with CentOS, but 
the behavior there isn't always the same: sometimes the file descriptors get 
closed, but sometimes they don't.

Synology has no lsof command, so this is how I've checked it:
user@nas-01:~$ sudo ls -l /proc/<SSHD SFTP process PID>/fd | wc -l
1024

Any ideas on how we can troubleshoot the issue?

Cheers Josef

--
Sent from Gmail Mobile
