Yeah. systemd completely disregarded the limits our admins were setting in
/etc/security/limits.conf and imposed some threshold of its own on us, probably
not even 10k.
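For the archives: services started by systemd don't consult
/etc/security/limits.conf at all, so the limit has to be raised on the unit
itself. A minimal sketch of the drop-in, assuming the unit is named
nifi.service and using 50000 purely as an illustrative value:

```ini
# /etc/systemd/system/nifi.service.d/limits.conf  (hypothetical drop-in path)
[Service]
LimitNOFILE=50000
```

followed by `systemctl daemon-reload` and a restart of the service;
`systemctl show nifi -p LimitNOFILE` reports the value systemd actually
applied.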

On Fri, Apr 12, 2019 at 9:35 AM Joe Witt <[email protected]> wrote:

> so you found the prob?
>
> those numbers for nifi looked good.
>
> thanks
>
> On Fri, Apr 12, 2019, 9:34 AM Mike Thomsen <[email protected]> wrote:
>
>> And.... it was SystemD...
>>
>> On Fri, Apr 12, 2019 at 8:30 AM Mike Thomsen <[email protected]>
>> wrote:
>>
>>> When I do lsof -u nifi, it says the nifi user only has 5761 handles
>>> associated with it.
>>>
>>> One warning I saw on StackExchange said that sometimes SystemD subtly
>>> messes with this stuff on RHEL.
>>>
>>> On Fri, Apr 12, 2019 at 8:14 AM Mike Thomsen <[email protected]>
>>> wrote:
>>>
>>>> About 5600-5700 starting fresh. Got to about 6500-6800 before hitting
>>>> the ceiling.
>>>>
>>>> On Fri, Apr 12, 2019 at 7:30 AM Joe Witt <[email protected]> wrote:
>>>>
>>>>> mike
>>>>>
>>>>> lsof -p <pid>
>>>>>
>>>>> with the pid of the actual nifi process is probably a better way to
>>>>> observe nifi's resource handling.  what is that count?  yes, the jars
>>>>> and such will all be loaded; you can expect a few thousand just from
>>>>> that.  then there are sockets and the content, prov, and flowfile
>>>>> repos... which adds a bit more.
>>>>>
>>>>> you should be able to view the lsof output and get a pretty good idea
>>>>> of any unexpected file handles.
>>>>>
>>>>> thanks
>>>>>
>>>>> On Fri, Apr 12, 2019, 7:00 AM Mike Thomsen <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> I know you can increase the file handle limit in
>>>>>> /etc/security/limits.conf, but we're having a really weird issue where a
>>>>>> CentOS 7.5 box can handle a massive record set just fine and another
>>>>>> running CentOS 7.6 cannot.
>>>>>>
>>>>>> When I run *lsof | wc -l* on the 7.6 box after NiFi has been running
>>>>>> for a while, it prints out hundreds of thousands to a million. Every
>>>>>> jar, class file, etc. that is part of the work folder is listed as an
>>>>>> open file, and the content repository oddly enough has maybe 10k-15k
>>>>>> files at the most during the ingestion of the largest pieces. So a
>>>>>> limit of, say, 500k open file handles feels like it should be
>>>>>> **plenty**.
>>>>>>
>>>>>> There's a known bug in some releases of CentOS that causes PAM to
>>>>>> kill a session if the file handle limit is higher than 1M or unlimited.
>>>>>>
>>>>>> Anyone have suggestions on what might be happening here?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Mike
>>>>>>
>>>>>
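A note on the counting itself: `lsof | wc -l` can overstate a process's real
usage, since lsof also lists memory-mapped files (all those jars) and can
repeat entries per thread, while the kernel enforces the nofile limit against
open file descriptors only. A minimal sketch of a more faithful count on
Linux, using the current shell's PID purely for illustration (substitute the
NiFi PID):

```shell
# Count actual file descriptors for a process via /proc (Linux).
# This is what the "Max open files" limit is enforced against, unlike
# `lsof | wc -l`, which also lists mmap'd files and thread duplicates.
pid=$$                                    # illustrative; use the NiFi PID here
fd_count=$(ls /proc/"$pid"/fd | wc -l)
echo "open fds for $pid: $fd_count"
# The limit the kernel actually applies to this specific process:
grep 'Max open files' /proc/"$pid"/limits
```

Comparing that per-process number against `/proc/<pid>/limits` shows whether
the process is anywhere near its ceiling, independent of what limits.conf
claims.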
