[
https://issues.apache.org/jira/browse/NIFI-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17496967#comment-17496967
]
Joe Witt commented on NIFI-9572:
--------------------------------
Hello. I have again reviewed the data you've provided, and it remains the case
that all of the lsof output shows never exceeding roughly 7,500 open file
handles at once, yet your app log shows you hit max open files and your 'ulimit
-a' output shows you intend it to be at least 50,000 or 100,000. To me this
again suggests that the ulimit settings for the actual 'nifi' user are not what
you mean them to be. <another explanation is possible, but let's rule this one
out>
Please run the following commands on the command line as the 'nifi' user that
NiFi actually runs as, and show both the commands and their results.
whoami
ulimit -a
It should show 'nifi', then the actual ulimits for that specific user.
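Note also that when NiFi is started as a service, the already-running process can have different limits than an interactive shell reports. On Linux the live values can be read straight from /proc; this is a sketch, and the pgrep pattern is an assumption about how the JVM process is named:

```shell
# Find the NiFi JVM pid (adjust the pattern if your process is named differently)
NIFI_PID=$(pgrep -f 'org.apache.nifi' | head -n 1)

# Show the open-files limit the running process actually has (Linux only)
grep 'open files' "/proc/${NIFI_PID}/limits"

# Count the file descriptors the process currently holds
ls "/proc/${NIFI_PID}/fd" | wc -l
```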
There is no evidence in any of the lsof outputs you showed that indicates you
came even close to 10,000 open files, much less 50,000 or more.
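If whoami/ulimit confirm the per-user limit is lower than intended, it is usually raised in /etc/security/limits.conf; but note that when NiFi is launched by systemd, pam_limits is bypassed and the limit must instead be set on the unit. A sketch of both (the service name 'nifi' is an assumption):

```shell
# /etc/security/limits.conf (applies at the next login/session for the nifi user):
#   nifi  soft  nofile  50000
#   nifi  hard  nofile  50000

# If NiFi runs under systemd, limits.conf is not consulted; set it on the unit:
#   [Service]
#   LimitNOFILE=50000
# then reload and restart:
#   systemctl daemon-reload && systemctl restart nifi
```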
Thanks
> Failed to index Provenance Events and (Too many Files)
> ------------------------------------------------------
>
> Key: NIFI-9572
> URL: https://issues.apache.org/jira/browse/NIFI-9572
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core UI
> Affects Versions: 1.15.2
> Reporter: mayki
> Priority: Major
> Attachments: bootstrap.conf, nifi-app.log, nifi-app.log.tar.gz,
> nifi.properties, nifi_691106_pid.tar.gz
>
>
> Hello
> I upgraded to NiFi 1.15.2 on 2022/01/05.
> There was no issue until the night of 2022/01/13.
> * nifi version 1.15.2
> * jdk-1.8.0_311
> And the open files limit is already high:
> {code:java}
> Last login: Fri Jan 14 09:57:06 CET 2022 on pts/2
> -bash-4.2@nifi$ ulimit -a
> core file size (blocks, -c) 0
> data seg size (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size (blocks, -f) rg
> pending signals (-i) 63278
> max locked memory (kbytes, -l) 64
> max memory size (kbytes, -m) unlimited
> open files (-n) 50000
> pipe size (512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority (-r) 0
> stack size (kbytes, -s) 8192
> cpu time (seconds, -t) unlimited
> max user processes (-u) 10000
> virtual memory (kbytes, -v) unlimited
> file locks (-x) unlimited
> {code}
>
> We get a lot of errors about the provenance_repository; they are filling up our log filesystem.
>
> {code:java}
> 2022-01-14 10:19:00,963 ERROR [Index Provenance Events-2] o.a.n.p.index.lucene.EventIndexTask Failed to index Provenance Events
> org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
>     at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:877)
>     at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:891)
>     at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1468)
>     at org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1444)
>     at org.apache.nifi.provenance.lucene.LuceneEventIndexWriter.index(LuceneEventIndexWriter.java:70)
>     at org.apache.nifi.provenance.index.lucene.EventIndexTask.index(EventIndexTask.java:202)
>     at org.apache.nifi.provenance.index.lucene.EventIndexTask.run(EventIndexTask.java:113)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.nio.file.FileSystemException: /data/nifi/provenance_repository/lucene-8-index-1642145908399/_4_Lucene80_0.dvd: Too many open files
> {code}
>
>
> We plan to upgrade all NiFi instances to 1.15.2 to avoid the log4j vulnerability,
> but it is impossible to do that while we are getting this error.
>
> Thanks for your help.
>
> Regards
>
>
>
--
This message was sent by Atlassian Jira
(v8.20.1#820001)