[
https://issues.apache.org/jira/browse/NIFI-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17393951#comment-17393951
]
Michal Šunka edited comment on NIFI-9012 at 8/5/21, 3:16 PM:
-------------------------------------------------------------
[~pvillard] - Are you interested in anything specific in the log files? To be
honest, I am not willing to share them non-anonymized, and anonymizing them
would take quite some time. But in the logs, from 80 minutes before the first
restart up to the third restart, no error stack traces are present (aside from
those concerning the site-to-site server being unavailable during restart).
Here are anonymized [^nifi.properties].
Concerning repository:
{noformat}
/opt/nifi/nifi-current$ ls -l | grep repo
drwxr-xr-x. 1026 nifi nifi 20480 Aug 4 12:58 content_repository
drwxr-xr-x. 2 nifi nifi 100 Aug 4 12:58 database_repository
drwxr-xr-x. 4 nifi nifi 52 Aug 5 11:57 flowfile_repository
drwxr-xr-x. 4 nifi nifi 8192 Aug 5 11:55 provenance_repository
{noformat}
But this listing is from now, when everything is OK. Or did you have something
different in mind?
And the container as a whole has no disk limit that I am aware of. My other
instance (with similar settings, version 1.12.1) had to produce 10 GB of logs
before it stopped working due to no-space errors. Some container settings:
[^docker_inspect_excerpt.txt]
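For reference, a quick way to rule out disk or inode exhaustion from inside the container (a sketch; the paths assume the stock Docker Hub image layout):

```shell
# Check free space and free inodes on the volume backing the NiFi install;
# a content_repository with 1000+ subdirectories can also be inode-hungry.
df -h /opt/nifi/nifi-current 2>/dev/null || df -h /
df -i /opt/nifi/nifi-current 2>/dev/null || df -i /
# Rough size of the logs directory (the 1.12.1 instance died after ~10 GB of logs).
du -sh /opt/nifi/nifi-current/logs 2>/dev/null || true
```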
> InvokeHttps
> -----------
>
> Key: NIFI-9012
> URL: https://issues.apache.org/jira/browse/NIFI-9012
> Project: Apache NiFi
> Issue Type: Bug
> Affects Versions: 1.14.0
> Environment: Dockerized NiFi with the container from Docker Hub, no
> custom processors
> Reporter: Michal Šunka
> Priority: Major
> Attachments: docker_inspect_excerpt.txt, nifi.properties,
> thread-dump.txt, thread-dump2.txt
>
>
> I was running NiFi version 1.12.1 without any problems in this regard.
> Yesterday I updated to version 1.14.0 (I really wanted the fix from
> https://issues.apache.org/jira/browse/NIFI-7849). But now I am in a much
> worse state.
> I use an InvokeHTTP processor to self-connect to the NiFi API to get the
> count of flowfiles being processed by a given process group.
> However, this InvokeHTTP started getting stuck. The processor would just spin
> up a new thread, and it would sit there indefinitely (six hours at least),
> blocking the incoming flowfile. After forceful termination of the processor
> (a simple stop is not enough), the flowfile would be released, and the next
> start would consume the flowfile just fine. (Timeouts are: connection 5
> seconds, read 10 seconds, idle 1 minute.) And of course the log is clear of
> any errors that seem to be connected to this...
> I have noticed a similar issue with other processors as well, namely the
> MergeContent processor and ExecuteSQL - e.g. a flowfile gets stuck for 15
> minutes (merge bin max age is 5 seconds). After forced termination everything
> starts working again (for some time - a few dozen minutes).
> Oh, and in all cases the thread hangs in the terminating state (red thread
> icon with a digit in parentheses; the digit sits there all the time and won't
> disappear). Only a restart of the whole NiFi instance clears the issue - for
> some time.
>
> Maybe some disk I/O issue?
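For context, the self-call the stuck InvokeHTTP performs can be approximated with curl, using the same timeouts as in the report. This is a sketch: the scheme, port, and process group id are assumptions (1.14.0 defaults to HTTPS on 8443, so the URL may differ in a real deployment).

```shell
# Hypothetical equivalent of the InvokeHTTP self-call: ask the NiFi REST API
# for the number of flowfiles queued in a process group.
# NOTE: URL and group id are illustrative; "root" is the root-group alias.
queued_count() {
  group_id="${1:-root}"
  curl --silent --show-error \
       --connect-timeout 5 --max-time 10 \
       "http://localhost:8080/nifi-api/flow/process-groups/${group_id}/status" \
    | grep -o '"flowFilesQueued":[0-9]*' | head -n 1 | cut -d: -f2
}
# Usage (requires a running NiFi): queued_count root
```

If curl with these timeouts also hangs or times out while the processor is stuck, that would point at the embedded Jetty server rather than the processor's HTTP client.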
--
This message was sent by Atlassian Jira
(v8.3.4#803005)