[ 
https://issues.apache.org/jira/browse/NIFI-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17737060#comment-17737060
 ] 

Giovanni commented on NIFI-11530:
---------------------------------

Update:

I reconfigured the repositories so that each one has its own logical volume:

!image-2023-06-26-11-13-19-289.png!

Overall performance has improved.
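
For reference, this is roughly how the repositories are now split in nifi.properties (the mount points below are illustrative, not the exact paths on our nodes):

{noformat}
# each repository on its own logical volume
nifi.flowfile.repository.directory=/mnt/nifi-flowfile/flowfile_repository
nifi.content.repository.directory.default=/mnt/nifi-content/content_repository
nifi.provenance.repository.directory.default=/mnt/nifi-provenance/provenance_repository
nifi.database.directory=/mnt/nifi-database/database_repository
{noformat}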

However, the provenance repository is still problematic:

!image-2023-06-26-11-15-00-379.png!
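
In case it helps, these are the provenance settings I would expect to bound its growth; the values below are only examples, not what is currently configured on the cluster:

{noformat}
# WriteAheadProvenanceRepository is the default implementation in NiFi 1.x
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
# cap provenance data by age and by total size, whichever limit is reached first
nifi.provenance.repository.max.storage.time=24 hours
nifi.provenance.repository.max.storage.size=10 GB
{noformat}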

The other repositories are fine, though:

!image-2023-06-26-11-16-21-096.png!

!image-2023-06-26-11-16-35-791.png!

!image-2023-06-26-11-16-48-186.png!

 

> Disk full even with nifi.content.repository.archive.max.usage.percentage set 
> to 50%
> -----------------------------------------------------------------------------------
>
>                 Key: NIFI-11530
>                 URL: https://issues.apache.org/jira/browse/NIFI-11530
>             Project: Apache NiFi
>          Issue Type: Bug
>    Affects Versions: 1.20.0
>         Environment: Ubuntu 20.04.5 LTS
>            Reporter: Giovanni
>            Priority: Major
>         Attachments: 20230605_disk_usage.jpg, content_archive.jpg, 
> flowfile_archive.jpg, image-2023-06-26-11-13-19-289.png, 
> image-2023-06-26-11-15-00-379.png, image-2023-06-26-11-16-21-096.png, 
> image-2023-06-26-11-16-35-791.png, image-2023-06-26-11-16-48-186.png, 
> jvm.jpg, nifi1-app.log, nifi2-app.log, nifi3-app.log, nifi_bug.jpg, 
> provenance_archive.jpg
>
>
> NiFi primary node reports a full disk, causing all nodes to stop working.
> Restarting the NiFi service does not resolve the issue.
> Restarting the VM does not resolve it either.
> The only way to fix it is to clean the content_repository dir:
> rm -rf ./nifi/content_repository/*
>  
> Unfortunately I have no logs from while the issue was ongoing.
>  
> UPDATE:
> I'm having the problem again.
> The archive size is above 50% on every node, peaking at 70%+ on the
> coordinator node (see attachments).
> I'm also attaching nifi-app.log this time.
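
For anyone reproducing this, the archive-related keys in nifi.properties look like the following (values shown are examples); as far as I understand, nifi.content.repository.archive.max.usage.percentage only bounds archived content claims, so content still referenced by queued FlowFiles can push the disk past that threshold:

{noformat}
nifi.content.repository.archive.enabled=true
# delete archived content older than this...
nifi.content.repository.archive.max.retention.period=12 hours
# ...or whenever usage of the partition holding the content repository exceeds this percentage
nifi.content.repository.archive.max.usage.percentage=50%
{noformat}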


