simonbence commented on code in PR #8152:
URL: https://github.com/apache/nifi/pull/8152#discussion_r1424876658


##########
nifi-docs/src/main/asciidoc/administration-guide.adoc:
##########
@@ -3538,15 +3538,19 @@ of 576.
 
 ==== Persistent repository
 
-If the value of the property `nifi.components.status.repository.implementation` is `EmbeddedQuestDbStatusHistoryRepository`, the
-status history data will be stored to the disk in a persistent manner. Data will be kept between restarts.
+If the value of the property `nifi.components.status.repository.implementation` is `org.apache.nifi.controller.status.history.questdb.EmbeddedQuestDbStatusHistoryRepository`, the
+status history data will be stored to the disk in a persistent manner. Data will be kept between restarts. In order to use persistent repository, the QuestDB NAR must be re-built with the `include-questdb` profiles enabled.
 
 |====
 |*Property*|*Description*
 |`nifi.status.repository.questdb.persist.node.days`|The number of days the node status data (such as Repository disk space free, garbage collection information, etc.) will be kept. The default values is `14`.
 |`nifi.status.repository.questdb.persist.component.days`|The number of days the component status data (i.e., stats for each Processor, Connection, etc.) will be kept. The default value is `3`.
 |`nifi.status.repository.questdb.persist.location`|The location of the persistent Status History Repository. The default value is `./status_repository`.
+|`nifi.status.repository.questdb.persist.location.backup`|The location of the database backup in case the database is being corrupted and recreated. The default value is `./status_repository_backup`.
+|`nifi.status.repository.questdb.persist.batchsize`|The QuestDb based status history repository persists the collected status information in batches. The batch size determines the maximum number of persisted status records at a given time. The default value is `1000`.

Review Comment:
   In most situations I think the default settings should work well, which is the reason they were not exposed in the original implementation. In cases where there is a high number of processors, or the node is under high load, I find it useful to give the opportunity to fine-tune. What I am thinking about, for example, is that in the case of 10-30k processors a bigger batch size might be used for persisting. These can be more extreme cases, but I find it useful to have the opportunity to affect the behaviour.
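
   For illustration, a minimal `nifi.properties` sketch of these settings for a large flow; the property names, class name, and defaults come from the documentation change above, while the `5000` batch size is a hypothetical value chosen only to show the kind of tuning meant here, not a recommendation:

   # Persistent (QuestDB-backed) status history repository
   nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.questdb.EmbeddedQuestDbStatusHistoryRepository
   # Retention in days for node and component status data
   nifi.status.repository.questdb.persist.node.days=14
   nifi.status.repository.questdb.persist.component.days=3
   # Repository location, plus the backup location used if the database is corrupted and recreated
   nifi.status.repository.questdb.persist.location=./status_repository
   nifi.status.repository.questdb.persist.location.backup=./status_repository_backup
   # Default batch size is 1000; a flow with tens of thousands of processors might persist in larger batches
   nifi.status.repository.questdb.persist.batchsize=5000

   As the documentation change above notes, this implementation is only available when the QuestDB NAR is re-built with the `include-questdb` profile, which with standard Maven profile activation would look like `mvn clean install -P include-questdb`.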


