[ https://issues.apache.org/jira/browse/NIFI-13071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17948538#comment-17948538 ]

Filip Maretić edited comment on NIFI-13071 at 4/30/25 4:58 PM:
---------------------------------------------------------------

[~zhao] are you sure that your records are not being deduplicated by the 
ClickHouse insert deduplication mechanism?
https://clickhouse.com/docs/guides/developer/deduplicating-inserts-on-retries#how-insert-deduplication-works
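
A quick way to test that hypothesis (a sketch only; {{my_table}} and the replayed batch are placeholders, the settings are standard ClickHouse settings):

{code:sql}
-- Replicated*MergeTree tables deduplicate identical insert blocks by
-- default, so a repeated batch containing the same rows is silently dropped.

-- Inspect the default deduplication windows:
SELECT name, value
FROM system.merge_tree_settings
WHERE name IN ('replicated_deduplication_window',
               'non_replicated_deduplication_window');

-- Disable insert deduplication for this session and replay the same
-- batch; if the "missing" rows now appear, deduplication dropped them.
SET insert_deduplicate = 0;
INSERT INTO my_table VALUES (...);  -- placeholder: re-run one NiFi batch
{code}

If deduplication turns out to be the cause, giving each batch a unique {{insert_deduplication_token}}, or disabling {{insert_deduplicate}}, should avoid it.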


> Writing data using PutDatabaseRecord will result in frequent loss of data
> -------------------------------------------------------------------------
>
>                 Key: NIFI-13071
>                 URL: https://issues.apache.org/jira/browse/NIFI-13071
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Examples
>    Affects Versions: 1.25.0
>         Environment: Synchronizing data to ClickHouse
>            Reporter: Sichao Zhao
>            Priority: Blocker
>         Attachments: image-2024-04-19-14-54-43-213.png
>
>
> When I save data in bulk into ClickHouse, there is no error, but a large 
> amount of data is usually lost from each data stream; about 10% of the 
> records are lost. My template is as follows: 
> !image-2024-04-19-14-54-43-213.png|width=568,height=162!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
