[
https://issues.apache.org/jira/browse/NIFI-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18014639#comment-18014639
]
Zenkovac edited comment on NIFI-14864 at 8/18/25 3:32 PM:
----------------------------------------------------------
I added max.partition.fetch.bytes = 10485760 and now I'm getting megabyte-sized
files topping the 10,000-record pull from the controller.
I had already set max.partition.fetch.bytes to 10485760, but I think it wasn't
enough.
I will try this setting against Kafka 2.8 and see the results.
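For reference, the property above is a standard Kafka consumer setting. A minimal sketch of the relevant consumer properties follows; the max.partition.fetch.bytes value matches the comment, while max.poll.records is an assumption added here because its Kafka default of 500 would match the ~500-records-per-flowfile ceiling described in the issue:

```java
import java.util.Properties;

public class KafkaFetchConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        // 10 MB per-partition fetch cap, the value used in the comment (10 * 1024 * 1024)
        props.setProperty("max.partition.fetch.bytes", "10485760");
        // Assumption, not from the comment: Kafka's default max.poll.records is 500,
        // which would cap each poll (and thus each flowfile) at ~500 records
        props.setProperty("max.poll.records", "10000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps());
    }
}
```

In NiFi these would typically be supplied as dynamic properties on the ConsumeKafka processor rather than set in code.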
was (Author: JIRAUSER294127):
I added max.partition.fetch.bytes = 10485760 and now I'm getting megabyte-sized
files topping the 10,000 pull from the controller.
I had already set max.partition.fetch.bytes to 10485760, but I think it wasn't
enough.
I will try this setting against Kafka 2.8 and see the results.
> ConsumeKafka performance
> ------------------------
>
> Key: NIFI-14864
> URL: https://issues.apache.org/jira/browse/NIFI-14864
> Project: Apache NiFi
> Issue Type: Bug
> Components: Configuration
> Affects Versions: 2.5.0
> Environment: nifi 2.5, kafka server 2.8
> Reporter: Zenkovac
> Priority: Major
>
> After switching from NiFi 1.19 to 2.5 and using ConsumeKafka, I can't get it to
> consume flowfiles with more than ~500 records per flowfile, despite having
> millions of messages available in the Kafka topic.
> This is a performance penalty for me: I now consume thousands of flowfiles
> versus a few in NiFi 1.19, which meant less disk I/O.
> This is my config:
> *Processing Strategy:* RECORD
> *Max Uncommitted Time:* 10 sec
--
This message was sent by Atlassian Jira
(v8.20.10#820010)