[
https://issues.apache.org/jira/browse/BEAM-8121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16922313#comment-16922313
]
Alexey Romanenko commented on BEAM-8121:
----------------------------------------
[~TauJan] Could you or your colleagues try to compare the results of these
simple pipelines (just read and write, without the other business logic you
have) on the same amount of data:
* Only read from Kafka *without* Reshuffle
* Read from Kafka *without* Reshuffle and write into BigQuery
* Only read from Kafka *with* Reshuffle
* Read from Kafka *with* Reshuffle and write into BigQuery
Perhaps it would help to narrow down the root cause of this issue (a minimal
sketch of these variants is below).
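For reference, here is a rough sketch of what the four test variants could look
like, assuming String keys/values and a trivial key/value BigQuery schema. The
broker address, topic, table name and the two flags ({{useReshuffle}},
{{writeToBigQuery}}) are placeholders, not taken from the attached pipeline:
{code:java}
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.Reshuffle;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaReshuffleComparison {
  public static void main(String[] args) {
    // Placeholder settings -- replace with the real broker, topic and table.
    String bootstrapServers = "kafka:9092";
    String topic = "input-topic";
    String table = "my-project:my_dataset.my_table";
    boolean useReshuffle = true;     // variants 3 and 4: with Reshuffle
    boolean writeToBigQuery = true;  // variants 2 and 4: with BigQuery write

    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Read raw key/value pairs from the single-partition topic.
    PCollection<KV<String, String>> records =
        p.apply("ReadFromKafka",
            KafkaIO.<String, String>read()
                .withBootstrapServers(bootstrapServers)
                .withTopic(topic)
                .withKeyDeserializer(StringDeserializer.class)
                .withValueDeserializer(StringDeserializer.class)
                .withoutMetadata());

    if (useReshuffle) {
      // Redistribute elements so downstream steps are not fused to the
      // single worker that performs the read.
      records = records.apply("Reshuffle", Reshuffle.viaRandomKey());
    }

    if (writeToBigQuery) {
      records
          .apply("ToTableRow", MapElements.via(
              new SimpleFunction<KV<String, String>, TableRow>() {
                @Override
                public TableRow apply(KV<String, String> kv) {
                  return new TableRow()
                      .set("key", kv.getKey())
                      .set("value", kv.getValue());
                }
              }))
          .apply("WriteToBigQuery",
              BigQueryIO.writeTableRows()
                  .to(table)
                  // Assumes the destination table already exists.
                  .withCreateDisposition(
                      BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
                  .withWriteDisposition(
                      BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
    }

    p.run();
  }
}
{code}
Running the four flag combinations on the same backlog should show whether the
throughput difference comes from the Kafka read itself, from the Reshuffle, or
from the BigQuery write.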
> Messages are not distributed across machines when consuming from a Kafka
> topic with 1 partition
> ------------------------------------------------------------------------------------------
>
> Key: BEAM-8121
> URL: https://issues.apache.org/jira/browse/BEAM-8121
> Project: Beam
> Issue Type: Bug
> Components: io-java-kafka
> Affects Versions: 2.14.0
> Reporter: TJ
> Priority: Major
> Attachments: datalake-dataflow-cleaned.zip
>
>
> Messages are consumed from Kafka using KafkaIO. Each Kafka topic contains
> only 1 partition, which means that messages can be consumed by only one
> consumer per consumer group.
> When the backlog of a topic grows and the system scales from 1 to X machines,
> all the messages seem to be processed on the same machine on which they are
> read. Because of that, message throughput does not increase when going from 1
> machine to X machines: if one machine was reading 2K messages per second, X
> machines will read the same amount.