[ https://issues.apache.org/jira/browse/KAFKA-16344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824385#comment-17824385 ]

Janardhana Gopalachar edited comment on KAFKA-16344 at 3/7/24 2:05 PM:
-----------------------------------------------------------------------

Hi [~gharris1727],

Currently, in our MirrorMaker 2 spec we have offset.lag.max: 0. Should it be 
set to 100 (the default)? Would that reduce throughput on the source or target 
topics? We couldn't find much documentation on this.

Is setting offset.lag.max to 0 contributing to the CPU load?

What value would be appropriate if it needs to be more than 100? For 
reference, our current source connector settings are:
{code:yaml}
sourceConnector:
  maxTasks: 12
  settings:
    consumer.max.poll.records: 2000
    consumer.ssl.endpoint.identification.algorithm: ""
    offset-syncs.topic.location: target
    offset-syncs.topic.replication.factor: 3
    offset.lag.max: 0
    producer.ssl.endpoint.identification.algorithm: ""
    refresh.topics.interval.seconds: 300
    replication.factor: 3
    replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
    replication.policy.separator: ""
    ssl.endpoint.identification.algorithm: ""
    sync.topic.acls.enabled: "false"
{code}
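If 100 is indeed the better choice, we would change only that one line. A 
minimal sketch of the resulting settings, assuming everything else stays 
exactly as above:
{code:yaml}
sourceConnector:
  maxTasks: 12
  settings:
    # ... all other settings unchanged from the spec above ...
    # 100 is the documented default: an offset sync is written only once the
    # downstream offset lags by more than 100 records, rather than on (nearly)
    # every record as with 0. Is that the intended usage?
    offset.lag.max: 100
{code}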
Along with the above, we have the questions below:

1. What is the proportion of mm2-offsetsyncsinternal topic writes to overall 
MM2 traffic?

2. Is there a way to tune the internal topic writes as MM2 traffic increases?

3. I want to understand the expected mm2-offsetsyncsinternal write TPS for a 
given amount of MM2 traffic. Are there any tunable parameters to reduce these 
writes, and what are the consequences of tuning, if any? (A back-of-envelope 
sketch of our current understanding follows this list.)

4. In a larger Kafka cluster, if a single broker is overloaded with 
mm2-offsetsyncsinternal traffic, can it lead to a broker crash? Are there any 
guidelines available for such scenarios? Currently, we have 6 brokers with 24K 
MM2 traffic, and internal writes are at 10K, resulting in a 20% CPU increase on 
one broker.

5. Are there any limitations on scaling Kafka brokers? That is, how far can 
the broker count be expanded in a single Kafka cluster, given the 
mm2-offsetsyncsinternal topic?

6. How can the system be dimensioned to handle MM2 internal topic writes 
effectively? Are there any recommended figures available? For instance, for a 
given amount of traffic (X), what percentage increase in CPU (Y) should each 
broker be provisioned with to handle MM2 internal topic writes? Note that on 
the other pods this extra headroom may go unused.
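
To make question 3 concrete, here is our back-of-envelope estimate. Note the 
assumptions: that offset.lag.max=0 can produce an offset-sync write for nearly 
every replicated record, and that our 24K/10K figures are per second; neither 
is a confirmed number.
{code}
Assumption: with offset.lag.max=0, up to one offset-sync write per replicated
            record; otherwise roughly one per offset.lag.max records
            per partition.

Observed:   ~24K rec/s MM2 traffic -> ~10K writes/s to the offset-syncs topic
            (about 42% of replicated traffic) with offset.lag.max=0.

Estimate:   with offset.lag.max=100, worst case ~ 24000 / 100 = 240 syncs/s,
            i.e. roughly a 40x reduction in internal topic writes.
{code}
If this reasoning is roughly right, raising offset.lag.max would address most 
of questions 1-4, with (as far as we understand) coarser consumer offset 
translation as the trade-off.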

Regards

Jana


> Internal topic mm2-offset-syncs<clustername>internal created with single 
> partition is putting more load on the broker
> ---------------------------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-16344
>                 URL: https://issues.apache.org/jira/browse/KAFKA-16344
>             Project: Kafka
>          Issue Type: Bug
>          Components: connect
>    Affects Versions: 3.5.1
>            Reporter: Janardhana Gopalachar
>            Priority: Major
>
> We are using Kafka version 3.5.1. We see that the internal topic created by 
> MirrorMaker, mm2-offset-syncs<clustername>internal, is created with a single 
> partition, due to which the CPU load on the broker that is the leader for 
> this partition is higher than on the other brokers. Can multiple partitions 
> be created for the topic so that the CPU load gets distributed?
>  
> Topic: mm2-offset-syncscluster-ainternal    TopicId: XRvTDbogT8ytNhqX2YTyrA    PartitionCount: 1    ReplicationFactor: 3    Configs: min.insync.replicas=2,cleanup.policy=compact,message.format.version=3.0-IV1
>     Topic: mm2-offset-syncscluster-ainternal    Partition: 0    Leader: 2    Replicas: 2,1,0    Isr: 2,1,0


