[ https://issues.apache.org/jira/browse/KAFKA-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17864935#comment-17864935 ]

kaushik srinivas commented on KAFKA-17101:
------------------------------------------

[~gharris1727] Below is the configuration. 

site1.ssl.keystore.filename=/etc/mirror-maker/secrets/ssl/site1/server.jks
site1.ssl.truststore.filename=/etc/mirror-maker/secrets/ssl/site1/trustchain.jks
site2.security.protocol=SSL
site1->site2.heartbeats.topic.replication.factor=3
site1.ssl.enabled.protocols=TLSv1.2,TLSv1.3
syslog=false
site2.ssl.keystore.filename=/etc/mirror-maker/secrets/ssl/site2/server.jks
site2.consumer.ssl.cipher.suites=
site1->site2.sync.topic.configs.interval.seconds=300
site1->site2.topics=.*_ALARM$,.*_INTERNAL_Intent_Changes$,product_INTERNAL_HAM_UPDATE$
site1->site2.replication.factor=3
site1->site2.emit.checkpoints.enabled=true
site2.ssl.key.password=productkeystore
tasks.max=1
site1.ssl.keystore.password=productkeystore
site1.ssl.truststore.location=/etc/kafka/shared/site1_truststore
site1.status.storage.replication.factor=3
site1.ssl.truststore.password=productkeystore
site1->site2.sync.topic.acls.enabled=false
site1->site2.refresh.topics.interval.seconds=300
site2.ssl.truststore.location=/etc/kafka/shared/site2_truststore
site2.ssl.truststore.filename=/etc/mirror-maker/secrets/ssl/site2/trustchain.jks
site1.ssl.protocol=TLSv1.2
site1->site2.replication.policy.class=RetainTopicNameReplicationPolicy
site1->site2.groups=.*
site1.security.protocol=SSL
site1->site2.groups.blacklist=console-consumer-.*, connect-.*, __.*
site1.config.storage.replication.factor=3
site1->site2.emit.heartbeats.enabled=true
site2.ssl.truststore.password=productkeystore
site1->site2.offset-syncs.topic.replication.factor=3
site1.ssl.keystore.location=/etc/kafka/shared/site1_keystore
site1.offset.storage.replication.factor=3
clusters=site1,site2
site1.bootstrap.servers=product-kafka-headless:9092
site2.ssl.protocol=TLSv1.2
site1->site2.refresh.groups.interval.seconds=300
site2.ssl.enabled=true
site2.ssl.enabled.protocols=TLSv1.2,TLSv1.3
site2.ssl.endpoint.identification.algorithm=
site1.ssl.cipher.suites=
site1->site2.checkpoints.topic.replication.factor=3
site1.producer.ssl.cipher.suites=
site2.ssl.keystore.location=/etc/kafka/shared/site2_keystore
site2.config.storage.replication.factor=3
site2.status.storage.replication.factor=3
site1.ssl.enabled=true
site1->site2.enabled=true
site1.ssl.endpoint.identification.algorithm=
site1.admin.ssl.cipher.suites=
site2.producer.ssl.cipher.suites=
site2.ssl.keystore.password=productkeystore
site2.ssl.cipher.suites=
site2.offset.storage.replication.factor=3
site1.ssl.key.password=productkeystore
site1.consumer.ssl.cipher.suites=
site2.bootstrap.servers=product-kafka-headless:9097
site1->site2.sync.topic.configs.enabled=true
site2.admin.ssl.cipher.suites=
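
For reference, the workaround on our side amounts to resetting the cleanup policy on the affected internal topic. A minimal sketch with the standard kafka-configs.sh tool shipped with Kafka (topic name taken from the exception below, bootstrap address from site1.bootstrap.servers above; since the clusters are SSL-enabled, a client properties file passed via --command-config would also be needed):

```shell
# Inspect the current cleanup.policy of the MM2 offsets topic.
kafka-configs.sh --bootstrap-server product-kafka-headless:9092 \
  --entity-type topics --entity-name mm2-offsets.site1.internal \
  --describe

# Restore the compact policy that Connect requires for offset storage.
kafka-configs.sh --bootstrap-server product-kafka-headless:9092 \
  --entity-type topics --entity-name mm2-offsets.site1.internal \
  --alter --add-config cleanup.policy=compact
```

After the alter, MM comes up cleanly, consistent with the observation in the issue description.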

> Mirror maker internal topics cleanup policy changes to 'delete' from 
> 'compact' 
> -------------------------------------------------------------------------------
>
>                 Key: KAFKA-17101
>                 URL: https://issues.apache.org/jira/browse/KAFKA-17101
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 3.4.1, 3.5.1, 3.6.1
>            Reporter: kaushik srinivas
>            Priority: Major
>
> Scenario/Setup details
> Kafka cluster 1: 3 replicas
> Kafka cluster 2: 3 replicas
> MM1 moving data from cluster 1 to cluster 2
> MM2 moving data from cluster 2 to cluster 1
> Sometimes, after a reboot of Kafka cluster 1 and the MM1 instance, we observe 
> MM failing to come up with the exception below:
> {code:java}
> {"message":"DistributedHerder-connect-1-1 - 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder - [Worker 
> clientId=connect-1, groupId=site1-mm2] Uncaught exception in herder work 
> thread, exiting: "}}
> org.apache.kafka.common.config.ConfigException: Topic 
> 'mm2-offsets.site1.internal' supplied via the 'offset.storage.topic' property 
> is required to have 'cleanup.policy=compact' to guarantee consistency and 
> durability of source connector offsets, but found the topic currently has 
> 'cleanup.policy=delete'. Continuing would likely result in eventually losing 
> source connector offsets and problems restarting this Connect cluster in the 
> future. Change the 'offset.storage.topic' property in the Connect worker 
> configurations to use a topic with 'cleanup.policy=compact'. {code}
> Once the topic is altered back to 'cleanup.policy=compact', MM works just fine.
> This happens sporadically on our setups and across a variety of scenarios. 
> We have not yet been able to identify exact reproduction steps.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)