I would like to add some additional information about the manifests used to deploy Strimzi.

These should be deployed on the SOURCE cluster.
> https://github.com/yuwtennis/apache-kafka-apps/blob/master/strimzi/kafka/ephemeral-single-internal-listener-only.yaml
> https://github.com/yuwtennis/apache-kafka-apps/blob/master/strimzi/kafka/ephemeral-single-with-external-listener.yaml
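
For reference, here is a minimal sketch of the kind of listener configuration those Kafka manifests use (the values and the external listener type are illustrative; the linked files are authoritative):

```
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1
    listeners:
      # internal plain listener, reachable via the <cluster>-kafka-bootstrap service
      - name: plain
        port: 9092
        type: internal
        tls: false
      # external listener (only present in the with-external-listener variant)
      - name: external
        port: 9094
        type: loadbalancer   # illustrative; see the linked manifest for the actual type
        tls: true
    storage:
      type: ephemeral
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
```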

This should be deployed on the TARGET cluster.
> https://github.com/yuwtennis/apache-kafka-apps/blob/master/strimzi/mm2/plain-source-tls-target.yaml
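
Roughly, that plain-source/TLS-target MirrorMaker 2 manifest has the shape sketched below; the names, addresses, and secret are illustrative, and the linked file is authoritative:

```
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  replicas: 1
  connectCluster: "my-target-cluster"
  clusters:
    # remote source cluster, reached over its plain listener
    - alias: "my-source-cluster"
      bootstrapServers: <source-bootstrap-address>:9092
    # local target cluster, reached over TLS, trusting the Strimzi cluster CA
    - alias: "my-target-cluster"
      bootstrapServers: my-cluster-kafka-bootstrap:9093
      tls:
        trustedCertificates:
          - secretName: my-cluster-cluster-ca-cert
            certificate: ca.crt
  mirrors:
    - sourceCluster: "my-source-cluster"
      targetCluster: "my-target-cluster"
      topicsPattern: ".*"
```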

Thanks,
Yu

On Sun, Feb 26, 2023 at 10:29 PM Yu Watanabe <yu.w.ten...@gmail.com> wrote:
>
> Hello.
>
> This looks like a Strimzi configuration question rather than a Kafka one, but I
> would like to help you anyway.
>
> > pod IP keeps changing in GCP, hence when MirrorMaker tries to access the pod
> > using the older IP, the pod is not found, hence the DisconnectException
>
> I think the short answer is: don't access the pods directly; have clients
> access the "kafka-bootstrap" Service resource instead.
>
> https://strimzi.io/docs/operators/latest/configuring.html#ref-list-of-kafka-cluster-resources-str
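>
> For example (the names here are illustrative), an MM2 cluster entry pointing
> at the bootstrap Service DNS name instead of a pod IP would look like:
>
> ```
>   clusters:
>     - alias: "my-source-cluster-west"
>       bootstrapServers: nossl-w-kafka-bootstrap.kafka.svc:9092
> ```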
>
> For the long answer, it will depend on your deployment model.
> Here I assumed you have the deployment model that is attached to
> this reply. I have also attached sample manifests to work out MM2.
>
> Sample Terraform code is below.
> https://github.com/yuwtennis/iac-samples/tree/main/terraform/google_cloud/multi_region_private_container_clusters
>
> Sample manifests are below.
> https://github.com/yuwtennis/apache-kafka-apps/blob/master/strimzi/mm2/plain-source-tls-target.yaml
> https://github.com/yuwtennis/apache-kafka-apps/blob/master/strimzi/kafka/ephemeral-single-with-external-listener.yaml
> https://github.com/yuwtennis/apache-kafka-apps/blob/master/strimzi/kafka/ephemeral-single-internal-listener-only.yaml
>
> After everything is deployed, I verified that after producing the
> messages below in the broker pod on the source cluster side,
>
> $ bin/kafka-console-producer.sh --bootstrap-server
> my-cluster-kafka-bootstrap:9092 --topic topic-west
> >Hello
> >World
> >Hello again
>
> the messages are mirrored to the target cluster side and can be
> consumed in the broker pod on the target cluster.
>
> $ bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 
> --list
> __consumer_offsets
> heartbeats
> mirrormaker2-cluster-configs
> mirrormaker2-cluster-offsets
> mirrormaker2-cluster-status
> my-source-cluster.checkpoints.internal
> my-source-cluster.topic-west
> $ bin/kafka-console-consumer.sh --bootstrap-server
> my-cluster-kafka-bootstrap:9092 --topic my-source-cluster.topic-west
> --offset earliest --partition 0
> Hello
> World
> Hello again
> ^CProcessed a total of 3 messages
>
> Hope it helps.
>
> Thanks,
> Yu
>
>
>
> On Fri, Feb 24, 2023 at 10:19 AM karan alang <karan.al...@gmail.com> wrote:
> >
> > I just figured out why there are so many disconnection exceptions: the
> > pod IP keeps changing in GCP, hence when MirrorMaker tries to access the pod
> > using the older IP, the pod is not found, hence the DisconnectException
> >
> > However, it is still not clear why MirrorMaker is not transferring data
> > from the source topic to the target topic
> >
> > On Thu, Feb 23, 2023 at 3:04 PM karan alang <karan.al...@gmail.com> wrote:
> >
> > > Logs with the DisconnectException (when I set the log level to INFO):
> > >
> > >
> > > ```
> > >
> > > [SourceTaskOffsetCommitter-1]
> > >
> > > 2023-02-23 22:54:13,546 INFO [Consumer clientId=consumer-null-4,
> > > groupId=null] Error sending fetch request (sessionId=1509420532, 
> > > epoch=492)
> > > to node 2: (org.apache.kafka.clients.FetchSessionHandler)
> > > [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >
> > > org.apache.kafka.common.errors.DisconnectException
> > >
> > > 2023-02-23 22:54:13,711 WARN [Consumer clientId=consumer-null-4,
> > > groupId=null] Connection to node 2
> > > (nossl-w-kafka-2.nossl-w-kafka-brokers.kafka.svc/10.98.129.75:9092) could
> > > not be established. Broker may not be available.
> > > (org.apache.kafka.clients.NetworkClient)
> > > [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >
> > > 2023-02-23 22:54:13,711 INFO [Consumer clientId=consumer-null-4,
> > > groupId=null] Error sending fetch request (sessionId=1509420532,
> > > epoch=INITIAL) to node 2: (org.apache.kafka.clients.FetchSessionHandler)
> > > [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >
> > > org.apache.kafka.common.errors.DisconnectException
> > >
> > > 2023-02-23 22:54:24,744 INFO
> > > WorkerSourceTask{id=my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0}
> > > flushing 0 outstanding messages for offset commit
> > > (org.apache.kafka.connect.runtime.WorkerSourceTask)
> > > [SourceTaskOffsetCommitter-1]
> > >
> > > 2023-02-23 22:54:37,029 INFO [Consumer clientId=consumer-null-4,
> > > groupId=null] Error sending fetch request (sessionId=1509420532,
> > > epoch=INITIAL) to node 2: (org.apache.kafka.clients.FetchSessionHandler)
> > > [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >
> > > org.apache.kafka.common.errors.DisconnectException
> > >
> > > 2023-02-23 22:55:06,431 INFO
> > > WorkerSourceTask{id=my-source-cluster-west->my-target-cluster-east.MirrorHeartbeatConnector-0}
> > > flushing 0 outstanding messages for offset commit
> > > (org.apache.kafka.connect.runtime.WorkerSourceTask)
> > > [SourceTaskOffsetCommitter-1]
> > >
> > > 2023-02-23 22:55:07,449 INFO [Consumer clientId=consumer-null-4,
> > > groupId=null] Error sending fetch request (sessionId=1509420532,
> > > epoch=INITIAL) to node 2: (org.apache.kafka.clients.FetchSessionHandler)
> > > [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >
> > > org.apache.kafka.common.errors.TimeoutException: Failed to send request
> > > after 30000 ms.
> > >
> > > 2023-02-23 22:55:10,029 INFO [Consumer clientId=consumer-null-4,
> > > groupId=null] Error sending fetch request (sessionId=1509420532,
> > > epoch=INITIAL) to node 2: (org.apache.kafka.clients.FetchSessionHandler)
> > > [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >
> > > org.apache.kafka.common.errors.DisconnectException
> > >
> > > 2023-02-23 22:55:24,745 INFO
> > > WorkerSourceTask{id=my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0}
> > > flushing 0 outstanding messages for offset commit
> > > (org.apache.kafka.connect.runtime.WorkerSourceTask)
> > > [SourceTaskOffsetCommitter-1]
> > >
> > > ```
> > >
> > > On Thu, Feb 23, 2023 at 2:12 PM karan alang <karan.al...@gmail.com> wrote:
> > >
> > >> Hello All,
> > >>
> > >> Has anyone installed KafkaMirrorMaker2 on GKE?
> > >> Need some help to debug/resolve issues that I've been having.
> > >>
> > >> I've 2 clusters on GKE with Strimzi Kafka installed:
> > >> cluster1 (nossl-e) on us-east1, cluster2 (nossl-w) on us-west1.
> > >>
> > >> Both clusters are Autopilot clusters (mentioning this, though it may
> > >> not be relevant to the issues being faced)
> > >>
> > >> source cluster ->  nossl-w
> > >> target cluster -> nossl-e
> > >>
> > >> Currently (to get MM2 working), SSL & authorization are not enabled.
> > >>
> > >> KafkaMirrorMaker2 is installed on the source cluster, i.e. nossl-w
> > >>
> > >> Here is the YAML:
> > >> ```
> > >> apiVersion: kafka.strimzi.io/v1beta2
> > >> kind: KafkaMirrorMaker2
> > >> metadata:
> > >>   name: my-mirror-maker-2
> > >> spec:
> > >>   version: 3.0.0
> > >>   replicas: 1
> > >>   connectCluster: "my-target-cluster-east"
> > >>   clusters:
> > >>   - alias: "my-source-cluster-west"
> > >>     bootstrapServers: nossl-w-kafka-bootstrap:9092
> > >>   - alias: "my-target-cluster-east"
> > >>     bootstrapServers: xx.xx.xx.xx:9094
> > >>     config:
> > >>       # -1 means it will use the default replication factor configured in
> > >> the broker
> > >>       config.storage.replication.factor: -1
> > >>       offset.storage.replication.factor: -1
> > >>       status.storage.replication.factor: -1
> > >>   mirrors:
> > >>   - sourceCluster: "my-source-cluster-west"
> > >>     targetCluster: "my-target-cluster-east"
> > >>     sourceConnector:
> > >>       config:
> > >>         replication.factor: 1
> > >>         offset-syncs.topic.replication.factor: 1
> > >>         sync.topic.acls.enabled: "false"
> > >>     heartbeatConnector:
> > >>       config:
> > >>         heartbeats.topic.replication.factor: 1
> > >>     checkpointConnector:
> > >>       config:
> > >>         checkpoints.topic.replication.factor: 1
> > >>     topicsPattern: ".*"
> > >>     groupsPattern: ".*"
> > >>
> > >> ```
> > >>
> > >> While the pods & the svc come up, MirrorMaker is not transferring data
> > >> from the source to the target topics. (mmtest, in my case, is the only
> > >> topic created on both clusters.)
> > >>
> > >> Here are the logs from the MirrorMaker pod:
> > >>
> > >> ```
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >>
> > >> 2023-02-23 22:05:54,632 DEBUG [Consumer clientId=consumer-null-5,
> > >> groupId=null] Requesting metadata update for partition
> > >> mirrormaker2-cluster-offsets-7 due to error LEADER_NOT_AVAILABLE
> > >> (org.apache.kafka.clients.Metadata)
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >>
> > >> 2023-02-23 22:05:54,632 DEBUG [Consumer clientId=consumer-null-5,
> > >> groupId=null] Updating last seen epoch for partition
> > >> mirrormaker2-cluster-offsets-21 from 226 to epoch 226 from new metadata
> > >> (org.apache.kafka.clients.Metadata)
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >>
> > >> 2023-02-23 22:05:54,632 DEBUG [Consumer clientId=consumer-null-5,
> > >> groupId=null] Updating last seen epoch for partition
> > >> mirrormaker2-cluster-status-0 from 225 to epoch 225 from new metadata
> > >> (org.apache.kafka.clients.Metadata)
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >>
> > >> 2023-02-23 22:05:54,632 DEBUG [Consumer clientId=consumer-null-5,
> > >> groupId=null] Requesting metadata update for partition
> > >> mirrormaker2-cluster-status-0 due to error LEADER_NOT_AVAILABLE
> > >> (org.apache.kafka.clients.Metadata)
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >>
> > >> 2023-02-23 22:05:54,632 DEBUG [Consumer clientId=consumer-null-5,
> > >> groupId=null] Updating last seen epoch for partition
> > >> mirrormaker2-cluster-status-1 from 226 to epoch 226 from new metadata
> > >> (org.apache.kafka.clients.Metadata)
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >>
> > >> 2023-02-23 22:05:54,632 DEBUG [Consumer clientId=consumer-null-5,
> > >> groupId=null] Updating last seen epoch for partition
> > >> mirrormaker2-cluster-status-4 from 226 to epoch 226 from new metadata
> > >> (org.apache.kafka.clients.Metadata)
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >>
> > >> 2023-02-23 22:05:54,632 DEBUG [Consumer clientId=consumer-null-5,
> > >> groupId=null] Updating last seen epoch for partition
> > >> mirrormaker2-cluster-status-2 from 226 to epoch 226 from new metadata
> > >> (org.apache.kafka.clients.Metadata)
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >>
> > >> 2023-02-23 22:05:54,632 DEBUG [Consumer clientId=consumer-null-5,
> > >> groupId=null] Updating last seen epoch for partition
> > >> mirrormaker2-cluster-status-3 from 225 to epoch 225 from new metadata
> > >> (org.apache.kafka.clients.Metadata)
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >>
> > >> 2023-02-23 22:05:54,632 DEBUG [Consumer clientId=consumer-null-5,
> > >> groupId=null] Requesting metadata update for partition
> > >> mirrormaker2-cluster-status-3 due to error LEADER_NOT_AVAILABLE
> > >> (org.apache.kafka.clients.Metadata)
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >>
> > >> 2023-02-23 22:05:54,632 DEBUG [Consumer clientId=consumer-null-5,
> > >> groupId=null] Updating last seen epoch for partition mmtest-0 from 460 to
> > >> epoch 460 from new metadata (org.apache.kafka.clients.Metadata)
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >>
> > >> 2023-02-23 22:05:54,632 DEBUG [Consumer clientId=consumer-null-5,
> > >> groupId=null] Updating last seen epoch for partition mmtest-2 from 416 to
> > >> epoch 416 from new metadata (org.apache.kafka.clients.Metadata)
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >>
> > >> 2023-02-23 22:05:54,632 DEBUG [Consumer clientId=consumer-null-5,
> > >> groupId=null] Updating last seen epoch for partition mmtest-1 from 433 to
> > >> epoch 433 from new metadata (org.apache.kafka.clients.Metadata)
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >>
> > >> 2023-02-23 22:05:54,632 DEBUG [Consumer clientId=consumer-null-5,
> > >> groupId=null] Updating last seen epoch for partition
> > >> mirrormaker2-cluster-configs-0 from 226 to epoch 226 from new metadata
> > >> (org.apache.kafka.clients.Metadata)
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >>
> > >> 2023-02-23 22:05:54,632 DEBUG [Consumer clientId=consumer-null-5,
> > >> groupId=null] Updated cluster metadata updateVersion 1597 to
> > >> MetadataCache{clusterId='yoD3YJaKTz6dvaVT2MfiOw',
> > >> nodes={1=nossl-w-kafka-1.nossl-w-kafka-brokers.kafka.svc:9092 (id: 1 
> > >> rack:
> > >> null), 2=nossl-w-kafka-2.nossl-w-kafka-brokers.kafka.svc:9092 (id: 2 
> > >> rack:
> > >> null)}, partitions=[PartitionMetadata(error=NONE,
> > >> partition=mirrormaker2-cluster-offsets-0, leader=Optional[1],
> > >> leaderEpoch=Optional[226], replicas=1, isr=1, offlineReplicas=),
> > >> PartitionMetadata(error=LEADER_NOT_AVAILABLE,
> > >> partition=mirrormaker2-cluster-status-3, leader=Optional.empty,
> > >> leaderEpoch=Optional[225], replicas=0, isr=0, offlineReplicas=0),
> > >> PartitionMetadata(error=NONE, partition=mirrormaker2-cluster-offsets-2,
> > >> leader=Optional[2], leaderEpoch=Optional[226], replicas=2, isr=2,
> > >> offlineReplicas=), PartitionMetadata(error=NONE,
> > >> partition=mirrormaker2-cluster-status-1, leader=Optional[2],
> > >> leaderEpoch=Optional[226], replicas=2, isr=2, offlineReplicas=),
> > >> PartitionMetadata(error=LEADER_NOT_AVAILABLE,
> > >> partition=mirrormaker2-cluster-offsets-4, leader=Optional.empty,
> > >> leaderEpoch=Optional[225], replicas=0, isr=0, offlineReplicas=0),
> > >> PartitionMetadata(error=NONE, partition=mmtest-0, leader=Optional[2],
> > >> leaderEpoch=Optional[460], replicas=2,1,0, isr=1,2, offlineReplicas=0),
> > >> PartitionMetadata(error=NONE, partition=mirrormaker2-cluster-offsets-6,
> > >> leader=Optional[1], leaderEpoch=Optional[226], replicas=1, isr=1,
> > >> offlineReplicas=), PartitionMetadata(error=NONE, partition=mmtest-2,
> > >> leader=Optional[2], leaderEpoch=Optional[416], replicas=0,2,1, isr=1,2,
> > >> offlineReplicas=0), PartitionMetadata(error=NONE,
> > >> partition=mirrormaker2-cluster-offsets-8, leader=Optional[2],
> > >> leaderEpoch=Optional[226], replicas=2, isr=2, offlineReplicas=),
> > >> PartitionMetadata(error=LEADER_NOT_AVAILABLE,
> > >> partition=mirrormaker2-cluster-offsets-10, leader=Optional.empty,
> > >> leaderEpoch=Optional[225], replicas=0, isr=0, offlineReplicas=0),
> > >> PartitionMetadata(error=NONE, partition=mirrormaker2-cluster-offsets-12,
> > >> leader=Optional[1], leaderEpoch=Optional[226], replicas=1, isr=1,
> > >> offlineReplicas=), PartitionMetadata(error=NONE,
> > >> partition=mirrormaker2-cluster-offsets-14, leader=Optional[2],
> > >> leaderEpoch=Optional[226], replicas=2, isr=2, offlineReplicas=),
> > >> PartitionMetadata(error=NONE, partition=mirrormaker2-cluster-offsets-15,
> > >> leader=Optional[1], leaderEpoch=Optional[226], replicas=1, isr=1,
> > >> offlineReplicas=), PartitionMetadata(error=NONE,
> > >> partition=mirrormaker2-cluster-offsets-17, leader=Optional[2],
> > >> leaderEpoch=Optional[226], replicas=2, isr=2, offlineReplicas=),
> > >> PartitionMetadata(error=LEADER_NOT_AVAILABLE,
> > >> partition=mirrormaker2-cluster-offsets-19, leader=Optional.empty,
> > >> leaderEpoch=Optional[225], replicas=0, isr=0, offlineReplicas=0),
> > >> PartitionMetadata(error=NONE, partition=mirrormaker2-cluster-offsets-21,
> > >> leader=Optional[1], leaderEpoch=Optional[226], replicas=1, isr=1,
> > >> offlineReplicas=), PartitionMetadata(error=NONE,
> > >> partition=mirrormaker2-cluster-offsets-23, leader=Optional[2],
> > >> leaderEpoch=Optional[226], replicas=2, isr=2, offlineReplicas=),
> > >> PartitionMetadata(error=LEADER_NOT_AVAILABLE,
> > >> partition=mirrormaker2-cluster-offsets-1, leader=Optional.empty,
> > >> leaderEpoch=Optional[225], replicas=0, isr=0, offlineReplicas=0),
> > >> PartitionMetadata(error=NONE, partition=mirrormaker2-cluster-status-4,
> > >> leader=Optional[2], leaderEpoch=Optional[226], replicas=2, isr=2,
> > >> offlineReplicas=), PartitionMetadata(error=NONE,
> > >> partition=mirrormaker2-cluster-offsets-3, leader=Optional[1],
> > >> leaderEpoch=Optional[226], replicas=1, isr=1, offlineReplicas=),
> > >> PartitionMetadata(error=NONE, partition=mmtest-1, leader=Optional[1],
> > >> leaderEpoch=Optional[433], replicas=1,0,2, isr=1,2, offlineReplicas=0),
> > >> PartitionMetadata(error=NONE, partition=mirrormaker2-cluster-status-2,
> > >> leader=Optional[1], leaderEpoch=Optional[226], replicas=1, isr=1,
> > >> offlineReplicas=), PartitionMetadata(error=NONE,
> > >> partition=mirrormaker2-cluster-offsets-5, leader=Optional[2],
> > >> leaderEpoch=Optional[226], replicas=2, isr=2, offlineReplicas=),
> > >> PartitionMetadata(error=LEADER_NOT_AVAILABLE,
> > >> partition=mirrormaker2-cluster-status-0, leader=Optional.empty,
> > >> leaderEpoch=Optional[225], replicas=0, isr=0, offlineReplicas=0),
> > >> PartitionMetadata(error=LEADER_NOT_AVAILABLE,
> > >> partition=mirrormaker2-cluster-offsets-7, leader=Optional.empty,
> > >> leaderEpoch=Optional[225], replicas=0, isr=0, offlineReplicas=0),
> > >> PartitionMetadata(error=NONE, partition=mirrormaker2-cluster-offsets-9,
> > >> leader=Optional[1], leaderEpoch=Optional[226], replicas=1, isr=1,
> > >> offlineReplicas=), PartitionMetadata(error=NONE,
> > >> partition=mirrormaker2-cluster-offsets-11, leader=Optional[2],
> > >> leaderEpoch=Optional[226], replicas=2, isr=2, offlineReplicas=),
> > >> PartitionMetadata(error=LEADER_NOT_AVAILABLE,
> > >> partition=mirrormaker2-cluster-offsets-13, leader=Optional.empty,
> > >> leaderEpoch=Optional[225], replicas=0, isr=0, offlineReplicas=0),
> > >> PartitionMetadata(error=LEADER_NOT_AVAILABLE,
> > >> partition=mirrormaker2-cluster-offsets-16, leader=Optional.empty,
> > >> leaderEpoch=Optional[225], replicas=0, isr=0, offlineReplicas=0),
> > >> PartitionMetadata(error=NONE, partition=mirrormaker2-cluster-offsets-18,
> > >> leader=Optional[1], leaderEpoch=Optional[226], replicas=1, isr=1,
> > >> offlineReplicas=), PartitionMetadata(error=NONE,
> > >> partition=mirrormaker2-cluster-configs-0, leader=Optional[2],
> > >> leaderEpoch=Optional[226], replicas=2, isr=2, offlineReplicas=),
> > >> PartitionMetadata(error=NONE, partition=mirrormaker2-cluster-offsets-20,
> > >> leader=Optional[2], leaderEpoch=Optional[226], replicas=2, isr=2,
> > >> offlineReplicas=), PartitionMetadata(error=LEADER_NOT_AVAILABLE,
> > >> partition=mirrormaker2-cluster-offsets-22, leader=Optional.empty,
> > >> leaderEpoch=Optional[225], replicas=0, isr=0, offlineReplicas=0),
> > >> PartitionMetadata(error=NONE, partition=mirrormaker2-cluster-offsets-24,
> > >> leader=Optional[1], leaderEpoch=Optional[226], replicas=1, isr=1,
> > >> offlineReplicas=)],
> > >> controller=nossl-w-kafka-1.nossl-w-kafka-brokers.kafka.svc:9092 (id: 1
> > >> rack: null)} (org.apache.kafka.clients.Metadata)
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >>
> > >> 2023-02-23 22:05:54,666 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Received FETCH response from node 1 for request with header
> > >> RequestHeader(apiKey=FETCH, apiVersion=12,
> > >> clientId=consumer-mirrormaker2-cluster-1, correlationId=12072):
> > >> FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1476981493,
> > >> responses=[]) (org.apache.kafka.clients.NetworkClient) [KafkaBasedLog 
> > >> Work
> > >> Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,666 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Node 1 sent an incremental fetch response with throttleTimeMs = 0 for
> > >> session 1476981493 with 0 response partition(s), 8 implied partition(s)
> > >> (org.apache.kafka.clients.FetchSessionHandler) [KafkaBasedLog Work 
> > >> Thread -
> > >> mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,667 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-2 at position FetchPosition{offset=0,
> > >> offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 34.75.2.112:9094 (id: 1 rack: null)], epoch=0}} to node 34.75.2.112:9094
> > >> (id: 1 rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher)
> > >> [KafkaBasedLog Work Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,667 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-8 at position FetchPosition{offset=0,
> > >> offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 34.75.2.112:9094 (id: 1 rack: null)], epoch=0}} to node 34.75.2.112:9094
> > >> (id: 1 rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher)
> > >> [KafkaBasedLog Work Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,667 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-17 at position FetchPosition{offset=0,
> > >> offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 34.75.2.112:9094 (id: 1 rack: null)], epoch=0}} to node 34.75.2.112:9094
> > >> (id: 1 rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher)
> > >> [KafkaBasedLog Work Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,667 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-23 at position FetchPosition{offset=0,
> > >> offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 34.75.2.112:9094 (id: 1 rack: null)], epoch=0}} to node 34.75.2.112:9094
> > >> (id: 1 rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher)
> > >> [KafkaBasedLog Work Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,667 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-5 at position FetchPosition{offset=0,
> > >> offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 34.75.2.112:9094 (id: 1 rack: null)], epoch=0}} to node 34.75.2.112:9094
> > >> (id: 1 rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher)
> > >> [KafkaBasedLog Work Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,667 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-11 at position FetchPosition{offset=0,
> > >> offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 34.75.2.112:9094 (id: 1 rack: null)], epoch=0}} to node 34.75.2.112:9094
> > >> (id: 1 rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher)
> > >> [KafkaBasedLog Work Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,667 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-20 at position FetchPosition{offset=0,
> > >> offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 34.75.2.112:9094 (id: 1 rack: null)], epoch=0}} to node 34.75.2.112:9094
> > >> (id: 1 rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher)
> > >> [KafkaBasedLog Work Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,667 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-14 at position FetchPosition{offset=2,
> > >> offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 34.75.2.112:9094 (id: 1 rack: null)], epoch=0}} to node 34.75.2.112:9094
> > >> (id: 1 rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher)
> > >> [KafkaBasedLog Work Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,667 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Built incremental fetch (sessionId=1476981493, epoch=4013) for node 1.
> > >> Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out 
> > >> of
> > >> 8 partition(s) (org.apache.kafka.clients.FetchSessionHandler)
> > >> [KafkaBasedLog Work Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,667 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(),
> > >> implied=(mirrormaker2-cluster-offsets-2, mirrormaker2-cluster-offsets-8,
> > >> mirrormaker2-cluster-offsets-14, mirrormaker2-cluster-offsets-17,
> > >> mirrormaker2-cluster-offsets-23, mirrormaker2-cluster-offsets-5,
> > >> mirrormaker2-cluster-offsets-11, mirrormaker2-cluster-offsets-20)) to
> > >> broker 34.75.2.112:9094 (id: 1 rack: null)
> > >> (org.apache.kafka.clients.consumer.internals.Fetcher) [KafkaBasedLog Work
> > >> Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,667 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Sending FETCH request with header RequestHeader(apiKey=FETCH,
> > >> apiVersion=12, clientId=consumer-mirrormaker2-cluster-1,
> > >> correlationId=12075) and timeout 30000 to node 1:
> > >> FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1,
> > >> maxBytes=52428800, isolationLevel=0, sessionId=1476981493,
> > >> sessionEpoch=4013, topics=[], forgottenTopicsData=[], rackId='')
> > >> (org.apache.kafka.clients.NetworkClient) [KafkaBasedLog Work Thread -
> > >> mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,692 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Received FETCH response from node 0 for request with header
> > >> RequestHeader(apiKey=FETCH, apiVersion=12,
> > >> clientId=consumer-mirrormaker2-cluster-1, correlationId=12073):
> > >> FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1395933211,
> > >> responses=[]) (org.apache.kafka.clients.NetworkClient) [KafkaBasedLog 
> > >> Work
> > >> Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,692 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Node 0 sent an incremental fetch response with throttleTimeMs = 0 for
> > >> session 1395933211 with 0 response partition(s), 9 implied partition(s)
> > >> (org.apache.kafka.clients.FetchSessionHandler) [KafkaBasedLog Work 
> > >> Thread -
> > >> mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,692 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-0 at position FetchPosition{offset=0,
> > >> offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 35.231.229.1:9094 (id: 0 rack: null)], epoch=0}} to node
> > >> 35.231.229.1:9094 (id: 0 rack: null)
> > >> (org.apache.kafka.clients.consumer.internals.Fetcher) [KafkaBasedLog Work
> > >> Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,692 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-12 at position FetchPosition{offset=0,
> > >> offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 35.231.229.1:9094 (id: 0 rack: null)], epoch=0}} to node
> > >> 35.231.229.1:9094 (id: 0 rack: null)
> > >> (org.apache.kafka.clients.consumer.internals.Fetcher) [KafkaBasedLog Work
> > >> Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,692 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-15 at position FetchPosition{offset=0,
> > >> offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 35.231.229.1:9094 (id: 0 rack: null)], epoch=0}} to node
> > >> 35.231.229.1:9094 (id: 0 rack: null)
> > >> (org.apache.kafka.clients.consumer.internals.Fetcher) [KafkaBasedLog Work
> > >> Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,692 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-21 at position FetchPosition{offset=0,
> > >> offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 35.231.229.1:9094 (id: 0 rack: null)], epoch=0}} to node
> > >> 35.231.229.1:9094 (id: 0 rack: null)
> > >> (org.apache.kafka.clients.consumer.internals.Fetcher) [KafkaBasedLog Work
> > >> Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,692 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-18 at position FetchPosition{offset=0,
> > >> offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 35.231.229.1:9094 (id: 0 rack: null)], epoch=0}} to node
> > >> 35.231.229.1:9094 (id: 0 rack: null)
> > >> (org.apache.kafka.clients.consumer.internals.Fetcher) [KafkaBasedLog Work
> > >> Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,692 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-24 at position FetchPosition{offset=0,
> > >> offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 35.231.229.1:9094 (id: 0 rack: null)], epoch=0}} to node
> > >> 35.231.229.1:9094 (id: 0 rack: null)
> > >> (org.apache.kafka.clients.consumer.internals.Fetcher) [KafkaBasedLog Work
> > >> Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,692 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-6 at position FetchPosition{offset=1,
> > >> offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 35.231.229.1:9094 (id: 0 rack: null)], epoch=0}} to node
> > >> 35.231.229.1:9094 (id: 0 rack: null)
> > >> (org.apache.kafka.clients.consumer.internals.Fetcher) [KafkaBasedLog Work
> > >> Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,692 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-3 at position FetchPosition{offset=2,
> > >> offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 35.231.229.1:9094 (id: 0 rack: null)], epoch=0}} to node
> > >> 35.231.229.1:9094 (id: 0 rack: null)
> > >> (org.apache.kafka.clients.consumer.internals.Fetcher) [KafkaBasedLog Work
> > >> Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,692 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Added READ_UNCOMMITTED fetch request for partition
> > >> mirrormaker2-cluster-offsets-9 at position FetchPosition{offset=1,
> > >> offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[
> > >> 35.231.229.1:9094 (id: 0 rack: null)], epoch=0}} to node
> > >> 35.231.229.1:9094 (id: 0 rack: null)
> > >> (org.apache.kafka.clients.consumer.internals.Fetcher) [KafkaBasedLog Work
> > >> Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,692 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Built incremental fetch (sessionId=1395933211, epoch=4010) for node 0.
> > >> Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out 
> > >> of
> > >> 9 partition(s) (org.apache.kafka.clients.FetchSessionHandler)
> > >> [KafkaBasedLog Work Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,692 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(),
> > >> implied=(mirrormaker2-cluster-offsets-0, mirrormaker2-cluster-offsets-6,
> > >> mirrormaker2-cluster-offsets-12, mirrormaker2-cluster-offsets-15,
> > >> mirrormaker2-cluster-offsets-21, mirrormaker2-cluster-offsets-3,
> > >> mirrormaker2-cluster-offsets-9, mirrormaker2-cluster-offsets-18,
> > >> mirrormaker2-cluster-offsets-24)) to broker 35.231.229.1:9094 (id: 0
> > >> rack: null) (org.apache.kafka.clients.consumer.internals.Fetcher)
> > >> [KafkaBasedLog Work Thread - mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,692 DEBUG [Consumer
> > >> clientId=consumer-mirrormaker2-cluster-1, groupId=mirrormaker2-cluster]
> > >> Sending FETCH request with header RequestHeader(apiKey=FETCH,
> > >> apiVersion=12, clientId=consumer-mirrormaker2-cluster-1,
> > >> correlationId=12076) and timeout 30000 to node 0:
> > >> FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1,
> > >> maxBytes=52428800, isolationLevel=0, sessionId=1395933211,
> > >> sessionEpoch=4010, topics=[], forgottenTopicsData=[], rackId='')
> > >> (org.apache.kafka.clients.NetworkClient) [KafkaBasedLog Work Thread -
> > >> mirrormaker2-cluster-offsets]
> > >>
> > >> 2023-02-23 22:05:54,699 DEBUG [Worker clientId=connect-1,
> > >> groupId=mirrormaker2-cluster] Received HEARTBEAT response from node
> > >> 2147483645 for request with header RequestHeader(apiKey=HEARTBEAT,
> > >> apiVersion=4, clientId=connect-1, correlationId=780):
> > >> HeartbeatResponseData(throttleTimeMs=0, errorCode=0)
> > >> (org.apache.kafka.clients.NetworkClient) [DistributedHerder-connect-1-1]
> > >>
> > >> 2023-02-23 22:05:54,699 DEBUG [Worker clientId=connect-1,
> > >> groupId=mirrormaker2-cluster] Received successful Heartbeat response
> > >> (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator)
> > >> [DistributedHerder-connect-1-1]
> > >>
> > >> 2023-02-23 22:05:54,732 DEBUG [Consumer clientId=consumer-null-5,
> > >> groupId=null] Sending metadata request
> > >> MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA,
> > >> name='mirrormaker2-cluster-offsets'),
> > >> MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA,
> > >> name='mirrormaker2-cluster-status'),
> > >> MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='mmtest'),
> > >> MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA,
> > >> name='mirrormaker2-cluster-configs')], allowAutoTopicCreation=true,
> > >> includeClusterAuthorizedOperations=false,
> > >> includeTopicAuthorizedOperations=false) to node
> > >> nossl-w-kafka-1.nossl-w-kafka-brokers.kafka.svc:9092 (id: 1 rack: null)
> > >> (org.apache.kafka.clients.NetworkClient)
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >> 2023-02-23 22:05:54,732 DEBUG [Consumer clientId=consumer-null-5,
> > >> groupId=null] Sending METADATA request with header
> > >> RequestHeader(apiKey=METADATA, apiVersion=11, clientId=consumer-null-5,
> > >> correlationId=17039) and timeout 30000 to node 1:
> > >> MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA,
> > >> name='mirrormaker2-cluster-offsets'),
> > >> MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA,
> > >> name='mirrormaker2-cluster-status'),
> > >> MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='mmtest'),
> > >> MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA,
> > >> name='mirrormaker2-cluster-configs')], allowAutoTopicCreation=true,
> > >> includeClusterAuthorizedOperations=false,
> > >> includeTopicAuthorizedOperations=false)
> > >> (org.apache.kafka.clients.NetworkClient)
> > >> [task-thread-my-source-cluster-west->my-target-cluster-east.MirrorSourceConnector-0]
> > >> ```
> > >>
> > >>
> > >> Intermittently, I'm seeing
> > >> org.apache.kafka.common.errors.DisconnectException
> > >> when MirrorMaker tries to connect to the broker on the source cluster, even
> > >> though MM2 is installed on the source cluster.
> > >>
> > >> Data in the source topic (mmtest on nossl-w, region us-west1) is not
> > >> moving to the target cluster/topic (mmtest on nossl-e, region us-east1).
> > >>
> > >> Please note:
> > >> if I log on to the MirrorMaker pod (in cluster nossl-w), I am, as expected,
> > >> able to access the Kafka pods on nossl-w.
> > >>
> > >> Any ideas on what needs to be done to debug/resolve this issue & get
> > >> KafkaMirrorMaker2 working?
> > >>
> > >> tia!
> > >>
> > >
>
>
>
> --
> Yu Watanabe
>
> linkedin: www.linkedin.com/in/yuwatanabe1/
> twitter:   twitter.com/yuwtennis



-- 
Yu Watanabe

linkedin: www.linkedin.com/in/yuwatanabe1/
twitter:   twitter.com/yuwtennis
