[ https://issues.apache.org/jira/browse/KAFKA-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931473#comment-16931473 ]

Christian Hagel edited comment on KAFKA-7500 at 9/17/19 1:50 PM:
-----------------------------------------------------------------

[~ryannedolan] increasing the number of tasks and tuning the 
producer.buffer.memory setting helped a lot in getting rid of the 
above-mentioned error.
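
For context, a minimal sketch of the kind of connector-level overrides meant here (assuming the worker permits client overrides via connector.client.config.override.policy=All; the values are illustrative, not taken from this ticket):
{code:java}
{
    "tasks.max": "4",
    "producer.override.buffer.memory": "134217728"
}
{code}
Raising tasks.max spreads the replication load over more tasks, while the producer.override. prefix passes buffer.memory through to the underlying producer.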

 

However, I think I stumbled upon a bug in the topic config syncing. When 
starting MM2 with the following config:
{code:java}
{
    "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
    "source.cluster.alias": "",
    "replication.policy.separator": "",
    "target.cluster.alias": "B",
    "source.cluster.bootstrap.servers": "ip:9092",
    "target.cluster.bootstrap.servers": "ip:9093",
    "sync.topic.acls.enabled": "false",
    "replication.factor": "1",
    "internal.topic.replication.factor": "1",
    "topics": ".*",
    "enabled": "true",
    "rename.topics": "false",
    "refresh.topics": "true",
    "refresh.groups": "true",
    "sync.topic.configs": "true"
}
{code}
it does indeed create all topics from the source cluster in the target cluster, 
with the correct number of partitions.

Other config parameters like cleanup.policy, however, remain at the cluster 
default. At first I thought insufficient ACLs might be the cause, but I 
reproduced the behavior with a simple docker-compose setup as well. Did I miss 
a configuration, or is this the intended behavior?
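
One way to observe this (a sketch; the topic name test-topic is a placeholder, and the bootstrap addresses are the ones from the config above) is to describe the topic configs on both clusters with the stock kafka-configs.sh tool and compare the output:
{code:bash}
# Dynamic topic configs on the source cluster
bin/kafka-configs.sh --bootstrap-server ip:9092 \
  --entity-type topics --entity-name test-topic --describe

# Same topic on the target cluster; with sync.topic.configs=true one
# would expect non-default overrides (e.g. cleanup.policy=compact)
# to show up here as well
bin/kafka-configs.sh --bootstrap-server ip:9093 \
  --entity-type topics --entity-name test-topic --describe
{code}
Note that with the empty source.cluster.alias and replication.policy.separator above, the replicated topic should keep its original name on the target cluster, so the same --entity-name works on both sides.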



> MirrorMaker 2.0 (KIP-382)
> -------------------------
>
>                 Key: KAFKA-7500
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7500
>             Project: Kafka
>          Issue Type: New Feature
>          Components: KafkaConnect, mirrormaker
>    Affects Versions: 2.4.0
>            Reporter: Ryanne Dolan
>            Assignee: Manikumar
>            Priority: Major
>              Labels: pull-request-available, ready-to-commit
>             Fix For: 2.4.0
>
>         Attachments: Active-Active XDCR setup.png
>
>
> Implement a drop-in replacement for MirrorMaker leveraging the Connect 
> framework.
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0]
> [https://github.com/apache/kafka/pull/6295]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
