Re: Connect-Mirror Error

2020-05-02 Thread mandeep gandhi
Hi,


I looked at the code [0] and it looks like you need to specify bootstrap
servers for both the source and the target. Did you happen to try this as
well?

{
  "name": "MirrorSourceConnector",
  "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
  "replication.factor": "1",
  "source.cluster.alias": "backup",
  "target.cluster.alias": "primary",
  "source.cluster.bootstrap.servers": "vip1:9092",
  "target.cluster.bootstrap.servers": "vip2:9092",
  "topics": ".*test-topic-.*",
  "groups": "consumer-group-.*",
  "emit.checkpoints.interval.seconds": "1",
  "emit.heartbeats.interval.seconds": "1",
  "sync.topic.acls.enabled": "false"
}

[0] -
https://github.com/apache/kafka/blob/8fd967e10cdfbb6e48ef1f590b8902bbf1080a71/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorConnectorConfig.java#L279
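
If you are POSTing this through the Connect REST API (rather than running
the MirrorMaker 2 driver), note that the endpoint expects the settings
wrapped in a name/config envelope. A sketch, reusing the hypothetical
vip1/vip2 addresses above:

{
  "name": "MirrorSourceConnector",
  "config": {
    "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
    "source.cluster.alias": "backup",
    "target.cluster.alias": "primary",
    "source.cluster.bootstrap.servers": "vip1:9092",
    "target.cluster.bootstrap.servers": "vip2:9092",
    "topics": ".*test-topic-.*"
  }
}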

On Sat, May 2, 2020 at 8:52 PM vishnu murali 
wrote:

> Hey Guys,
>
> Here I am posting the stack trace that occurred in connect-distributed
> while giving the mirror connector configuration:
>
> *POST*: http://localhost:8083/connectors
>
> *Request JSON body:*
> {
>   "name": "us-west-sourc",
>   "config": {
>     "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
>     "source.cluster.alias": "cluster 9092",
>     "target.cluster.alias": "cluster 9091",
>     "source.cluster.bootstrap.servers": "localhost:9092",
>     "topics": "vis-city"
>   }
> }
>
> It says that "bootstrap.servers", which has no default value, is missing.
>
> Even when I add that config to the request, it responds with the same
> error...
>
> What may be the problem?
>
> I am trying to copy data from a topic in one cluster to another cluster.
>
> 1) Using the MirrorMaker command, it is possible.
>
> But I want to do the same using a connector, by sending a request that
> then copies the data.
>
> What change do I need to make?
>
>
> [2020-05-02 20:40:43,304] ERROR WorkerConnector{id=us-west-sourc} Error
> while starting connector (org.apache.kafka.connect.runtime.WorkerConnector)
> org.apache.kafka.common.config.ConfigException: Missing required
> configuration "bootstrap.servers" which has no default value.
>     at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:477)
>     at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:467)
>     at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
>     at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:142)
>     at org.apache.kafka.clients.admin.AdminClientConfig.<init>(AdminClientConfig.java:216)
>     at org.apache.kafka.clients.admin.Admin.create(Admin.java:71)
>     at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:49)
>     at org.apache.kafka.connect.mirror.MirrorSourceConnector.start(MirrorSourceConnector.java:115)
>     at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:110)
>     at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:135)
>     at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:195)
>     at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:257)
>     at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
>     at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:126)
>     at org.apache.kafka.connect.runtime.distributed.DistributedHerder$14.call(DistributedHerder.java:1206)
>     at org.apache.kafka.connect.runtime.distributed.DistributedHerder$14.call(DistributedHerder.java:1202)
>     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>     at java.base/java.lang.Thread.run(Thread.java:830)
>


Connect-Mirror Error

2020-05-02 Thread vishnu murali
Hey Guys,

Here I am posting the stack trace that occurred in connect-distributed
while giving the mirror connector configuration:

*POST*: http://localhost:8083/connectors

*Request JSON body:*
{
  "name": "us-west-sourc",
  "config": {
    "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
    "source.cluster.alias": "cluster 9092",
    "target.cluster.alias": "cluster 9091",
    "source.cluster.bootstrap.servers": "localhost:9092",
    "topics": "vis-city"
  }
}

It says that "bootstrap.servers", which has no default value, is missing.

Even when I add that config to the request, it responds with the same
error...

What may be the problem?

I am trying to copy data from a topic in one cluster to another cluster.

1) Using the MirrorMaker command, it is possible.

But I want to do the same using a connector, by sending a request that then
copies the data.

What change do I need to make?


[2020-05-02 20:40:43,304] ERROR WorkerConnector{id=us-west-sourc} Error
while starting connector (org.apache.kafka.connect.runtime.WorkerConnector)
org.apache.kafka.common.config.ConfigException: Missing required
configuration "bootstrap.servers" which has no default value.
    at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:477)
    at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:467)
    at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
    at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:142)
    at org.apache.kafka.clients.admin.AdminClientConfig.<init>(AdminClientConfig.java:216)
    at org.apache.kafka.clients.admin.Admin.create(Admin.java:71)
    at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:49)
    at org.apache.kafka.connect.mirror.MirrorSourceConnector.start(MirrorSourceConnector.java:115)
    at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:110)
    at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:135)
    at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:195)
    at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:257)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:126)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder$14.call(DistributedHerder.java:1206)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder$14.call(DistributedHerder.java:1202)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:830)


Re: Connect-Mirror 2.5.0

2020-05-02 Thread vishnu murali
Hi Liam,

Thanks for the reply,

From that page, I want the connector type of execution.

Here:

PUT /connectors/us-west-source/config HTTP/1.1

{
  "name": "us-west-source",
  "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
  "source.cluster.alias": "us-west",
  "target.cluster.alias": "us-east",
  "source.cluster.bootstrap.servers": "us-west-host1:9091",
  "topics": ".*"
}

I am able to mention the source in source.cluster.bootstrap.servers.


So where do I need to configure the target cluster's bootstrap server address?


For example, the source is localhost:9092 and the target is localhost:8092.

So where do I need to mention the target?


On Sat, May 2, 2020, 19:13 Liam Clarke-Hutchinson 
wrote:

> Hi Vishnu,
>
> As per my earlier email:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0#KIP-382:MirrorMaker2.0-Walkthrough:RunningMirrorMaker2.0
>
> In the same vein, any questions, hit me up,
>
> Liam Clarke-Hutchinson
>
> On Sat, May 2, 2020 at 9:56 PM vishnu murali 
> wrote:
>
> > Hey Guys
> >
> > I am using the Apache version of 2.5.
> >
> > Correct me if I am wrong!!
> >
> > There is a jar file called connect-mirror-2.5.0 in the libs folder. I
> > think it is a connector to copy topic data from one cluster to another,
> > like MirrorMaker.
> >
> > So I started zookeeper
> > I started Kafka server
> > I started connect-distributed
> >
> >
> > So what are the JSON configurations to give in the POST request to make
> > that connector work?
> >
> > How can I specify the source and destination clusters and the whitelisted
> > topics in that configuration, and proceed?
> >
>


Re: kafka rdd save to hive error

2020-05-02 Thread Liam Clarke-Hutchinson
E.g., as per
https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html

// Batch read from Kafka (spark.read rather than readStream)
val df = spark
  .read
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribe", "topic1")
  .option("startingOffsets", "earliest") // from the start of the topic...
  .option("endingOffsets", "latest")     // ...up to the current end
  .load()
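
To then get the data into Hive, as the thread subject asks, a minimal
sketch might look like the following (assuming a SparkSession built with
.enableHiveSupport(); "kafka_events" is a hypothetical table name):

// Kafka key/value columns arrive as binary, so cast them before persisting
val toSave = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

// Append into a Hive-managed table
toSave.write
  .mode("append")
  .saveAsTable("kafka_events")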


On Sun, May 3, 2020 at 1:50 AM Liam Clarke-Hutchinson <
liam.cla...@adscale.co.nz> wrote:

> Hello 姜戎 ,
>
> Unfortunately there's not enough information in your email for us to help
> you. Are you trying to use Spark Batch to read from Kafka? Have you tried
> setting "endingOffsets" to "latest" instead of an arbitrary number?
>
> Kind regards,
>
> Liam Clarke-Hutchinson
>
>
> On Fri, May 1, 2020 at 2:36 AM 姜戎 <215979...@qq.com> wrote:
>
>> failed to get records for compacted ...after polling for12
>> partition 0 offset min=0 max=1427265
>> get offsetrange 0 until 50 to make rdd
>
>


Re: kafka rdd save to hive error

2020-05-02 Thread Liam Clarke-Hutchinson
Hello 姜戎 ,

Unfortunately there's not enough information in your email for us to help
you. Are you trying to use Spark Batch to read from Kafka? Have you tried
setting "endingOffsets" to "latest" instead of an arbitrary number?

Kind regards,

Liam Clarke-Hutchinson


On Fri, May 1, 2020 at 2:36 AM 姜戎 <215979...@qq.com> wrote:

> failed to get records for compacted ...after polling for12
> partition 0 offset min=0 max=1427265
> get offsetrange 0 until 50 to make rdd


Re: Kafka: Messages disappearing from topics, largestTime=0

2020-05-02 Thread Liam Clarke-Hutchinson
Good luck JP, do try it with the volume switching commented out, and see
how it goes.

On Fri, May 1, 2020 at 6:50 PM JP MB  wrote:

> Thank you very much for the help anyway.
>
> Best regards
>
> On Fri, May 1, 2020, 00:54 Liam Clarke-Hutchinson <
> liam.cla...@adscale.co.nz>
> wrote:
>
> > So the logs show a healthy shutdown, so we can eliminate that as an
> issue.
> > I would look next at the volume management during a rollout based on the
> > other error messages you had earlier about permission denied etc. It's
> > possible there's some journalled but not flushed changes in those time
> > indexes, but at this point we're getting into filesystem internals which
> > aren't my forte. But if you can temporarily disable the volume switching
> > and do a test roll out, see if you get the same problems or not, would
> help
> > eliminate it or confirm it.
> >
> > Sorry I can't help further on that.
> >
> > On Fri, May 1, 2020 at 5:34 AM JP MB  wrote:
> >
> > > It took me a bit because I needed logs of the server shutting down when
> > > this occurs. Here they are; I can see some errors:
> > > https://gist.github.com/josebrandao13/e8b82469d3e9ad91fbf38cf139b5a726
> > >
> > > Regarding systemd, the closest I could find to TimeoutStopSec was
> > > DefaultTimeoutStopUSec=1min 30s, which looks to be 90 seconds. I could not
> > > find any KillSignal or RestartKillSignal. You can see the output of
> > > systemctl show --all here:
> > > https://gist.github.com/josebrandao13/f2dd646fab19b19f127981fce92d78c4
> > >
> > > Once again, thanks for the help.
> > >
> > > Em qui., 30 de abr. de 2020 às 15:04, Liam Clarke-Hutchinson <
> > > liam.cla...@adscale.co.nz> escreveu:
> > >
> > > > I'd also suggest eyeballing your systemd conf to verify that someone
> > > > hasn't set a very low TimeoutStopSec, or that KillSignal/RestartKillSignal
> > > > haven't been configured to SIGKILL (confusingly named, imo, as the
> > > > default for KillSignal is SIGTERM).
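> > > >
> > > > For example, a hypothetical drop-in raising the stop timeout and making
> > > > the signal explicit (the path and values here are illustrative only):
> > > >
> > > >   # /etc/systemd/system/kafka.service.d/override.conf
> > > >   [Service]
> > > >   TimeoutStopSec=180
> > > >   KillSignal=SIGTERM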
> > > >
> > > > Also, the Kafka broker logs at shutdown look very different if it shut
> > > > down correctly vs if it didn't. Could you perhaps put them in a Gist
> > > > and email the link?
> > > >
> > > > Just trying to make sure basic assumptions are holding :)
> > > >
> > > > On Fri, 1 May 2020, 1:21 am JP MB, 
> wrote:
> > > >
> > > > > Hi,
> > > > > It's quite a complex script generated with Ansible, where we use a/b
> > > > > deployment, and honestly I don't have full knowledge of it. I can
> > > > > share the general guidelines of what is done:
> > > > >
> > > > > > - Any old volumes (from previous releases) are removed (named with
> > > > > >   suffix '-old')
> > > > > > - Detach the volumes attached to the old host
> > > > > > - Stop the service in the old host - uses systemctl stop kafka
> > > > > > - Attempt to create a CNAME volume: this is a volume with the same
> > > > > >   name that will be attached to the new box. Except for the very
> > > > > >   first run, this task is used to get the information about the
> > > > > >   existing volume. (no suffix)
> > > > > > - A new volume is created as a copy of the CNAME volume (named with
> > > > > >   suffix '-new')
> > > > > > - The new volume is attached to the host/vm (named with suffix '-new')
> > > > > > - The new volume is formatted (except for the very first run, it's
> > > > > >   already formatted) (named with suffix '-new')
> > > > > > - The new volume is mounted (named with suffix '-new')
> > > > > > - Start the service in the new host - uses systemctl start kafka
> > > > > > - If everything went well stopping/starting services:
> > > > > >   - The volume on the old host is renamed with the suffix '-old'.
> > > > > >   - The new volume is renamed, stripping the suffix '-new'.
> > > > >
> > > > >
> > > > > I ran a new experiment today with some interesting findings. I had
> > > > > 518 messages in a given topic; after a deployment, 9 were lost due to
> > > > > this problem, in partitions 13, 15, 16 and 17. These are all the
> > > > > errors I could find in the time index files before the deployment
> > > > > (left is the partition number):
> > > > >
> > > > > > 11 -> timestamp mismatch on 685803 - offsets from 685801 to 685805, no message loss here
> > > > > > 12 -> -1 error, no indexes on the log - base segment was the last offset, so ok
> > > > > > 13 -> timestamp mismatch error on 823168 - offsets from 323168 to 823172, four messages lost
> > > > > > 14 -> timestamp mismatch on 619257 - offsets from 619253 to 619258, no message loss here
> > > > > > 15 -> timestamp mismatch on 658783 - offsets from 658783 to 658784, one message missing
> > > > > > 16 -> timestamp mismatch on 623508 - offsets from 623508 to 623509, one message missing
> > > > > > 17 -> timestamp mismatch on 515479 - offsets from 515479 to 515481, two messages missing
> > > > 

Re: Connect-Mirror 2.5.0

2020-05-02 Thread Liam Clarke-Hutchinson
Hi Vishnu,

As per my earlier email:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0#KIP-382:MirrorMaker2.0-Walkthrough:RunningMirrorMaker2.0

In the same vein, any questions, hit me up,

Liam Clarke-Hutchinson

On Sat, May 2, 2020 at 9:56 PM vishnu murali 
wrote:

> Hey Guys
>
> I am using the Apache version of 2.5.
>
> Correct me if I am wrong!!
>
> There is a jar file called connect-mirror-2.5.0 in the libs folder. I
> think it is a connector to copy topic data from one cluster to another,
> like MirrorMaker.
>
> So I started zookeeper
> I started Kafka server
> I started connect-distributed
>
>
> So what are the JSON configurations to give in the POST request to make
> that connector work?
>
> How can I specify the source and destination clusters and the whitelisted
> topics in that configuration, and proceed?
>


Re: Connector For MirrorMaker

2020-05-02 Thread Liam Clarke-Hutchinson
Hi Vishnu,

You should look at this:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0#KIP-382:MirrorMaker2.0-Walkthrough:RunningMirrorMaker2.0

Assuming you've downloaded/checked out Kafka 2.5, you should have
connect-mirror-maker.sh under the bin directory.
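
Roughly, the walkthrough boils down to a small properties file plus that
script. A sketch, with placeholder cluster aliases and hosts (adjust to
your setup):

  # mm2.properties
  clusters = us-west, us-east
  us-west.bootstrap.servers = us-west-host1:9092
  us-east.bootstrap.servers = us-east-host1:9092
  us-west->us-east.enabled = true
  us-west->us-east.topics = .*

and then:

  ./bin/connect-mirror-maker.sh mm2.properties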

Good luck, any questions, let me know,

Liam Clarke-Hutchinson

On Sat, May 2, 2020 at 2:05 AM vishnu murali 
wrote:

> Hi Robin
>
> I am using Apache Kafka; there is a script called kafka-mirror-maker.bat
> that takes consumer and producer properties to copy a topic from one
> cluster to another.
>
> I want to do that by using a connector.
>
> I wasn't aware of MirrorMaker 2, and I don't know how to download and
> configure it with Apache Kafka.
>
> Can you guide me on how to start with the MirrorMaker 2 connector?
>
> On Fri, May 1, 2020, 19:13 Robin Moffatt  wrote:
>
> > Are you talking about MirrorMaker 2? That runs as a connector.
> >
> > If not, perhaps you can clarify your question a bit as to what it is you
> > are looking for.
> >
> >
> > --
> >
> > Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
> >
> >
> > On Fri, 1 May 2020 at 13:57, vishnu murali 
> > wrote:
> >
> > > Hi Guys
> > >
> > > Previously I asked a question about MirrorMaker, and it is solved now.
> > >
> > > So now I need to know: are there any connectors available for the same?
> > >
> > > Like the JdbcConnector acts as a source and sink for DB connections, is
> > > there any connector available for performing mirror operations?
> > >
> > > Or has someone created their own connectors for this purpose?
> > >
> >
>


Re: Consumer not receiving messages after connection loss

2020-05-02 Thread Liam Clarke-Hutchinson
Hi Goran,

Glad you figured it out :) And interesting that there was nothing in the
server logs (as far as I can tell, it's a bit hard to read) which showed
why the server was terminating the connection. Just wanted to provide
feedback that your second attempt at sending the logs was really hard to
read also. Could I suggest something like pastebin.com or gist.github.com
for future issues? They keep the formatting intact and make it easier to
delve into your logs.

Cheers,

Liam Clarke-Hutchinson

On Tue, Apr 28, 2020 at 5:59 AM Goran Sliskovic 
wrote:

>  Apparently the issue is that the client tries to fetch more bytes than
> the network can deliver within the response timeout (the request.timeout.ms
> client property). So in my case the request could not finish in 30 seconds,
> got cancelled and reissued, and it failed again and again. So either
> lowering max.partition.fetch.bytes or increasing request.timeout.ms should
> allow the Kafka client to consume under low-speed conditions. A diagnostic
> would be nice in the Kafka log though, at least at the info level.
>
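> A minimal sketch of those two overrides in consumer properties form (the
> values are illustrative, not recommendations):
>
>   max.partition.fetch.bytes=262144
>   request.timeout.ms=120000
>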
> On Sunday, April 26, 2020, 01:41:34 PM GMT+2, Goran Sliskovic
>  wrote:
>
>   Sorry, apparently the logs got formatted into an unreadable state.
> Attached this time.
>
>
> On Sunday, April 26, 2020, 01:35:38 PM GMT+2, Goran Sliskovic
>  wrote:
>
>  Hi,
> I'm tracking an issue in a production system (data flow stops suddenly for
> hours). It seems to be triggered by network errors; however, communication
> is not restored for hours even though networking is restored. I have
> managed to put a test system into a similar state. Symptoms:
> The client (a Java application) is apparently connected and goes through
> the infinite poll loop, but no records are returned. I can see from Send-Q
> in netstat (and confirm with Wireshark) that subscription data is coming
> through the socket.
> netstat (Windows, Java client):
> netstat -n | findstr 9092
>   TCP    192.168.128.196:55759  192.168.128.89:9092  ESTABLISHED
>   TCP    192.168.128.196:56069  192.168.128.89:9092  ESTABLISHED
>
> Linux (kafka_2.12-2.5.0):
> netstat -n | grep 9092 | grep 196
> tcp6   0       0  192.168.128.89:9092  192.168.128.196:55759  ESTABLISHED
> tcp6   0  121040  192.168.128.89:9092  192.168.128.196:56069  ESTABLISHED
>
> Client logs:
>  2020-04-26 12:56:26,410 [Thread-0]  ConsumerConfig : ConsumerConfig values:
>   allow.auto.create.topics = true
>   auto.commit.interval.ms = 1000
>   auto.offset.reset = earliest
>   bootstrap.servers = [192.168.128.89:9092]
>   check.crcs = true
>   client.dns.lookup = default
>   client.id =
>   client.rack =
>   connections.max.idle.ms = 54
>   default.api.timeout.ms = 6
>   enable.auto.commit = false
>   exclude.internal.topics = true
>   fetch.max.bytes = 52428800
>   fetch.max.wait.ms = 500
>   fetch.min.bytes = 1
>   group.id = test_dummy_group4
>   group.instance.id = null
>   heartbeat.interval.ms = 3000
>   interceptor.classes = []
>   internal.leave.group.on.close = true
>   isolation.level = read_uncommitted
>   key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
>   max.partition.fetch.bytes = 1048576
>   max.poll.interval.ms = 12
>   max.poll.records = 1
>   metadata.max.age.ms = 30
>   metric.reporters = []
>   metrics.num.samples = 2
>   metrics.recording.level = INFO
>   metrics.sample.window.ms = 3
>   partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
>   receive.buffer.bytes = 65536
>   reconnect.backoff.max.ms = 1000
>   reconnect.backoff.ms = 50
>   request.timeout.ms = 3
>   retry.backoff.ms = 100
>   sasl.client.callback.handler.class = null
>   sasl.jaas.config = null
>   sasl.kerberos.kinit.cmd = /usr/bin/kinit
>   sasl.kerberos.min.time.before.relogin = 6
>   sasl.kerberos.service.name = null
>   sasl.kerberos.ticket.renew.jitter = 0.05
>   sasl.kerberos.ticket.renew.window.factor = 0.8
>   sasl.login.callback.handler.class = null
>   sasl.login.class = null
>   sasl.login.refresh.buffer.seconds = 300
>   sasl.login.refresh.min.period.seconds = 60
>   sasl.login.refresh.window.factor = 0.8
>   sasl.login.refresh.window.jitter = 0.05
>   sasl.mechanism = GSSAPI
>   security.protocol = PLAINTEXT
>   security.providers = null
>   send.buffer.bytes = 131072
>   session.timeout.ms = 3
>   ssl.cipher.suites = null
>   ssl.enabled.protocols = [TLSv1.2]
>   ssl.endpoint.identification.algorithm = https
>   ssl.key.password = null
>   ssl.keymanager.algorithm = SunX509
>   ssl.keystore.location = null
>   ssl.keystore.password = null
>   ssl.keystore.type = JKS
>   ssl.protocol = TLSv1.2
>   ssl.provider = null
>   ssl.secure.random.implementation = null
>   ssl.trustmanager.algorithm = PKIX
>   ssl.truststore.location = null
>   ssl.truststore.password = null
>   ssl.truststore.type = JKS
>   value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
>
> INFO  2020-04-26 12:56:27,158 [Thread-0]  AppInfoParser : Kafka version: 2.5.0
> INFO  2020-04-26 12:56:27,158 [Thread-0]  AppInfoParser : Kafka commitId: 66563e712b0b9f84
> INFO  2020-04-26 12:56:27,158 [Thread-0]  AppInfoParser : Kafka startTimeMs: 1587898587150
> INFO  2020-04-26
> 

Connect-Mirror 2.5.0

2020-05-02 Thread vishnu murali
Hey Guys

I am using the Apache version of 2.5.

Correct me if I am wrong!!

There is a jar file called connect-mirror-2.5.0 in the libs folder. I think
it is a connector to copy topic data from one cluster to another, like
MirrorMaker.

So I started zookeeper
I started Kafka server
I started connect-distributed


So what are the JSON configurations to give in the POST request to make
that connector work?

How can I specify the source and destination clusters and the whitelisted
topics in that configuration, and proceed?