Re: [VOTE] KIP-382 MirrorMaker 2.0

2018-12-21 Thread Harsha
+1 (binding). Nice work Ryanne.
-Harsha

On Fri, Dec 21, 2018, at 8:14 AM, Andrew Schofield wrote:
> +1 (non-binding)
> 
> Andrew Schofield
> IBM Event Streams
> 
> On 21/12/2018, 01:23, "Srinivas Reddy"  wrote:
> 
> +1 (non binding)
> 
> Thank you Ryan for the KIP, let me know if you need support in 
> implementing
> it.
> 
> -
> Srinivas
> 
> - Typed on tiny keys. pls ignore typos.{mobile app}
> 
> 
> On Fri, 21 Dec, 2018, 08:26 Ryanne Dolan wrote:
> > Thanks for the votes so far!
> >
> > Due to recent discussions, I've removed the high-level REST API from the
> > KIP.
> >
> > On Thu, Dec 20, 2018 at 12:42 PM Paul Davidson 
> 
> > wrote:
> >
> > > +1
> > >
> > > Would be great to see the community build on the basic approach we 
> took
> > > with Mirus. Thanks Ryanne.
> > >
> > > On Thu, Dec 20, 2018 at 9:01 AM Andrew Psaltis 
>  > >
> > > wrote:
> > >
> > > > +1
> > > >
> > > > Really looking forward to this and to helping in any way I can. 
> Thanks
> > > for
> > > > kicking this off Ryanne.
> > > >
> > > > On Thu, Dec 20, 2018 at 10:18 PM Andrew Otto 
> > wrote:
> > > >
> > > > > +1
> > > > >
> > > > > This looks like a huge project! Wikimedia would be very excited to
> > have
> > > > > this. Thanks!
> > > > >
> > > > > On Thu, Dec 20, 2018 at 9:52 AM Ryanne Dolan 
> 
> > > > > wrote:
> > > > >
> > > > > > Hey y'all, please vote to adopt KIP-382 by replying +1 to this
> > > thread.
> > > > > >
> > > > > > For your reference, here are the highlights of the proposal:
> > > > > >
> > > > > > - Leverages the Kafka Connect framework and ecosystem.
> > > > > > - Includes both source and sink connectors.
> > > > > > - Includes a high-level driver that manages connectors in a
> > dedicated
> > > > > > cluster.
> > > > > > - High-level REST API abstracts over connectors between multiple
> > > Kafka
> > > > > > clusters.
> > > > > > - Detects new topics, partitions.
> > > > > > - Automatically syncs topic configuration between clusters.
> > > > > > - Manages downstream topic ACL.
> > > > > > - Supports "active/active" cluster pairs, as well as any number 
> of
> > > > active
> > > > > > clusters.
> > > > > > - Supports cross-data center replication, aggregation, and other
> > > > complex
> > > > > > topologies.
> > > > > > - Provides new metrics including end-to-end replication latency
> > > across
> > > > > > multiple data centers/clusters.
> > > > > > - Emits offsets required to migrate consumers between clusters.
> > > > > > - Tooling for offset translation.
> > > > > > - MirrorMaker-compatible legacy mode.
> > > > > >
> > > > > > Thanks, and happy holidays!
> > > > > > Ryanne
> > > > > >
> > > > >
> > > >
> > >
> > >
> > > --
> > > Paul Davidson
> > > Principal Engineer, Ajna Team
> > > Big Data & Monitoring
> > >
> >
> 
> 


Re: KIP-408: Add Asynchronous Processing to Kafka Streams

2018-12-21 Thread Boyang Chen
Thanks Richard for proposing this feature! We have also encountered similar 
feature requests, and we want to define a generic async processing API.

However, I guess the motivation here is that we should skip big records during 
normal processing, or let a separate task handle those records that take P99 
processing time. My feeling is that if such edge cases happen, could we just 
skip the bad record and continue processing the next record?

Also, I want to understand what kind of ordering guarantee we are going to 
provide with this new API, or whether there is no ordering guarantee at all. 
Could we discuss any potential issues if the consumer needs to process 
out-of-order messages?

Best,
Boyang

From: Richard Yu 
Sent: Saturday, December 22, 2018 2:00 AM
To: dev@kafka.apache.org
Subject: KIP-408: Add Asynchronous Processing to Kafka Streams

Hi all,

Lately, there has been considerable interest in adding asynchronous
processing to Kafka Streams.
Here is the KIP for such an addition:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-408%3A+Add+Asynchronous+Processing+To+Kafka+Streams

I wish to discuss the best ways to approach this problem.

Thanks,
Richard Yu


Re: KIP-408: Add Asynchronous Processing to Kafka Streams

2018-12-21 Thread Richard Yu
Hi Boyang,

Thanks for pointing out the possibility of skipping bad records (it never
crossed my mind). I suppose we could make skipping a bad record an option
for the user, but that was never the intention of this KIP. I could log a
JIRA for it, though I think it is out of the KIP's scope.

As for the ordering guarantees: if you are using the standard Kafka design
of one thread per task, then everything will pretty much remain the same.
However, if we are talking about using multiple threads per task (which is
something that this KIP proposes), then we should probably expect the
behavior to be somewhat similar to Samza's Async Task, as stated in the JIRA
for this KIP (second-to-last comment). Ordering would no longer be possible
(so yes, basically no guarantee at all).

How the user handles out-of-order messages is not something I'm well versed
in. I guess they can try to put the messages back in order some time later
on, but I honestly don't know what they will do.
It would be good if you could give me some insight into this.
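For what it's worth, the "put the messages back in order" idea is usually implemented as a reorder buffer: results of asynchronously processed records are held until every earlier offset has completed, then released in offset order. The sketch below is purely my own illustration of that idea, not anything proposed in the KIP:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Buffers asynchronously-completed records and releases them in offset order.
// Illustrative only: real code would also bound the buffer and decide what to
// do with failed records (skip, retry, ...).
public class ReorderBuffer {
    private final TreeMap<Long, String> pending = new TreeMap<>();
    private long nextOffset;

    public ReorderBuffer(long firstOffset) {
        this.nextOffset = firstOffset;
    }

    // Called when async processing of the record at `offset` completes,
    // possibly out of order. Returns any records that are now in order.
    public List<String> complete(long offset, String result) {
        pending.put(offset, result);
        List<String> ready = new ArrayList<>();
        while (!pending.isEmpty() && pending.firstKey() == nextOffset) {
            ready.add(pending.remove(nextOffset));
            nextOffset++;
        }
        return ready;
    }

    public static void main(String[] args) {
        ReorderBuffer buf = new ReorderBuffer(0);
        System.out.println(buf.complete(2, "c")); // []  (offsets 0 and 1 missing)
        System.out.println(buf.complete(0, "a")); // [a]
        System.out.println(buf.complete(1, "b")); // [b, c]
    }
}
```

The obvious trade-off is memory: one slow record holds back everything behind it until it completes, which is exactly the head-of-line blocking the KIP is trying to avoid.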

Cheers,
Richard


On Fri, Dec 21, 2018 at 4:24 PM Boyang Chen  wrote:

> Thanks Richard for proposing this feature! We also have encountered some
> similar feature request that we want to define a generic async processing
> API.
>
> However I guess the motivation here is that we should skip big records
> during normal processing, or let a separate task handle those records who
> takes P99 processing time. Since my feeling is that if some edge cases
> happen, could we just skip the bad record and continue processing next
> record?
>
> Also I want to understand what kind of ordering guarantee we are gonna
> provide with this new API, or there is no ordering guarantee at all?  Could
> we discuss any potential issues if consumer needs to process out-of-order
> messages?
>
> Best,
> Boyang
> 
> From: Richard Yu 
> Sent: Saturday, December 22, 2018 2:00 AM
> To: dev@kafka.apache.org
> Subject: KIP-408: Add Asynchronous Processing to Kafka Streams
>
> Hi all,
>
> Lately, there has been considerable interest in adding asynchronous
> processing to Kafka Streams.
> Here is the KIP for such an addition:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-408%3A+Add+Asynchronous+Processing+To+Kafka+Streams
>
> I wish to discuss the best ways to approach this problem.
>
> Thanks,
> Richard Yu
>


Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2018-12-21 Thread Sönke Liebau
Hi Ryanne,

just to briefly check in, am I understanding your mail correctly, that
you want to pick up the "multi-cluster/herder/worker features" in a
different KIP at some time? If yes, please feel free to let me know if
I can provide any help on that front. Otherwise, I am also happy to
draft a proposal as basis for discussion.

Best regards,
Sönke

On Fri, Dec 21, 2018 at 1:11 AM Ryanne Dolan  wrote:
>
> Jun, let's leave the REST API out of the KIP then.
>
> I have been arguing that Connect wouldn't benefit from the 
> multi-cluster/herder/worker features we need in MM2, and that the effort 
> would result in a needlessly complex Connect REST API. But certainly two 
> separate APIs is inherently more complex than a single API. If we can add 
> these features to Connect itself without breaking things, I'm onboard. I have 
> some ideas on this front, but that's for another KIP :)
>
> The REST API is non-essential for a MirrorMaker replacement, and I can easily 
> divorce that from the high-level driver. We still want to support running MM 
> without an existing Connect cluster, but we don't really need a REST API to 
> do that. Legacy MirrorMaker doesn't have a REST API after all. For 
> organizations that want on-the-fly configuration of their replication flows, 
> there's Connect.
>
> This has been brought up by nearly everyone, so I'm happy to oblige.
>
> Ryanne
>


-- 
Sönke Liebau
Partner
Tel. +49 179 7940878
OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany


Re: [VOTE] KIP-382 MirrorMaker 2.0

2018-12-21 Thread Sönke Liebau
+1 (non-binding)

Thanks for your effort Ryanne!

On Fri, Dec 21, 2018 at 2:23 AM Srinivas Reddy
 wrote:
>
> +1 (non binding)
>
> Thank you Ryan for the KIP, let me know if you need support in implementing
> it.
>
> -
> Srinivas
>
> - Typed on tiny keys. pls ignore typos.{mobile app}
>
>
> On Fri, 21 Dec, 2018, 08:26 Ryanne Dolan wrote:
> > Thanks for the votes so far!
> >
> > Due to recent discussions, I've removed the high-level REST API from the
> > KIP.
> >
> > On Thu, Dec 20, 2018 at 12:42 PM Paul Davidson 
> > wrote:
> >
> > > +1
> > >
> > > Would be great to see the community build on the basic approach we took
> > > with Mirus. Thanks Ryanne.
> > >
> > > On Thu, Dec 20, 2018 at 9:01 AM Andrew Psaltis  > >
> > > wrote:
> > >
> > > > +1
> > > >
> > > > Really looking forward to this and to helping in any way I can. Thanks
> > > for
> > > > kicking this off Ryanne.
> > > >
> > > > On Thu, Dec 20, 2018 at 10:18 PM Andrew Otto 
> > wrote:
> > > >
> > > > > +1
> > > > >
> > > > > This looks like a huge project! Wikimedia would be very excited to
> > have
> > > > > this. Thanks!
> > > > >
> > > > > On Thu, Dec 20, 2018 at 9:52 AM Ryanne Dolan 
> > > > > wrote:
> > > > >
> > > > > > Hey y'all, please vote to adopt KIP-382 by replying +1 to this
> > > thread.
> > > > > >
> > > > > > For your reference, here are the highlights of the proposal:
> > > > > >
> > > > > > - Leverages the Kafka Connect framework and ecosystem.
> > > > > > - Includes both source and sink connectors.
> > > > > > - Includes a high-level driver that manages connectors in a
> > dedicated
> > > > > > cluster.
> > > > > > - High-level REST API abstracts over connectors between multiple
> > > Kafka
> > > > > > clusters.
> > > > > > - Detects new topics, partitions.
> > > > > > - Automatically syncs topic configuration between clusters.
> > > > > > - Manages downstream topic ACL.
> > > > > > - Supports "active/active" cluster pairs, as well as any number of
> > > > active
> > > > > > clusters.
> > > > > > - Supports cross-data center replication, aggregation, and other
> > > > complex
> > > > > > topologies.
> > > > > > - Provides new metrics including end-to-end replication latency
> > > across
> > > > > > multiple data centers/clusters.
> > > > > > - Emits offsets required to migrate consumers between clusters.
> > > > > > - Tooling for offset translation.
> > > > > > - MirrorMaker-compatible legacy mode.
> > > > > >
> > > > > > Thanks, and happy holidays!
> > > > > > Ryanne
> > > > > >
> > > > >
> > > >
> > >
> > >
> > > --
> > > Paul Davidson
> > > Principal Engineer, Ajna Team
> > > Big Data & Monitoring
> > >
> >



-- 
Sönke Liebau
Partner
Tel. +49 179 7940878
OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany


[VOTE] KIP-398: Support reading trust store from classpath

2018-12-21 Thread Noa Resare
I’d like to call a vote on my proposal KIP-398, about adding the ability to 
read a trust store from the classpath. This is a small but useful change that 
has the potential to benefit organisations that employ a private certificate 
infrastructure.
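For anyone unfamiliar with the idea, a rough sketch of what the lookup could look like is below. The `classpath:` prefix and the class/method names are my own illustration, not necessarily the KIP's final design; the returned stream would be fed to `KeyStore.load(...)` exactly as a file-based trust store is today.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class TrustStoreLocator {
    // Hypothetical prefix; the KIP may settle on a different convention.
    static final String CLASSPATH_PREFIX = "classpath:";

    // Opens a trust store either from the classpath or, as today, from the
    // filesystem.
    public static InputStream open(String location) throws IOException {
        if (location.startsWith(CLASSPATH_PREFIX)) {
            String resource = location.substring(CLASSPATH_PREFIX.length());
            InputStream in = TrustStoreLocator.class.getClassLoader()
                    .getResourceAsStream(resource);
            if (in == null) {
                throw new FileNotFoundException("classpath resource not found: " + resource);
            }
            return in;
        }
        return new FileInputStream(location); // unchanged behavior for plain paths
    }

    public static void main(String[] args) throws IOException {
        // Demonstrate the filesystem fallback with a throwaway file.
        File f = File.createTempFile("truststore", ".jks");
        try (FileOutputStream out = new FileOutputStream(f)) {
            out.write(42);
        }
        try (InputStream in = open(f.getAbsolutePath())) {
            System.out.println(in.read()); // 42
        }
        f.delete();
    }
}
```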

cheers
noa

Re: "version" in kafka-reassign-partitions's JSON input file is not updated by KIP-113

2018-12-21 Thread Attila Sasvári
Any ideas why the version of the reassignment JSON file has not been changed?

If you generate it with  kafka-reassign-partitions --zookeeper $(hostname
-f):2181 --topics-to-move-json-file reassign.json --broker-list "26,27,28"
--generate, you can see something like this:

Current partition replica assignment
{"version":1,"partitions":[{"topic":"testTopic","partition":4,"replicas":[26,27,28],"log_dirs":["any","any","any"]},{"topic":"testTopic","partition":2,"replicas":[27,26,28],"log_dirs":["any","any","any"]},{"topic":"testTopic","partition":1,"replicas":[26,28,27],"log_dirs":["any","any","any"]},{"topic":"testTopic","partition":0,"replicas":[28,27,26],"log_dirs":["any","any","any"]},{"topic":"testTopic","partition":3,"replicas":[28,26,27],"log_dirs":["any","any","any"]}]}

Proposed partition reassignment configuration:
{"version":1,"partitions":[{"topic":"testTopic","partition":0,"replicas":[26,27,28],"log_dirs":["any","any","any"]},{"topic":"testTopic","partition":3,"replicas":[26,28,27],"log_dirs":["any","any","any"]},{"topic":"testTopic","partition":2,"replicas":[28,26,27],"log_dirs":["any","any","any"]},{"topic":"testTopic","partition":4,"replicas":[27,26,28],"log_dirs":["any","any","any"]},{"topic":"testTopic","partition":1,"replicas":[27,28,26],"log_dirs":["any","any","any"]}]}

Using an earlier Kafka version, it looked like this:
{"version":1,"partitions":[{"topic":"testTopic","partition":4,"replicas":[26,27,28]},{"topic":"testTopic","partition":2,"replicas":[27,26,28]},{"topic":"testTopic",
...
{"topic":"testTopic","partition":3,"replicas":[28,26,27]}]}

What do you think?
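As a side note, the semantics of the optional field are simple to state in code: when `log_dirs` is omitted, it is treated as an array of "any" with one entry per replica. A small sketch (class and method names are mine, purely for illustration):

```java
import java.util.Collections;
import java.util.List;

public class LogDirsDefaulting {
    // Effective log_dirs for one partition: the explicit list if present,
    // otherwise "any" repeated once per replica, as KIP-113 specifies.
    public static List<String> effectiveLogDirs(List<Integer> replicas, List<String> logDirs) {
        if (logDirs == null) {
            return Collections.nCopies(replicas.size(), "any");
        }
        if (logDirs.size() != replicas.size()) {
            throw new IllegalArgumentException("log_dirs needs one entry per replica");
        }
        return logDirs;
    }

    public static void main(String[] args) {
        System.out.println(effectiveLogDirs(List.of(26, 27, 28), null)); // [any, any, any]
    }
}
```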


On Tue, Nov 20, 2018 at 4:48 PM Attila Sasvari
 wrote:
>
> Hi there,
>
> KIP-113 added a new, optional field to the input JSON file of
> kafka-reassign-partitions:
>
> {
>   "version" : int,
>   "partitions" : [
> {
>   "topic" : str,
>   "partition" : int,
>   "replicas" : [int],
>   "log_dirs" : [int]   <-- NEW. A log directory can be either "any",
> or a valid absolute path that begins with '/'. This is an optional field.
> It is treated as an array of "any" if this field is not explicitly
> specified in the json file.
> },
> ...
>   ]
> }
>
> KIP-113 says:
> This KIP is a pure addition. So there is no backward compatibility
> concern.
>
> Is it intentional that "version" remained 1?
>
> Regards,
> Attila


Re: [VOTE] KIP-382 MirrorMaker 2.0

2018-12-21 Thread Eno Thereska
+1 (non-binding)

Great stuff!
Eno

On Fri, Dec 21, 2018 at 10:04 AM Sönke Liebau
 wrote:

> +1 (non-binding)
>
> Thanks for your effort Ryanne!
>
> On Fri, Dec 21, 2018 at 2:23 AM Srinivas Reddy
>  wrote:
> >
> > +1 (non binding)
> >
> > Thank you Ryan for the KIP, let me know if you need support in
> implementing
> > it.
> >
> > -
> > Srinivas
> >
> > - Typed on tiny keys. pls ignore typos.{mobile app}
> >
> >
> > On Fri, 21 Dec, 2018, 08:26 Ryanne Dolan wrote:
> > > Thanks for the votes so far!
> > >
> > > Due to recent discussions, I've removed the high-level REST API from
> the
> > > KIP.
> > >
> > > On Thu, Dec 20, 2018 at 12:42 PM Paul Davidson <
> pdavid...@salesforce.com>
> > > wrote:
> > >
> > > > +1
> > > >
> > > > Would be great to see the community build on the basic approach we
> took
> > > > with Mirus. Thanks Ryanne.
> > > >
> > > > On Thu, Dec 20, 2018 at 9:01 AM Andrew Psaltis <
> psaltis.and...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > +1
> > > > >
> > > > > Really looking forward to this and to helping in any way I can.
> Thanks
> > > > for
> > > > > kicking this off Ryanne.
> > > > >
> > > > > On Thu, Dec 20, 2018 at 10:18 PM Andrew Otto 
> > > wrote:
> > > > >
> > > > > > +1
> > > > > >
> > > > > > This looks like a huge project! Wikimedia would be very excited
> to
> > > have
> > > > > > this. Thanks!
> > > > > >
> > > > > > On Thu, Dec 20, 2018 at 9:52 AM Ryanne Dolan <
> ryannedo...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hey y'all, please vote to adopt KIP-382 by replying +1 to this
> > > > thread.
> > > > > > >
> > > > > > > For your reference, here are the highlights of the proposal:
> > > > > > >
> > > > > > > - Leverages the Kafka Connect framework and ecosystem.
> > > > > > > - Includes both source and sink connectors.
> > > > > > > - Includes a high-level driver that manages connectors in a
> > > dedicated
> > > > > > > cluster.
> > > > > > > - High-level REST API abstracts over connectors between
> multiple
> > > > Kafka
> > > > > > > clusters.
> > > > > > > - Detects new topics, partitions.
> > > > > > > - Automatically syncs topic configuration between clusters.
> > > > > > > - Manages downstream topic ACL.
> > > > > > > - Supports "active/active" cluster pairs, as well as any
> number of
> > > > > active
> > > > > > > clusters.
> > > > > > > - Supports cross-data center replication, aggregation, and
> other
> > > > > complex
> > > > > > > topologies.
> > > > > > > - Provides new metrics including end-to-end replication latency
> > > > across
> > > > > > > multiple data centers/clusters.
> > > > > > > - Emits offsets required to migrate consumers between clusters.
> > > > > > > - Tooling for offset translation.
> > > > > > > - MirrorMaker-compatible legacy mode.
> > > > > > >
> > > > > > > Thanks, and happy holidays!
> > > > > > > Ryanne
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > Paul Davidson
> > > > Principal Engineer, Ajna Team
> > > > Big Data & Monitoring
> > > >
> > >
>
>
>
> --
> Sönke Liebau
> Partner
> Tel. +49 179 7940878
> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany
>


[jira] [Created] (KAFKA-7765) IdleExpiryManager should not passively close socket used by controller

2018-12-21 Thread huxihx (JIRA)
huxihx created KAFKA-7765:
-

 Summary: IdleExpiryManager should not passively close socket used 
by controller
 Key: KAFKA-7765
 URL: https://issues.apache.org/jira/browse/KAFKA-7765
 Project: Kafka
  Issue Type: Bug
  Components: network
Affects Versions: 2.1.0
Reporter: huxihx


Currently, the controller creates sockets to every live broker without an idle 
timeout. However, other brokers' processor threads could still close these 
sockets if no requests flow through them within `connections.max.idle.ms`.

Lots of CLOSE_WAIT connections are left behind when those sockets are closed 
by the remote peer, since the controller's RequestSendThread does not check 
whether they have been closed by the peer.

I think we need to figure out a way to record which channels should be 
maintained and have them excluded by IdleExpiryManager. A naive method is to 
augment KafkaChannel with a field indicating whether the channel should be 
kept alive.

Does it make any sense?
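The naive method could look roughly like the sketch below: the expiry manager tracks last-activity times but never expires a channel flagged as keep-alive. All names here are illustrative and mine; the real IdleExpiryManager in `org.apache.kafka.common.network` uses an access-ordered LRU map and a nanosecond clock.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of idle-channel expiry with an opt-out flag, as suggested above.
public class IdleExpirySketch {

    public static final class Channel {
        final String id;
        final boolean keepAlive; // e.g. true for controller connections
        final long lastActiveMs;

        public Channel(String id, boolean keepAlive, long lastActiveMs) {
            this.id = id;
            this.keepAlive = keepAlive;
            this.lastActiveMs = lastActiveMs;
        }
    }

    private final long maxIdleMs;
    private final Map<String, Channel> channels = new HashMap<>();

    public IdleExpirySketch(long maxIdleMs) {
        this.maxIdleMs = maxIdleMs;
    }

    public void register(Channel c) {
        channels.put(c.id, c);
    }

    // Channels to close at `nowMs`; keep-alive channels are never returned.
    public List<String> expired(long nowMs) {
        List<String> out = new ArrayList<>();
        for (Channel c : channels.values()) {
            if (!c.keepAlive && nowMs - c.lastActiveMs > maxIdleMs) {
                out.add(c.id);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        IdleExpirySketch mgr = new IdleExpirySketch(10_000);
        mgr.register(new Channel("controller->broker-1", true, 0));
        mgr.register(new Channel("client-42", false, 0));
        System.out.println(mgr.expired(20_000)); // [client-42]
    }
}
```

With something along these lines, the controller's sockets would survive `connections.max.idle.ms`, avoiding the CLOSE_WAIT pile-up described above.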



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3271

2018-12-21 Thread Apache Jenkins Server
See 


Changes:

[manikumar.reddy] MINOR: Switch anonymous classes to lambda expressions in 
tools module

[manikumar.reddy] MINOR: Hygiene fixes in KafkaFutureImpl (#5098)

--
[...truncated 2.24 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

[jira] [Resolved] (KAFKA-7054) Kafka describe command should throw topic doesn't exist exception.

2018-12-21 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7054.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 5211
[https://github.com/apache/kafka/pull/5211]

> Kafka describe command should throw topic doesn't exist exception.
> --
>
> Key: KAFKA-7054
> URL: https://issues.apache.org/jira/browse/KAFKA-7054
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Reporter: Manohar Vanam
>Priority: Minor
> Fix For: 2.2.0
>
>
> If the topic doesn't exist, then the Kafka describe command should throw a 
> topic-doesn't-exist exception,
> like the alter and delete commands do:
> {code:java}
> local:bin mvanam$ ./kafka-topics.sh --zookeeper localhost:2181 --delete 
> --topic manu
> Error while executing topic command : Topic manu does not exist on ZK path 
> localhost:2181
> [2018-06-13 15:08:13,111] ERROR java.lang.IllegalArgumentException: Topic 
> manu does not exist on ZK path localhost:2181
>  at kafka.admin.TopicCommand$.getTopics(TopicCommand.scala:91)
>  at kafka.admin.TopicCommand$.deleteTopic(TopicCommand.scala:184)
>  at kafka.admin.TopicCommand$.main(TopicCommand.scala:71)
>  at kafka.admin.TopicCommand.main(TopicCommand.scala)
>  (kafka.admin.TopicCommand$)
> local:bin mvanam$ ./kafka-topics.sh --zookeeper localhost:2181 --alter 
> --topic manu
> Error while executing topic command : Topic manu does not exist on ZK path 
> localhost:2181
> [2018-06-13 15:08:43,663] ERROR java.lang.IllegalArgumentException: Topic 
> manu does not exist on ZK path localhost:2181
>  at kafka.admin.TopicCommand$.getTopics(TopicCommand.scala:91)
>  at kafka.admin.TopicCommand$.alterTopic(TopicCommand.scala:125)
>  at kafka.admin.TopicCommand$.main(TopicCommand.scala:65)
>  at kafka.admin.TopicCommand.main(TopicCommand.scala)
>  (kafka.admin.TopicCommand$){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Fail-fast builds?

2018-12-21 Thread David Arthur
Since this is a relatively simple change, I went ahead and opened up a PR
here https://github.com/apache/kafka/pull/6059

On Fri, Dec 21, 2018 at 2:15 AM Manikumar  wrote:

> +1 for the suggestion.
>
> On Fri, Dec 21, 2018 at 2:38 AM David Arthur  wrote:
>
> > In the jenkins.sh file, we have the following comment:
> >
> > "In order to provide faster feedback, the tasks are ordered so that
> faster
> > tasks are executed in every module before slower tasks (if possible)"
> >
> >
> > but then we proceed to use the Gradle --continue flag. This means PRs
> won't
> > get notified of problems until the whole build finishes.
> >
> >
> > What do folks think about splitting the build invocation into a
> validation
> > step and a test step? The validation step would omit the continue flag,
> but
> > the test step would include it. This would allow for fast failure on
> > compilation and checkstyle problems, but let the whole test suite run in
> > spite of test failures.
> >
> >
> > Cheers,
> > David
> >
>


-- 
David Arthur


Re: [VOTE] [REMINDER] KIP-383 Pluggable interface for SSL Factory

2018-12-21 Thread Damian Guy
must be my gmail playing up. This appears to be the DISCUSS thread to me...

On Thu, 20 Dec 2018 at 18:42, Harsha  wrote:

> Damian,
>This is the VOTE thread. There is a DISCUSS thread which
> concluded in it.
>
> -Harsha
>
>
> On Wed, Dec 19, 2018, at 5:04 AM, Pellerin, Clement wrote:
> > I did that and nobody came.
> > https://lists.apache.org/list.html?dev@kafka.apache.org:lte=1M:kip-383
> > I don't understand why this feature is not more popular.
> > It's the solution to one Jira and a work-around for a handful more Jiras.
> >
> > -Original Message-
> > From: Damian Guy [mailto:damian@gmail.com]
> > Sent: Wednesday, December 19, 2018 7:38 AM
> > To: dev
> > Subject: Re: [VOTE] [REMINDER] KIP-383 Pluggable interface for SSL
> Factory
> >
> > Hi Clement,
> >
> > You should start a separate thread for the vote, i.e., one with a subject
> > of [VOTE] KIP-383 ...
> >
> > Looks like you haven't done this?
>


Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2018-12-21 Thread Ryanne Dolan
Sönke, I can probably get a KIP together in the next several weeks, but
you're welcome to beat me to it :)

Ryanne

On Fri, Dec 21, 2018, 3:59 AM Sönke Liebau wrote:
>
> Hi Ryanne,
>
> just to briefly check in, am I understanding your mail correctly, that
> you want to pick up the "multi-cluster/herder/worker features" in a
> different KIP at some time? If yes, please feel free to let me know if
> I can provide any help on that front. Otherwise, I am also happy to
> draft a proposal as basis for discussion.
>
> Best regards,
> Sönke
>
> On Fri, Dec 21, 2018 at 1:11 AM Ryanne Dolan 
> wrote:
> >
> > Jun, let's leave the REST API out of the KIP then.
> >
> > I have been arguing that Connect wouldn't benefit from the
> multi-cluster/herder/worker features we need in MM2, and that the effort
> would result in a needlessly complex Connect REST API. But certainly two
> separate APIs is inherently more complex than a single API. If we can add
> these features to Connect itself without breaking things, I'm onboard. I
> have some ideas on this front, but that's for another KIP :)
> >
> > The REST API is non-essential for a MirrorMaker replacement, and I can
> easily divorce that from the high-level driver. We still want to support
> running MM without an existing Connect cluster, but we don't really need a
> REST API to do that. Legacy MirrorMaker doesn't have a REST API after all.
> For organizations that want on-the-fly configuration of their replication
> flows, there's Connect.
> >
> > This has been brought up by nearly everyone, so I'm happy to oblige.
> >
> > Ryanne
> >
>
>
> --
> Sönke Liebau
> Partner
> Tel. +49 179 7940878
> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany
>


[jira] [Created] (KAFKA-7766) Improve fail-fast behavior of Jenkins build

2018-12-21 Thread David Arthur (JIRA)
David Arthur created KAFKA-7766:
---

 Summary: Improve fail-fast behavior of Jenkins build
 Key: KAFKA-7766
 URL: https://issues.apache.org/jira/browse/KAFKA-7766
 Project: Kafka
  Issue Type: Task
  Components: build
Reporter: David Arthur
Assignee: David Arthur


Split the Jenkins build into two Gradle invocations: one for quick validation 
checks, and one to run the full test suite.

If something in the validation step fails, it should fail the build 
immediately. However, if a test fails, we want to continue and run all the 
tests (to gather all the failing tests).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Fail-fast builds?

2018-12-21 Thread Satish Duggana
>>This would allow for fast failure on
compilation and checkstyle problems, but let the whole test suite run in
spite of test failures.

+1 for that as it will be very useful.

Thanks,
Satish.

On Fri, Dec 21, 2018 at 8:10 PM David Arthur  wrote:
>
> Since this is a relatively simple change, I went ahead and opened up a PR
> here https://github.com/apache/kafka/pull/6059
>
> On Fri, Dec 21, 2018 at 2:15 AM Manikumar  wrote:
>
> > +1 for the suggestion.
> >
> > On Fri, Dec 21, 2018 at 2:38 AM David Arthur  wrote:
> >
> > > In the jenkins.sh file, we have the following comment:
> > >
> > > "In order to provide faster feedback, the tasks are ordered so that
> > faster
> > > tasks are executed in every module before slower tasks (if possible)"
> > >
> > >
> > > but then we proceed to use the Gradle --continue flag. This means PRs
> > won't
> > > get notified of problems until the whole build finishes.
> > >
> > >
> > > What do folks think about splitting the build invocation into a
> > validation
> > > step and a test step? The validation step would omit the continue flag,
> > but
> > > the test step would include it. This would allow for fast failure on
> > > compilation and checkstyle problems, but let the whole test suite run in
> > > spite of test failures.
> > >
> > >
> > > Cheers,
> > > David
> > >
> >
>
>
> --
> David Arthur


Re: Problem in CI for pull request

2018-12-21 Thread Colin McCabe
Try typing "retest this please" as a comment to the PR.

best,
Colin

On Wed, Nov 28, 2018, at 11:05, lk gen wrote:
> Hi,
> 
>   I made a pull request and it passed CI on JDK 11 but failed on JDK 8
> 
> I think the JDK 8 error may not be related to my commit but an environment
> problem on the CI
> 
>   How can I rerun the CI for my pull request ?
> 
>   The pull request is at
> https://github.com/apache/kafka/pull/5960
> 
> error states
> 
> *19:27:48* ERROR: H36 is offline; cannot locate JDK 1.8
> (latest)*19:27:48* ERROR: H36 is offline; cannot locate Gradle 4.8.1
> 
> 
> Thanks


Re: [DISCUSS] KIP-252: Extend ACLs to allow filtering based on ip ranges and subnets

2018-12-21 Thread Colin McCabe
Hi Sönke,

One path forward would be to forbid the new ACL types from being created until 
the inter-broker protocol had been upgraded.  We'd also have to figure out how 
the new ACLs were stored in ZooKeeper.  There are a bunch of proposals in this 
thread that could work for that-- I really hope we don't keep changing the ZK 
path each time there is a version bump.

best,
Colin


On Thu, Nov 29, 2018, at 14:25, Sönke Liebau wrote:
> This has been dormant for a while now, can I interest anybody in chiming in
> here?
> 
> I think we need to come up with an idea of how to handle changes to ACLs
> going forward, i.e. some sort of versioning scheme. Not necessarily what I
> proposed in my previous mail, but something.
> Currently this fairly simple change is stuck due to this being unsolved.
> 
> I am happy to move forward without addressing the larger issue (I think the
> issue raised by Colin is valid but could be mitigated in the release
> notes), but that would mean that the next KIP to touch ACLs would inherit
> the issue, which somehow doesn't seem right.
> 
> Looking forward to your input :)
> 
> Best regards,
> Sönke
> 
> On Tue, Jun 19, 2018 at 5:32 PM Sönke Liebau 
> wrote:
> 
> > Picking this back up, now that KIP-290 has been merged..
> >
> > As Colin mentioned in an earlier mail this change could create a
> > potential security issue if not all brokers are upgraded and a DENY
> > Acl based on an IP range is created, as old brokers won't match this
> > rule and still allow requests. As I stated earlier I am not sure
> > whether for this specific change this couldn't be handled via the
> > release notes (see also this comment [1] from Jun Rao on a similar
> > topic), but in principle I think some sort of versioning system around
> > ACLs would be useful. As seen in KIP-290 there were a few
> > complications around where to store ACLs. To avoid adding ever more
> > Zookeeper paths for future ACL changes, a versioning system is probably
> > useful.
> >
> > @Andy: I've copied you directly in this mail, since you did a bulk of
> > the work around KIP-290 and mentioned potentially picking up the
> > follow up work, so I think your input would be very valuable here. Not
> > trying to shove extra work your way, I'm happy to contribute, but we'd
> > be touching a lot of the same areas I think.
> >
> > If we want to implement a versioning system for ACLs I see the
> > following todos (probably incomplete & missing something at the same
> > time):
> > 1. ensure that the current Authorizer doesn't pick up newer ACLs
> > 2. add a version marker to new ACLs
> > 3. change SimpleACLAuthorizer to know what version of ACLs it is
> > compatible with and only load ACLs of this / smaller version
> > 4. Decide how to handle if incompatible (newer version) ACLs are
> > present: log warning, fail broker startup, ...
> >
> >
> > Post-KIP-290 ACLs are stored in two places in Zookeeper:
> > /kafka-acl-extended   - for ACLs with wildcards in the resource
> > /kafka-acl   -  for literal ACLs without wildcards (i.e. * means * not
> > any character)
> >
> > To ensure 1 we probably need to move to a new directory once more,
> > call it /kafka-acl-extended-new for arguments sake. Any ACL stored
> > here would get a version number stored with it, and only
> > SimpleAuthorizers that actually know to look here would find these
> > ACLs and also know to check for a version number. I think Andy
> > mentioned moving the resource definition in the new ACL format to JSON
> > instead of simple string in a follow up PR, maybe these pieces of work
> > are best tackled together - and if a new znode can be avoided even
> > better.
> >
> > This would allow us to recognize situations where ACLs are defined
> > that not all Authorizers can understand, as those Authorizers would
> > notice that there are ACLs with a larger version than the one they
> > support (not applicable to legacy ACLs up until now). How we want to
> > treat this scenario is up for discussion, I think make it
> > configurable, as customers have different requirements around
> > security. Some would probably want to fail a broker that encounters
> > unknown ACLs so as to not create potential security risks, while others
> > might be happy with just a warning in the logs. This should never
> > happen, if users fully upgrade their clusters before creating new ACLs
> > - but to counteract the situation that Colin described it would be
> > useful.
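The configurable warn-vs-fail behaviour described above could be sketched roughly like this (hypothetical names only, not Kafka's actual Authorizer API):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: an authorizer loads only ACL entries whose
// format version it understands, and warns (or, in a stricter
// deployment, fails startup) on newer entries.
public class VersionedAclLoader {
    // Highest ACL format version this authorizer understands.
    static final int SUPPORTED_ACL_VERSION = 1;

    static class AclEntry {
        final int version;
        final String definition;
        AclEntry(int version, String definition) {
            this.version = version;
            this.definition = definition;
        }
    }

    // Returns only the entries this authorizer can safely enforce.
    static List<AclEntry> loadCompatible(List<AclEntry> stored) {
        List<AclEntry> loaded = new ArrayList<>();
        for (AclEntry e : stored) {
            if (e.version <= SUPPORTED_ACL_VERSION) {
                loaded.add(e);
            } else {
                // Policy would be configurable: warn here, or throw
                // to fail broker startup for strict deployments.
                System.err.println("Ignoring ACL entry with unsupported version " + e.version);
            }
        }
        return loaded;
    }
}
```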
> >
> > Looking forward, a migration option might be added to the kafka-acl
> > tool to migrate all legacy ACLs into the new structure once the
> > user is certain that no old brokers will come online again.
> >
> > If you think this sounds like a convoluted way to go about things ...
> > I agree :) But I couldn't come up with a better way yet.
> >
> > Any thoughts?
> >
> > Best regards,
> > Sönke
> >
> > [1] https://github.com/apache/kafka/pull/5079#pullrequestreview-124512689
> >
> > On Thu, May 3, 2018 at 10:57 PM, Sönke Liebau
> >  

Re: [DISCUSS] KIP-388 Add observer interface to record request and response

2018-12-21 Thread Colin McCabe
On Thu, Nov 29, 2018, at 01:15, Lincong Li wrote:
> Hi everyone,
> 
> Thanks for all feedback on this KIP. I have had some lengthy offline
> discussions with Dong, Joel and other insightful developers. I updated
> KIP-388 and proposed a different way of recording each request and
> response. Here is a summary of the change.
> 
> Instead of having interfaces as wrapper on AbstractRequest and
> AbstractResponse, I provided an interface on the Struct class which
> represents the Kafka protocol format ("wire format"). The interface is
> called ObservableStruct and it provides a set of getters that allow users to
> extract information from the internal Struct instance. Some other possible
> approaches are discussed in the KIP as well. But after lots of thinking, I
> think the currently proposed approach is the best one.
> 
> Why is this the best approach?
> 1. *It's the most general way* to intercept/observe each request/response
> with any type since each request/response must be materialized to a Struct
> instance at some point in their life cycle.
> 
> 2. *It's the easiest-to-maintain interface*. There is basically only one
> interface (ObservableStruct) and its implementation (ObservableStructImp)
> to maintain. Since this interface essentially defines a set of ways to
> get field(s) from the Struct, even changes to the structure of the Struct
> (wire format changes) are not going to cause the interface to change.
> 
> 3. Struct represents the Kafka protocol format which is public. Expecting
> users to have knowledge of the format of the kind of request/response they
> are recording is reasonable. Moreover, *the proposed interfaces freeze the
> least amount of internal implementation details into public APIs by only
> exposing ways of extracting information on the Struct class*.
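A rough sketch of the ObservableStruct idea (illustrative names and shapes only; the KIP defines the actual interface):

```java
import java.util.Map;

// Illustrative sketch: a read-only getter interface over a
// request's wire-format fields, as an observer plugin would see it.
public class ObservableStructExample {
    interface ObservableStruct {
        Object get(String field);          // single field lookup
        boolean hasField(String field);    // schema-aware existence check
    }

    // Minimal implementation; a Map stands in for the broker's
    // internal Struct instance.
    static class ObservableStructImpl implements ObservableStruct {
        private final Map<String, Object> fields;
        ObservableStructImpl(Map<String, Object> fields) { this.fields = fields; }
        public Object get(String field) { return fields.get(field); }
        public boolean hasField(String field) { return fields.containsKey(field); }
    }

    // An observer extracts only fields it knows about; absent fields
    // degrade gracefully, so wire-format evolution does not break it.
    static String observeClientId(ObservableStruct s) {
        return s.hasField("client_id") ? (String) s.get("client_id") : "unknown";
    }
}
```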
> 
> I am aware that adding this broker-side instrumentation would touch other
> sensitive modules and trigger many discussions on design trade-offs and
> etc. I appreciate all of your effort trying to make it happen and truly
> believe in the benefits it can bring to the community.
> 
> Thanks,
> Lincong Li

Hi Lincong,

Thanks for thinking through this problem.  It's a tough one!

In general, we would like to get rid of the Struct classes and just directly 
translate Java data structures to sequences of bytes (and vice versa).  The 
reason is that going from byte sequence to Struct to internal data structure is 
an extra step.  This extra step generates lots of garbage for the garbage 
collector to clean up.  This is especially true for large messages like 
FetchRequest.  In addition to the memory cost, this is extra code and extra 
operations that we don't really need to perform.

This issue has shown up a lot when we profile Kafka performance.  Since the 
amount of extra garbage scales with the number of partitions, it's especially 
bad in large enterprise clusters.
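A toy illustration of the extra allocation being described (not Kafka code; a Map stands in for the Struct):

```java
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: path A materializes a generic intermediate
// (like Struct) before producing the typed value, allocating per
// message; path B reads the typed value straight off the buffer.
public class StructOverheadSketch {
    // Path A: wire bytes -> generic struct -> typed value.
    static int viaStruct(ByteBuffer buf) {
        Map<String, Object> struct = new HashMap<>(); // extra allocation
        struct.put("value", buf.getInt());            // plus boxing
        return (Integer) struct.get("value");
    }

    // Path B: wire bytes -> typed value directly, no intermediate.
    static int direct(ByteBuffer buf) {
        return buf.getInt();
    }
}
```

Both paths return the same value; the difference is purely the per-message garbage, which is what scales with partition count.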

best,
Colin


> 
> 
> On Sun, Nov 18, 2018 at 9:35 PM Dong Lin  wrote:
> 
> > Hey Lincong,
> >
> > Thanks for the explanation. Here are my concerns with the current proposal:
> >
> > 1) The current APIs of RequestInfo/ResponseInfo only provide byte and
> > count number of ProduceRequest/FetchResponse. With these limited APIs,
> > developers will likely have to create a new KIP and make changes in Apache
> > Kafka source code in order to implement more advanced observer plugins,
> > which would considerably reduce the extensibility and customizability of
> > observer plugins:
> >
> > Here are two use-cases that can be made possible if we can provide the raw
> > request/response to the observer interface:
> >
> > - Get the number of bytes produced per source host. This is doable if
> > the plugin can get the original ProduceRequest, deserialize the request into
> > Kafka
> > messages, and parse messages based on the schema of the message payload.
> >
> > - Get the ApiVersion supported per producer/consumer IPs. In the future we
> > can add the client library version to ApiVersionsRequest, and an observer
> > can monitor whether any client library is still using a very old
> > version, and if so, what its IP addresses are.
> >
> > 2) It requires extra maintenance overhead for Apache Kafka developers to
> > maintain implementations of RequestInfo (e.g. bytes produced per topic),
> > which would not be necessary if we can just provide ProduceRequest to the
> > observer interface.
> >
> > 3) It is not clear why RequestInfo/ResponseInfo need to be
> > interfaces rather than classes. In general, an interface is needed when we
> > expect multiple different implementations of it. Can you provide some
> > idea of why we need multiple implementations for RequestInfo?
> >
> >
> > Thanks,
> > Dong
> >
> >
> > On Sun, Nov 18, 2018 at 12:50 AM Lincong Li 
> > wrote:
> >
> >> Hi Dong and Patrick,
> >>
> >> Thank you very much 

KIP-408: Add Asynchronous Processing to Kafka Streams

2018-12-21 Thread Richard Yu
Hi all,

Lately, there has been considerable interest in adding asynchronous
processing to Kafka Streams.
Here is the KIP for such an addition:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-408%3A+Add+Asynchronous+Processing+To+Kafka+Streams

I wish to discuss the best ways to approach this problem.

Thanks,
Richard Yu


Re: [DISCUSS] KIP-345: Reduce multiple consumer rebalances by specifying member id

2018-12-21 Thread Mayuresh Gharat
Hi Boyang,

Regarding "However, we shall still attempt to remove the member static info
if the given `member.id` points to an existing `group.instance.id` upon
LeaveGroupRequest, because I could think of the possibility that in long
term we could want to add static membership leave group logic for more
fine-grained use cases."

> I think there is some confusion here. I am probably not putting it
> right.
>
I agree, If a static member sends LeaveGroupRequest, it should be removed
> from the group.
>
Now getting back to downgrade of static membership to Dynamic membership,
> with the example described earlier  (copying it again for ease of reading)
> :
>

>>1. Lets say we have 4 consumers :  c1, c2, c3, c4 in the static group.
>>2. The group.instance.id for each of these is as follows:
>>   - c1 -> gc1, c2 -> gc2, c3 -> gc3, c4 -> gc4
>>3. The mapping on the GroupCoordinator would be:
>>   - gc1 -> mc1, gc2 -> mc2, gc3 -> mc3, gc4 -> mc4, where mc1, mc2,
>>   mc3, mc4 are the randomly generated memberIds for c1, c2, c3, c4
>>   respectively, by the GroupCoordinator.
>>4. Now we do a restart to move the group to dynamic membership.
>>5. We bounce c1 first and it rejoins with UNKNOWN_MEMBERID (since we
>>don't persist the previously assigned memberId mc1 anywhere on the c1).
>>
> - We agree that there is no way to recognize that c1 was a part of the
> group, *earlier*.  If yes, the statement : "The dynamic member rejoins
> the group without `group.instance.id`. It will be accepted since it is a
> known member." is not necessarily true, right?
>


> - Now I *agree* with "However, we shall still attempt to remove the
> member static info if the given `member.id` points to an existing `
> group.instance.id` upon LeaveGroupRequest, because I could think of the
> possibility that in long term we could want to add static membership leave
> group logic for more fine-grained use cases."
>
But that would only happen if the GroupCoordinator allocates the same
> member.id (mc1) to the consumer c1, when it rejoins the group in step 5
> above as a dynamic member, which is very rare as it is randomly generated,
> but possible.
>


> - This raises another question, if the GroupCoordinator assigns a
> member.id (mc1~) to consumer c1 after step 5. It will join the group and
> rebalance and the group will become stable, eventually. Now the
> GroupCoordinator still maintains a mapping of  "group.instance.id ->
> member.id" (c1 -> gc1, c2 -> gc2, c3 -> gc3, c4 -> gc4) internally and
> after some time, it realizes that it has not received heartbeat from the
> consumer with "group.instance.id" = gc1. In that case, it will trigger
> another rebalance assuming that a static member has left the group (when
> actually it (c1) has not left the group but moved to dynamic membership).
> This can result in multiple rebalances as the same will happen for c2, c3,
> c4.
>

Thoughts ???
One thing I can think of right now is to run:
removeMemberFromGroup(String groupId, list
groupInstanceIdsToRemove, RemoveMemberFromGroupOptions options)
with groupInstanceIdsToRemove =  once we have bounced
all the members in the group. This assumes that we will be able to complete
the bounces before the GroupCoordinator realizes that it has not received a
heartbeat for any of . This is tricky and error prone.
Will have to think more on this.

Thanks,

Mayuresh


Jenkins build is back to normal : kafka-trunk-jdk8 #3272

2018-12-21 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-382 MirrorMaker 2.0

2018-12-21 Thread Andrew Schofield
+1 (non-binding)

Andrew Schofield
IBM Event Streams

On 21/12/2018, 01:23, "Srinivas Reddy"  wrote:

+1 (non binding)

Thank you Ryan for the KIP, let me know if you need support in implementing
it.

-
Srinivas

- Typed on tiny keys. pls ignore typos.{mobile app}


On Fri, 21 Dec, 2018, 08:26 Ryanne Dolan  wrote:

> Thanks for the votes so far!
>
> Due to recent discussions, I've removed the high-level REST API from the
> KIP.
>
> On Thu, Dec 20, 2018 at 12:42 PM Paul Davidson 
> wrote:
>
> > +1
> >
> > Would be great to see the community build on the basic approach we took
> > with Mirus. Thanks Ryanne.
> >
> > On Thu, Dec 20, 2018 at 9:01 AM Andrew Psaltis  >
> > wrote:
> >
> > > +1
> > >
> > > Really looking forward to this and to helping in any way I can. Thanks
> > for
> > > kicking this off Ryanne.
> > >
> > > On Thu, Dec 20, 2018 at 10:18 PM Andrew Otto 
> wrote:
> > >
> > > > +1
> > > >
> > > > This looks like a huge project! Wikimedia would be very excited to
> have
> > > > this. Thanks!
> > > >
> > > > On Thu, Dec 20, 2018 at 9:52 AM Ryanne Dolan 
> > > > wrote:
> > > >
> > > > > Hey y'all, please vote to adopt KIP-382 by replying +1 to this
> > thread.
> > > > >
> > > > > For your reference, here are the highlights of the proposal:
> > > > >
> > > > > - Leverages the Kafka Connect framework and ecosystem.
> > > > > - Includes both source and sink connectors.
> > > > > - Includes a high-level driver that manages connectors in a
> dedicated
> > > > > cluster.
> > > > > - High-level REST API abstracts over connectors between multiple
> > Kafka
> > > > > clusters.
> > > > > - Detects new topics, partitions.
> > > > > - Automatically syncs topic configuration between clusters.
> > > > > - Manages downstream topic ACL.
> > > > > - Supports "active/active" cluster pairs, as well as any number of
> > > active
> > > > > clusters.
> > > > > - Supports cross-data center replication, aggregation, and other
> > > complex
> > > > > topologies.
> > > > > - Provides new metrics including end-to-end replication latency
> > across
> > > > > multiple data centers/clusters.
> > > > > - Emits offsets required to migrate consumers between clusters.
> > > > > - Tooling for offset translation.
> > > > > - MirrorMaker-compatible legacy mode.
> > > > >
> > > > > Thanks, and happy holidays!
> > > > > Ryanne
> > > > >
> > > >
> > >
> >
> >
> > --
> > Paul Davidson
> > Principal Engineer, Ajna Team
> > Big Data & Monitoring
> >
>