Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2018-10-16 Thread Ryanne Dolan
> Oh - got it, it checks the entire prefix, which seems obvious to me in
> retrospect :)

Rhys, I've changed the wording to make this more clear, thanks for calling
it out.

Ryanne

On Tue, Oct 16, 2018 at 4:16 PM McCaig, Rhys 
wrote:

>
> > In your example, us-west.us-east.us-central.us-west.topic is an invalid
> > "remote topic" name because us-west appears twice. MM2 will not replicate
> > us-east.us-central.us-west.topic into us-west a second time, because the
> > source topic already has us-west in the prefix. This is what I mean by
> > "cycle detection" -- cyclical replication does not result in infinite
> > recursion.
>
> Oh - got it, it checks the entire prefix, which seems obvious to me in
> retrospect :)
>
> Rhys
>
>
> > On Oct 15, 2018, at 3:18 PM, Ryanne Dolan  wrote:
> >
> > Rhys, thanks for your enthusiasm!
> >
> > In your example, us-west.us-east.us-central.us-west.topic is an invalid
> > "remote topic" name because us-west appears twice. MM2 will not replicate
> > us-east.us-central.us-west.topic into us-west a second time, because the
> > source topic already has us-west in the prefix. This is what I mean by
> > "cycle detection" -- cyclical replication does not result in infinite
> > recursion.
> >
> > It's important to note that MM2 does NOT disallow these sorts of cycles,
> > it just knows how to deal with them properly.
> >
> > Also notice this is done at the topic level, not per record. The records
> > don't need any special header or anything for this cycle detection
> > mechanism to work.
> >
> > Thanks!
> > Ryanne
> >
> > On Mon, Oct 15, 2018 at 3:40 PM McCaig, Rhys 
> > wrote:
> >
> >> Hi Ryanne,
> >>
> >> This KIP is fantastic. It provides a great vision for how MirrorMaker
> >> should evolve in the Kafka project.
> >>
> >> I have a question on cycle detection - In a scenario where I have 3
> >> clusters replicating between each other, it seems it may be easy to
> >> misconfigure the connectors if auto topic creation is turned on so that
> >> records become replicated to increasingly longer topic names (until the
> >> topic name limit is reached). Consider clusters us-west, us-central,
> >> us-east:
> >>
> >> us-west: topic
> >> us-central: us-west.topic
> >> us-east: us-central.us-west.topic
> >> us-west: us-east.us-central.us-west.topic
> >> us-central: us-west.us-east.us-central.us-west.topic
> >>
> >> I’m not sure whether this scenario would actually justify implementing
> >> additional measures to avoid such a configuration, rather than ensuring
> >> that the documentation is clear on how to avoid such scenarios - would be
> >> good to hear what others think on this.
> >>
> >> Excited to see the discussion on this one.
> >>
> >> Rhys
> >>
> >>> On Oct 15, 2018, at 9:16 AM, Ryanne Dolan  wrote:
> >>>
> >>> Hey y'all!
> >>>
> >>> Please take a look at KIP-382:
> >>>
> >>>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0
> >>>
> >>> Thanks for your feedback and support.
> >>>
> >>> Ryanne
> >>
> >>
>
>
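
A minimal sketch of the prefix check Ryanne describes above (a hypothetical
helper, not MM2's actual code; it assumes the dot-separated remote topic
naming used in this thread, and ignores that topic names may themselves
contain dots):

public class CycleCheck {
    // Returns true if replicating this topic into the target cluster would
    // close a cycle, i.e. the target's alias already appears in the prefix.
    static boolean wouldCreateCycle(String targetClusterAlias, String sourceTopic) {
        String[] parts = sourceTopic.split("\\.");
        for (int i = 0; i < parts.length - 1; i++) {  // skip the final segment (the base topic name)
            if (parts[i].equals(targetClusterAlias))
                return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // "us-east.us-central.us-west.topic" must not be replicated into us-west again:
        System.out.println(wouldCreateCycle("us-west", "us-east.us-central.us-west.topic")); // true
        // but it can still flow onward to a cluster not yet in the prefix:
        System.out.println(wouldCreateCycle("us-central", "us-west.topic"));                 // false
    }
}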


Re: [EXTERNAL] Incremental Cooperative Rebalancing

2018-10-16 Thread Ryanne Dolan
Konstantine, thanks for the explanation, makes sense.

Ryanne

On Tue, Oct 16, 2018, 1:51 PM Konstantine Karantasis <
konstant...@confluent.io> wrote:

> Matthias, Ryanne, Rhys, Guozhang, thank you all for your comments!
>
> Ryanne, to try to address your specific comments, let me start by saying
> that a key concept behind this proposal is overlapping 'communication'
> with 'computation', which is known to often reduce the
> overall cost (or latency if you prefer) of an operation that involves
> multiple processes compared to global barrier-type synchronization. Of
> course this does not rule out that there might be occasions where
> stop-the-world might incur smaller overall cost. But given that we'll
> always want to minimize communication and shuffling of resources and apply
> more sticky heuristics in assignments, I believe that such edge cases will
> be considerably fewer in practice.
>
> Specifically here, the goal is to exchange (by revoking and reassigning)
> only the necessary resources in the group and allow for the unaffected
> resources to continue being used. As mentioned in the motivation section,
> this is expected to have a positive effect in a number of use cases, for
> which stop-the-world is too strict. Given this key distinction between
> affected and unaffected resources (e.g. topic partitions, tasks in Kafka
> Connect etc) I anticipate that in most cases, even for resources that need
> to change hands, the overall rebalance phase will be faster (especially at
> larger scale) than it is today with all the processes participating in
> resource hand-off and re-assignment.
>
> -Konstantine
>
>
> On Fri, Oct 5, 2018 at 12:00 PM Guozhang Wang  wrote:
>
> > Hello Konstantine,
> >
> > Thanks for the great write-up! Here are a few quick comments I have about
> > the proposals:
> >
> > 1. For "Kubernetes process death" and "Rolling bounce" case, there is
> > another parallel work on KIP-345 [1] (cc'ed contributor) that is aimed to
> > mitigate these two issues, but it is relying on the fact that we can
> > disable sending "leave group" request immediately on shutting down.
> Ideally
> > if KIP-345 works well for these cases, then Simple Cooperative
> Rebalancing
> > itself along with KIP-345 should cover most of the scenarios we've
> > described in the wiki. In addition, Delayed / Incremental Imbalance
> > approach can be done incrementally on top of Simple approach, so
> execution
> > wise I'd suggest we start with the Simple approach and observe how well
> it
> > works in practice (especially with K8s etc frameworks) before deciding if
> > we should go further and implemented the more complicated ones.
> >
> > 2. For the "events" section, I think it may worth mentioning if there are
> > any new client / coordinator failure events that need to be addressed
> with
> > the new protocol, as we listed in the original design [2] [3]. For
> example,
> > what if the leader received different client or resource listings during
> > two consecutive rebalances?
> >
> > 3. It's worth mentioning what the key ideas are in the updated protocol:
> >
> > 3.a) In the original protocol we require every member to revoke every
> > resource before joining the group, which can then be used as the
> > "synchronization barrier", and hence it does not matter that clients
> > receive assignments at different points in time; in the new protocol we do
> > not require members to revoke everything, but instead leverage the leader,
> > who has the "global picture", to make sure that there are no conflicts
> > between those shared resources, a.k.a. the synchronization barrier.
> > 3.b) The new fields in the Assigned / RevokedPartitions fields in the
> > responses are now "deltas" instead of "overwrites" to the consumers. Any
> > modules relying on them, e.g. Streams, which relies on ConsumerCoordinator,
> > need to adjust their code (PartitionAssignor) correspondingly to
> > incorporate these semantic changes.
> >
> > 4. I've added a child page under yours illustrating the implications for
> > Streams on rebalance cost reduction [4], since for Streams one key
> > characteristic is that standby tasks exist to help with rebalance-incurred
> > unavailability, and hence it needs to be considered upfront how Streams
> > should leverage the new protocol along with standby tasks to achieve
> > better operational goals during rebalances.
> >
> >
> > [1]
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Reduce+multiple+consumer+rebalances+by+specifying+member+id
> > [2]
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal#KafkaClient-sideAssignmentProposal-CoordinatorStateMachine
> > [3]
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Consumer+Rewrite+Design#Kafka0.9ConsumerRewriteDesign-Interestingscenariostoconsider
> > [4]
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Incremental+Cooperative+Rebalancing+for+Streams
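
A rough sketch of the "deltas instead of overwrites" idea from point 3.b
above (illustrative only; the names below are not the actual consumer
protocol code). Only the difference between a member's current and target
assignments changes hands, so partitions in both sets keep being consumed
while the rebalance is in flight:

import java.util.HashSet;
import java.util.Set;
import org.apache.kafka.common.TopicPartition;

class AssignmentDelta {
    final Set<TopicPartition> revoked;   // owned before, not owned after
    final Set<TopicPartition> assigned;  // owned after, not owned before

    AssignmentDelta(Set<TopicPartition> current, Set<TopicPartition> target) {
        revoked = new HashSet<>(current);
        revoked.removeAll(target);
        assigned = new HashSet<>(target);
        assigned.removeAll(current);
        // Everything in the intersection of current and target is untouched:
        // no revocation, no state hand-off, no pause in consumption.
    }
}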

Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-16 Thread Attila Sasvari
Congratulations Manikumar! Keep up the good work.

On Tue, Oct 16, 2018 at 12:30 AM Jungtaek Lim  wrote:

> Congrats Mani!
> On Tue, 16 Oct 2018 at 1:45 PM Abhimanyu Nagrath <
> abhimanyunagr...@gmail.com>
> wrote:
>
> > Congratulations Manikumar
> >
> > On Tue, Oct 16, 2018 at 10:09 AM Satish Duggana <
> satish.dugg...@gmail.com>
> > wrote:
> >
> > > Congratulations Mani!
> > >
> > >
> > > On Fri, Oct 12, 2018 at 9:41 PM Colin McCabe 
> wrote:
> > > >
> > > > Congratulations, Manikumar!  Well done.
> > > >
> > > > best,
> > > > Colin
> > > >
> > > >
> > > > On Fri, Oct 12, 2018, at 01:25, Edoardo Comar wrote:
> > > > > Well done Manikumar !
> > > > > --
> > > > >
> > > > > Edoardo Comar
> > > > >
> > > > > IBM Event Streams
> > > > > IBM UK Ltd, Hursley Park, SO21 2JN
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > From:   "Matthias J. Sax" 
> > > > > To: dev 
> > > > > Cc: users 
> > > > > Date:   11/10/2018 23:41
> > > > > Subject:Re: [ANNOUNCE] New Committer: Manikumar Reddy
> > > > >
> > > > >
> > > > >
> > > > > Congrats!
> > > > >
> > > > >
> > > > > On 10/11/18 2:31 PM, Yishun Guan wrote:
> > > > > > Congrats Manikumar!
> > > > > > On Thu, Oct 11, 2018 at 1:20 PM Sönke Liebau
> > > > > >  wrote:
> > > > > >>
> > > > > >> Great news, congratulations Manikumar!!
> > > > > >>
> > > > > >> On Thu, Oct 11, 2018 at 9:08 PM Vahid Hashemian
> > > > > 
> > > > > >> wrote:
> > > > > >>
> > > > > >>> Congrats Manikumar!
> > > > > >>>
> > > > > >>> On Thu, Oct 11, 2018 at 11:49 AM Ryanne Dolan <
> > > ryannedo...@gmail.com>
> > > > > >>> wrote:
> > > > > >>>
> > > > >  Bravo!
> > > > > 
> > > > >  On Thu, Oct 11, 2018 at 1:48 PM Ismael Juma <
> ism...@juma.me.uk>
> > > > > wrote:
> > > > > 
> > > > > > Congratulations Manikumar! Thanks for your continued
> > > contributions.
> > > > > >
> > > > > > Ismael
> > > > > >
> > > > > > On Thu, Oct 11, 2018 at 10:39 AM Jason Gustafson
> > > > > 
> > > > > > wrote:
> > > > > >
> > > > > >> Hi all,
> > > > > >>
> > > > > >> The PMC for Apache Kafka has invited Manikumar Reddy as a committer
> > > > > >> and we are pleased to announce that he has accepted!
> > > > > >>
> > > > > >> Manikumar has contributed 134 commits including significant work to
> > > > > >> add support for delegation tokens in Kafka:
> > > > > >>
> > > > > >> KIP-48:
> > > > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> > > > > >>
> > > > > >> KIP-249:
> > > > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> > > > > >>
> > > > > >> He has broad experience working with many of the core components
> > > > > >> in Kafka and he has reviewed over 80 PRs. He has also made huge
> > > > > >> progress addressing some of our technical debt.
> > > > > >>
> > > > > >> We appreciate the contributions and we are looking forward to more.
> > > > > >> Congrats Manikumar!
> > > > > >>
> > > > > >> Jason, on behalf of the Apache Kafka PMC
> > > > > >>
> > > > > >
> > > > > 
> > > > > >>>
> > > > > >>
> > > > > >>
> > > > > >> --
> > > > > >> Sönke Liebau
> > > > > >> Partner
> > > > > >> Tel. +49 179 7940878
> > > > > >> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel -
> > > Germany
> > > > >
> > > > >
> > > > >
> > > > > Unless stated otherwise above:
> > > > > IBM United Kingdom Limited - Registered in England and Wales with
> > > number
> > > > > 741598.
> > > > > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire
> > PO6
> > > 3AU
> > >
> >
>


-- 
-- 
Attila Sasvari
Software Engineer



[jira] [Resolved] (KAFKA-7512) java.lang.ClassCastException: java.util.Date cannot be cast to java.lang.Number

2018-10-16 Thread Robert Yokota (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Yokota resolved KAFKA-7512.
--
Resolution: Duplicate

> java.lang.ClassCastException: java.util.Date cannot be cast to 
> java.lang.Number
> ---
>
> Key: KAFKA-7512
> URL: https://issues.apache.org/jira/browse/KAFKA-7512
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Rohit Kumar Gupta
>Priority: Blocker
> Attachments: connect.out
>
>
> Steps:
> ~~
> bash-4.2# kafka-avro-console-producer --broker-list localhost:9092 --topic
> connect_10oct_03 --property schema.registry.url=http://localhost:8081
> --property value.schema='{"type":"record","name":"myrecord","fields":[
> {"name":"f1","type":"string"}
> ,{"name":"f2","type":["null",
> {"type":"long","logicalType":"timestamp-millis","connect.version":1,"connect.name":"org.apache.kafka.connect.data.Timestamp"}
> ],"default":null}]}'
> {"f1": "value1","f2": {"null":null}}
> {"f1": "value1","f2": {"long":1022}}
>
> bash-4.2# kafka-avro-console-producer --broker-list localhost:9092 --topic
> connect_10oct_03 --property schema.registry.url=http://localhost:8081
> --property value.schema='{"type":"record","name":"myrecord","fields":[
> {"name":"f1","type":"string"}
> ,{"name":"f2","type":["null",
> {"type":"long","logicalType":"timestamp-millis","connect.version":1,"connect.name":"org.apache.kafka.connect.data.Timestamp"}
> ],"default":null},{"name":"f3","type":"string","default":"green"}]}'
> {"f1": "value1","f2": {"null":null},"f3":"toto"}
> {"f1": "value1","f2": {"null":null},"f3":"toto"}
> {"f1": "value1","f2": {"null":null},"f3":"toto"}
> {"f1": "value1","f2": {"long":12343536},"f3":"tutu"}
>
> bash-4.2# kafka-avro-console-producer --broker-list localhost:9092 --topic
> connect_10oct_03 --property schema.registry.url=http://localhost:8081
> --property value.schema='{"type":"record","name":"myrecord","fields":[
> {"name":"f1","type":"string"}
> ,{"name":"f2","type":["null",
> {"type":"long","logicalType":"timestamp-millis","connect.version":1,"connect.name":"org.apache.kafka.connect.data.Timestamp"}
> ],"default":null}]}'
> {"f1": "value1","f2": {"null":null}}
> {"f1": "value1","f2": {"long":1022}}
>
> bash-4.2# curl -X POST -H "Accept: application/json" -H "Content-Type:
> application/json" http://localhost:8083/connectors -d
> '{"name":"hdfs-sink-connector-10oct-03", "config":
> {"connector.class":"io.confluent.connect.hdfs.HdfsSinkConnector",
> "tasks.max":"1", "topics":"connect_10oct_03", "hdfs.url":
> "hdfs://localhost:8020/tmp/", "flush.size":"1", "hive.integration": "true",
> "hive.metastore.uris": "thrift://localhost:9083", "hive.database": "rohit",
> "schema.compatibility": "BACKWARD"}}'
> {"name":"hdfs-sink-connector-10oct-03","config":{"connector.class":"io.confluent.connect.hdfs.HdfsSinkConnector","tasks.max":"1","topics":"connect_10oct_03","hdfs.url":"hdfs://localhost:8020/tmp/","flush.size":"1","hive.integration":"true","hive.metastore.uris":"thrift://localhost:9083","hive.database":"rohit","schema.compatibility":"BACKWARD","name":"hdfs-sink-connector-10oct-03"},"tasks":[],"type":null}
> bash-4.2#
> bash-4.2#
>
> bash-4.2# curl
> http://localhost:8083/connectors/hdfs-sink-connector-10oct-03/status
> {"name":"hdfs-sink-connector-10oct-03","connector":{"state":"RUNNING","worker_id":"localhost:8083"},
> "tasks":[{"state":"FAILED","trace":"org.apache.kafka.connect.errors.ConnectException:
>  Exiting WorkerSinkTask due to unrecoverable exception.\n\tat 
> org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:586)\n\tat
>  
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)\n\tat
>  
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)\n\tat
>  
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)\n\tat
>  org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)\n\tat 
> org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)\n\tat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
>  java.lang.Thread.run(Thread.java:748)\nCaused by: 
> java.lang.ClassCastException: java.util.Date cannot be cast to 
> java.lang.Number\n\tat 
> org.apache.kafka.connect.data.SchemaProjector.projectPrimitive(SchemaProjector.java:164)\n\tat
>  
> 

Re: [DISCUSS] 2.0.1 bug fix release

2018-10-16 Thread Ismael Juma
Thanks for managing the release Manikumar!

Ismael

On Tue, 16 Oct 2018, 12:13 Manikumar,  wrote:

> Hi all,
>
> I would like to volunteer to be the release manager for 2.0.1 bug fix
> release.
> 2.0 was released July 30, 2018, and 44 issues have been fixed so far.
>
> Please find all the resolved tickets here:
>
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%202.0.1
>
> Please find the Release plan:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.0.1
>
> If you have any JIRA in progress and would like to include it in this
> release, please discuss with your reviewer.
> There is currently only one blocking issue (
> https://issues.apache.org/jira/browse/KAFKA-7464).
>
> Next week, once the blocking issue gets addressed, I plan to create the
> first RC for the 2.0.1 release.
>
> Thanks,
> Manikumar
>


Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2018-10-16 Thread McCaig, Rhys

> In your example, us-west.us-east.us-central.us-west.topic is an invalid
> "remote topic" name because us-west appears twice. MM2 will not replicate
> us-east.us-central.us-west.topic into us-west a second time, because the
> source topic already has us-west in the prefix. This is what I mean by
> "cycle detection" -- cyclical replication does not result in infinite
> recursion.

Oh - got it, it checks the entire prefix, which seems obvious to me in 
retrospect :)

Rhys


> On Oct 15, 2018, at 3:18 PM, Ryanne Dolan  wrote:
> 
> Rhys, thanks for your enthusiasm!
> 
> In your example, us-west.us-east.us-central.us-west.topic is an invalid
> "remote topic" name because us-west appears twice. MM2 will not replicate
> us-east.us-central.us-west.topic into us-west a second time, because the
> source topic already has us-west in the prefix. This is what I mean by
> "cycle detection" -- cyclical replication does not result in infinite
> recursion.
> 
> It's important to note that MM2 does NOT disallow these sorts of cycles, it
> just knows how to deal with them properly.
> 
> Also notice this is done at the topic level, not per record. The records
> don't need any special header or anything for this cycle detection
> mechanism to work.
> 
> Thanks!
> Ryanne
> 
> On Mon, Oct 15, 2018 at 3:40 PM McCaig, Rhys 
> wrote:
> 
>> Hi Ryanne,
>> 
>> This KIP is fantastic. It provides a great vision for how MirrorMaker
>> should evolve in the Kafka project.
>> 
>> I have a question on cycle detection - In a scenario where I have 3
>> clusters replicating between each other, it seems it may be easy to
>> misconfigure the connectors if auto topic creation is turned on so that
>> records become replicated to increasingly longer topic names (until the
>> topic name limit is reached). Consider clusters us-west, us-central,
>> us-east:
>> 
>> us-west: topic
>> us-central: us-west.topic
>> us-east: us-central.us-west.topic
>> us-west: us-east.us-central.us-west.topic
>> us-central: us-west.us-east.us-central.us-west.topic
>> 
>> I’m not sure whether this scenario would actually justify implementing
>> additional measures to avoid such a configuration, rather than ensuring
>> that the documentation is clear on how to avoid such scenarios - would be
>> good to hear what others think on this.
>> 
>> Excited to see the discussion on this one.
>> 
>> Rhys
>> 
>>> On Oct 15, 2018, at 9:16 AM, Ryanne Dolan  wrote:
>>> 
>>> Hey y'all!
>>> 
>>> Please take a look at KIP-382:
>>> 
>>> 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0
>>> 
>>> Thanks for your feedback and support.
>>> 
>>> Ryanne
>> 
>> 



Build failed in Jenkins: kafka-trunk-jdk8 #3145

2018-10-16 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Update Streams Scala API for addition of Grouped (#5793)

[github] KAFKA-7513: Fix timing issue in SaslAuthenticatorFailureDelayTest

--
[...truncated 2.36 MB...]
org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest 

Build failed in Jenkins: kafka-2.1-jdk8 #29

2018-10-16 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-7513: Fix timing issue in 
SaslAuthenticatorFailureDelayTest

--
[...truncated 437.95 KB...]
 ^
  required: Serializer<V>
  found:    Serializer
  where V is a type-variable:
    V extends Object declared in class OptimizableRepartitionNode
:68:
 warning: [unchecked] unchecked conversion
return  valueSerde != null ? valueSerde.deserializer() : null;
^
  required: Deserializer<V>
  found:    Deserializer
  where V is a type-variable:
    V extends Object declared in class OptimizableRepartitionNode
:78:
 warning: [unchecked] unchecked conversion
final Serializer<K> keySerializer = keySerde != null ?
keySerde.serializer() : null;
  ^
  required: Serializer<K>
  found:    Serializer
  where K is a type-variable:
    K extends Object declared in class OptimizableRepartitionNode
:79:
 warning: [unchecked] unchecked conversion
final Deserializer<K> keyDeserializer = keySerde != null ?
keySerde.deserializer() : null;
^
  required: Deserializer<K>
  found:    Deserializer
  where K is a type-variable:
    K extends Object declared in class OptimizableRepartitionNode
:131:
 warning: [deprecation] fetchAll(long,long) in ReadOnlyWindowStore has been
deprecated
KeyValueIterator<Windowed<K>, V> fetchAll(long timeFrom, long timeTo);
 ^
  where K,V are type-variables:
    K extends Object declared in interface ReadOnlyWindowStore
    V extends Object declared in interface ReadOnlyWindowStore
:113:
 warning: [deprecation] fetch(K,K,long,long) in ReadOnlyWindowStore has been
deprecated
KeyValueIterator<Windowed<K>, V> fetch(K from, K to, long timeFrom, long
timeTo);
 ^
  where K,V are type-variables:
    K extends Object declared in interface ReadOnlyWindowStore
    V extends Object declared in interface ReadOnlyWindowStore
:91:
 warning: [deprecation] fetch(K,long,long) in ReadOnlyWindowStore has been
deprecated
WindowStoreIterator<V> fetch(K key, long timeFrom, long timeTo);
   ^
  where K,V are type-variables:
    K extends Object declared in interface ReadOnlyWindowStore
    V extends Object declared in interface ReadOnlyWindowStore
:260:
 warning: [deprecation] fetchAll(long,long) in ReadOnlyWindowStore has been
deprecated
public KeyValueIterator<Windowed<Bytes>, byte[]> fetchAll(final long
timeFrom, final long timeTo) {
 ^
  where K,V are type-variables:
    K extends Object declared in interface ReadOnlyWindowStore
    V extends Object declared in interface ReadOnlyWindowStore
:207:
 warning: [deprecation] fetch(K,K,long,long) in ReadOnlyWindowStore has been
deprecated
public KeyValueIterator<Windowed<Bytes>, byte[]> fetch(final Bytes from,
final Bytes to, final long timeFrom, final long timeTo) {
 ^
  where K,V are type-variables:
    K extends Object declared in interface ReadOnlyWindowStore
    V extends Object declared in interface ReadOnlyWindowStore
:182:
 warning: [deprecation] fetch(K,long,long) in ReadOnlyWindowStore has been
deprecated
public synchronized WindowStoreIterator<byte[]> fetch(final Bytes key,
final long timeFrom, final long timeTo) {
^
  where K,V are type-variables:
    K extends Object declared in interface ReadOnlyWindowStore
    V

[DISCUSS] 2.0.1 bug fix release

2018-10-16 Thread Manikumar
Hi all,

I would like to volunteer to be the release manager for 2.0.1 bug fix
release.
2.0 was released July 30, 2018, and 44 issues have been fixed so far.

Please find all the resolved tickets here:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%202.0.1

Please find the Release plan:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.0.1

If you have any JIRA in progress and would like to include it in this
release, please discuss with your reviewer.
There is currently only one blocking issue (
https://issues.apache.org/jira/browse/KAFKA-7464).

Next week, once the blocking issue gets addressed, I plan to create the
first RC for the 2.0.1 release.

Thanks,
Manikumar


Build failed in Jenkins: kafka-trunk-jdk11 #39

2018-10-16 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-7513: Fix timing issue in SaslAuthenticatorFailureDelayTest

--
[...truncated 1.84 MB...]

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessUnsupportedType STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessUnsupportedType PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeySchemaless 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeySchemaless 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless PASSED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType STARTED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString PASSED


Re: [EXTERNAL] Incremental Cooperative Rebalancing

2018-10-16 Thread Konstantine Karantasis
Matthias, Ryanne, Rhys, Guozhang, thank you all for your comments!

Ryanne, to try to address your specific comments, let me start by saying
that a key concept behind this proposal is overlapping 'communication'
with 'computation', which is known to often reduce the
overall cost (or latency if you prefer) of an operation that involves
multiple processes compared to global barrier-type synchronization. Of
course this does not rule out that there might be occasions where
stop-the-world might incur smaller overall cost. But given that we'll
always want to minimize communication and shuffling of resources and apply
more sticky heuristics in assignments, I believe that such edge cases will
be considerably fewer in practice.

Specifically here, the goal is to exchange (by revoking and reassigning)
only the necessary resources in the group and allow for the unaffected
resources to continue being used. As mentioned in the motivation section,
this is expected to have a positive effect in a number of use cases, for
which stop-the-world is too strict. Given this key distinction between
affected and unaffected resources (e.g. topic partitions, tasks in Kafka
Connect etc) I anticipate that in most cases, even for resources that need
to change hands, the overall rebalance phase will be faster (especially at
larger scale) than it is today with all the processes participating in
resource hand-off and re-assignment.

-Konstantine


On Fri, Oct 5, 2018 at 12:00 PM Guozhang Wang  wrote:

> Hello Konstantine,
>
> Thanks for the great write-up! Here are a few quick comments I have about
> the proposals:
>
> 1. For "Kubernetes process death" and "Rolling bounce" case, there is
> another parallel work on KIP-345 [1] (cc'ed contributor) that is aimed to
> mitigate these two issues, but it is relying on the fact that we can
> disable sending "leave group" request immediately on shutting down. Ideally
> if KIP-345 works well for these cases, then Simple Cooperative Rebalancing
> itself along with KIP-345 should cover most of the scenarios we've
> described in the wiki. In addition, Delayed / Incremental Imbalance
> approach can be done incrementally on top of Simple approach, so execution
> wise I'd suggest we start with the Simple approach and observe how well it
> works in practice (especially with K8s etc frameworks) before deciding if
> we should go further and implemented the more complicated ones.
>
> 2. For the "events" section, I think it may worth mentioning if there are
> any new client / coordinator failure events that need to be addressed with
> the new protocol, as we listed in the original design [2] [3]. For example,
> what if the leader received different client or resource listings during
> two consecutive rebalances?
>
> 3. It's worth mentioning what the key ideas are in the updated protocol:
>
> 3.a) In the original protocol we require every member to revoke every
> resource before joining the group, which can then be used as the
> "synchronization barrier", and hence it does not matter that clients
> receive assignments at different points in time; in the new protocol we do
> not require members to revoke everything, but instead leverage the leader,
> who has the "global picture", to make sure that there are no conflicts
> between those shared resources, a.k.a. the synchronization barrier.
> 3.b) The new fields in the Assigned / RevokedPartitions fields in the
> responses are now "deltas" instead of "overwrites" to the consumers. Any
> modules relying on them, e.g. Streams, which relies on ConsumerCoordinator,
> need to adjust their code (PartitionAssignor) correspondingly to
> incorporate these semantic changes.
>
> 4. I've added a child page under yours illustrating the implications for
> Streams on rebalance cost reduction [4], since for Streams one key
> characteristic is that standby tasks exist to help with rebalance-incurred
> unavailability, and hence it needs to be considered upfront how Streams
> should leverage the new protocol along with standby tasks to achieve
> better operational goals during rebalances.
>
>
> [1]
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Reduce+multiple+consumer+rebalances+by+specifying+member+id
> [2]
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal#KafkaClient-sideAssignmentProposal-CoordinatorStateMachine
> [3]
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Consumer+Rewrite+Design#Kafka0.9ConsumerRewriteDesign-Interestingscenariostoconsider
> [4]
>
> https://cwiki.apache.org/confluence/display/KAFKA/Incremental+Cooperative+Rebalancing+for+Streams
>
>
> On Thu, Oct 4, 2018 at 12:16 PM, McCaig, Rhys 
> wrote:
>
> > This is fantastic. I'm really excited to see the work on this.
> >
> > > On Oct 2, 2018, at 4:22 PM, Konstantine Karantasis <
> > konstant...@confluent.io> wrote:
> > >
> > > Hey everyone,
> > >
> > > I'd like to bring to your attention a general design 

Re: [VOTE] KIP-376: Implement AutoClosable on appropriate classes that want to be used in a try-with-resource statement

2018-10-16 Thread Yishun Guan
Bumping this thread up again, thanks!

On Fri, Oct 12, 2018, 4:53 PM Colin McCabe  wrote:

> On Fri, Oct 12, 2018, at 15:45, Yishun Guan wrote:
> > Hi Colin,
> >
> > Thanks for your suggestions. I have modified the current KIP with your
> > comments. However, I still think I should keep the entire list, because it
> > is a good way to keep track of which classes need to be changed, and others
> > can discuss whether changes to these internal classes are necessary.
>
> Hi Yishun,
>
> I guess I don't feel that strongly about it.  If you want to keep the
> internal classes in the list, that's fine.  They don't really need to be in
> the KIP but it's OK if they're there.
>
> Thanks for working on this.  +1 (binding).
>
> best,
> Colin
>
> >
> > Thanks,
> > Yishun
> >
> > On Fri, Oct 12, 2018 at 11:42 AM Colin McCabe 
> wrote:
> >
> > > Hi Yishun,
> > >
> > > Thanks for looking at this.
> > >
> > > Under "proposed changes," it's not necessary to add a section where you
> > > demonstrate adding "implements AutoCloseable" to the code.  We know
> what
> > > adding that would look like.
> > >
> > > Can you create a full, single, list of all the classes that would be
> > > affected?  It's not necessary to write who suggested which classes in
> the
> > > KIP.  Also, I noticed some of the classes here are in "internals"
> > > packages.  Given that these are internal classes that aren't part of
> our
> > > API, it's not necessary to add them to the KIP, I think.  Since they
> are
> > > implementation details, they can be changed at any time without a KIP.
> > >
> > > The "compatibility" section should have a discussion of the fact that
> we
> > > can add the new interface without requiring any backwards-incompatible
> > > changes at the source or binary level.  In particular, it would be
> good to
> > > highlight that we are not renaming or changing the existing "close"
> methods.
> > >
> > > Under "rejected alternatives" we could explain why we chose to
> implement
> > > AutoCloseable rather than Closeable.
> > >
> > > cheers,
> > > Colin
> > >
> > >
> > > On Thu, Oct 11, 2018, at 13:48, Yishun Guan wrote:
> > > > Hi,
> > > >
> > > > Just to bump this voting thread up again. Thanks!
> > > >
> > > > Best,
> > > > Yishun
> > > > On Fri, Oct 5, 2018 at 12:58 PM Yishun Guan 
> wrote:
> > > > >
> > > > > Hi,
> > > > >
> > > > > I think we have discussed this well enough to put this into a vote.
> > > > >
> > > > > Suggestions are welcome!
> > > > >
> > > > > Best,
> > > > > Yishun
> > > > >
> > > > > On Wed, Oct 3, 2018, 2:30 PM Yishun Guan 
> wrote:
> > > > >>
> > > > >> Hi All,
> > > > >>
> > > > >> I want to start a voting on this KIP:
> > > > >>
> > >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=93325308
> > > > >>
> > > > >> Here is the discussion thread:
> > > > >>
> > >
> https://lists.apache.org/thread.html/9f6394c28d3d11a67600d5d7001e8aaa318f1ad497b50645654bbe3f@%3Cdev.kafka.apache.org%3E
> > > > >>
> > > > >> Thanks,
> > > > >> Yishun
> > >
>
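
To illustrate Colin's compatibility point above (with a made-up class, not
one of the classes listed in KIP-376): adding "implements AutoCloseable" to
a class that already exposes close() is source- and binary-compatible,
because AutoCloseable.close() throws Exception and an existing close() with
no throws clause is a valid narrowing override.

public class ExampleResource implements AutoCloseable {
    @Override
    public void close() {  // the pre-existing method, unchanged
        // release resources here
    }

    public static void main(String[] args) {
        // Callers gain try-with-resources without any other change:
        try (ExampleResource r = new ExampleResource()) {
            // use r
        }
    }
}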


[jira] [Resolved] (KAFKA-7196) Remove heartbeat delayed operation for those removed consumers at the end of each rebalance

2018-10-16 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-7196.

Resolution: Fixed

> Remove heartbeat delayed operation for those removed consumers at the end of 
> each rebalance
> ---
>
> Key: KAFKA-7196
> URL: https://issues.apache.org/jira/browse/KAFKA-7196
> Project: Kafka
>  Issue Type: Bug
>  Components: core, purgatory
>Reporter: Lincong Li
>Assignee: Lincong Li
>Priority: Minor
> Fix For: 1.0.3, 1.1.2, 2.0.1, 2.1.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> During the consumer group rebalance, when the joining group phase finishes, 
> the heartbeat delayed operation of the consumer that fails to rejoin the 
> group should be removed from the purgatory. Otherwise, even though the member 
> ID of the consumer has been removed from the group, its heartbeat delayed 
> operation is still registered in the purgatory and the heartbeat delayed 
> operation is going to timeout and then another unnecessary rebalance is 
> triggered because of it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Help Needed

2018-10-16 Thread Vikas Talegaonkar
Hello Kafka Dev,
 We need help with a lag issue we are seeing in one of our environments,
which doesn't have much load. We are running Kafka in multiple environments,
and in one of them we see events taking a huge amount of time (sometimes more
than a day) to get processed from Kafka. The topic has two partitions, 3
replicas, and two consumers running on it (so a one-to-one mapping between
partitions and consumers). When I run kafka-consumer-groups.sh to get the
stats, I can see lag on one of the consumers; then the lag moves to the other
consumer after some time, and they keep switching over time, increasing the
time to process events. So it looks to me like rebalancing is happening, but
at the same time the consumer-id stays the same, so the consumers are not
being restarted in between. We also tried restarting Kafka and ZooKeeper but
the end result is the same. Here are the details.


[2018-10-12 03:52:21,676] WARN Removing server circle2-kafka2:909 from 
bootstrap.servers as DNS resolution failed for circle2-kafka2 
(org.apache.kafka.clients.ClientUtils)
group-es
group-rds

[vikas@circle1-kafka1 kafka]$ ./bin/kafka-consumer-groups.sh --bootstrap-server 
circle1-kafka1:9092,circle2-kafka2:9092, circle1-kafka3 -describe -group 
group-rds
Note: This will not show information about old Zookeeper-based consumers.
[2018-10-12 03:53:06,226] WARN Removing server circle2-kafka2:9092 from 
bootstrap.servers as DNS resolution failed for circle2-kafka2 
(org.apache.kafka.clients.ClientUtils)
[2018-10-12 03:53:06,436] WARN Removing server circle2-kafka2:9092 from 
bootstrap.servers as DNS resolution failed for circle2-kafka2 
(org.apache.kafka.clients.ClientUtils)

TOPIC   PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG 
CONSUMER-ID 
  HOSTCLIENT-ID
topic.events1  45471   45471   0   
data-consumer-i-00404a50d7551ef37-circle1-ecs2-group-rds-dc1cb0e1-48fb-40c5-bd96-0e9980e1083d
 /172.27.4.133   data-consumer-i-00404a50d7551ef37-circle1-ecs2-group-rds
topic.events0  344987  346323  1336
data-consumer-i-00404a50d7551ef37-circle1-ecs2-group-rds-3a13af04-048f-40b4-9b09-b74a9600dfd8
 /172.27.4.133   data-consumer-i-00404a50d7551ef37-circle1-ecs2-group-rds



[vikas@circle1-kafka1 kafka]$ ./bin/kafka-consumer-groups.sh --bootstrap-server 
circle1-kafka1:9092,circle2-kafka2:9092,circle1-kafka3 -describe -group 
group-rds
Note: This will not show information about old Zookeeper-based consumers.
[2018-10-12 04:04:29,725] WARN Removing server circle2-kafka2:9092 from 
bootstrap.servers as DNS resolution failed for circle2-kafka2 
(org.apache.kafka.clients.ClientUtils)
[2018-10-12 04:04:29,926] WARN Removing server circle2-kafka2:9092 from 
bootstrap.servers as DNS resolution failed for circle2-kafka2 
(org.apache.kafka.clients.ClientUtils)

TOPIC   PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG 
CONSUMER-ID 
  HOSTCLIENT-ID
topic.events1  44873   45471   598 
data-consumer-i-00404a50d7551ef37-circle1-ecs2-group-rds-dc1cb0e1-48fb-40c5-bd96-0e9980e1083d
 /172.27.4.133   data-consumer-i-00404a50d7551ef37-circle1-ecs2-group-rds
topic.events0  346324  346324  0   
data-consumer-i-00404a50d7551ef37-circle1-ecs2-group-rds-3a13af04-048f-40b4-9b09-b74a9600dfd8
 /172.27.4.133   data-consumer-i-00404a50d7551ef37-circle1-ecs2-group-rds
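
As the LAG column (LOG-END-OFFSET minus CURRENT-OFFSET) shows, the backlog
alternates between the two partitions: in the first snapshot partition 0 lags
by 346323 - 344987 = 1336 while partition 1 is caught up, and in the second
snapshot partition 1 lags by 45471 - 44873 = 598 while partition 0 is caught
up.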



Here is the info of kafka env
1)Version -> kafka_2.11-1.1.0

2)Zookeeper setting -> Default

3)kafka settings -> Most of the settings are defaults; here are the few
specific changes we have made:
zookeeper.connection.timeout.ms=6000
#Setting the replication for nodes under the default of 3
default.replication.factor=3
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
config.storage.replication.factor=3
offset.storage.replication.factor=3
status.storage.replication.factor=3
log.segment.bytes=1073741824
log.retention.check.interval.ms=30
log.retention.hours=24

Please do let me know in case you need more detail from my end.

Your quick help is much appreciated. In case you are not able to help, or I
am at the wrong group, please point me to the right group.

Regards,
Vikas



Re: [DISCUSS] KIP-354 Time-based log compaction policy

2018-10-16 Thread xiongqi wu
Mayuresh,

Thanks for the comments.
The requirement is that we need to pick up segments that are older than
maxCompactionLagMs for compaction.
maxCompactionLagMs is an upper bound, which implies that picking up
segments for compaction earlier doesn't violate the policy.
We use the creation time of a segment as an estimation of its records'
arrival time, so these records can be compacted no later than
maxCompactionLagMs.

On the other hand, compaction is an expensive operation, so we don't want to
compact the log partition whenever a new segment is sealed.
Therefore, we want to pick up a segment for compaction when the segment is
close to the mandatory max compaction lag (so we use segment creation time as
an estimation).


Xiongqi (Wesley) Wu
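
A rough sketch of the selection rule just described. Illustrative only:
"Segment" below is a hypothetical stand-in, not Kafka's actual LogSegment
API.

interface Segment {
    long firstMessageTimestamp();  // -1 when the first record carries no timestamp
    long largestTimestamp();       // lastModified time or max timestamp seen
}

class CompactionLagCheck {
    static long estimateEarliestTimestamp(Segment seg, long maxSegmentMs) {
        if (seg.firstMessageTimestamp() >= 0)
            return seg.firstMessageTimestamp();
        // Fallback estimate: the segment was rolled at most maxSegmentMs after
        // its first record arrived, so this bound may be early but never late.
        return seg.largestTimestamp() - maxSegmentMs;
    }

    static boolean mustCompactNow(Segment seg, long now, long maxSegmentMs,
                                  long maxCompactionLagMs) {
        // Pick the segment up once its estimated earliest record is older than
        // the mandatory lag; compacting earlier never violates the upper bound.
        return now - estimateEarliestTimestamp(seg, maxSegmentMs) > maxCompactionLagMs;
    }
}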


On Mon, Oct 15, 2018 at 5:54 PM Mayuresh Gharat 
wrote:

> Hi Wesley,
>
> Thanks for the KIP and sorry for being late to the party.
>  I wanted to understand the scenario you mentioned in Proposed changes:
>
> -
> >
> > Estimate the earliest message timestamp of an un-compacted log segment. We
> > only need to estimate the earliest message timestamp for un-compacted log
> > segments to ensure timely compaction, because the deletion requests that
> > belong to compacted segments have already been processed.
> >
> >1. for the first (earliest) log segment: the estimated earliest
> >timestamp is set to the timestamp of the first message if a timestamp is
> >present in the message. Otherwise, the estimated earliest timestamp is set
> >to "segment.largestTimestamp - maxSegmentMs"
> >(segment.largestTimestamp is the lastModified time of the log segment or
> >the max timestamp we see for the log segment). In the latter case, the
> >actual timestamp of the first message might be later than the estimation,
> >but it is safe to pick up the log for compaction earlier.
> >
> When we say "actual timestamp of the first message might be later than the
> estimation, but it is safe to pick up the log for compaction earlier",
> doesn't that violate the assumption that we will consider a segment for
> compaction only if the time of creation of the segment has crossed "now -
> maxCompactionLagMs"?
>
> Thanks,
>
> Mayuresh
>
> On Mon, Sep 3, 2018 at 7:28 PM Brett Rann 
> wrote:
>
> > Might also be worth moving to a vote thread? Discussion seems to have
> gone
> > as far as it can.
> >
> > > On 4 Sep 2018, at 12:08, xiongqi wu  wrote:
> > >
> > > Brett,
> > >
> > > Yes, I will post PR tomorrow.
> > >
> > > Xiongqi (Wesley) Wu
> > >
> > >
> > > On Sun, Sep 2, 2018 at 6:28 PM Brett Rann 
> > wrote:
> > >
> > > > +1 (non-binding) from me on the interface. I'd like to see someone
> > familiar
> > > > with
> > > > the code comment on the approach, and note there's a couple of
> > different
> > > > approaches: what's documented in the KIP, and what Xiaohe Dong was
> > working
> > > > on
> > > > here:
> > > >
> > > >
> >
> https://github.com/dongxiaohe/kafka/tree/dongxiaohe/log-cleaner-compaction-max-lifetime-2.0
> > > >
> > > > If you have code working already Xiongqi Wu could you share a PR? I'd
> > be
> > > > happy
> > > > to start testing.
> > > >
> > > > On Tue, Aug 28, 2018 at 5:57 AM xiongqi wu 
> > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > Do you have any additional comments on this KIP?
> > > > >
> > > > >
> > > > > On Thu, Aug 16, 2018 at 9:17 PM, xiongqi wu 
> > wrote:
> > > > >
> > > > > > on 2)
> > > > > > The offsetmap is built starting from dirty segment.
> > > > > > The compaction starts from the beginning of the log partition.
> > That's
> > > > how
> > > > > > it ensure the deletion of tomb keys.
> > > > > > I will double check tomorrow.
> > > > > >
> > > > > > Xiongqi (Wesley) Wu
> > > > > >
> > > > > >
> > > > > > On Thu, Aug 16, 2018 at 6:46 PM Brett Rann
> > 
> > > > > > wrote:
> > > > > >
> > > > > >> To just clarify a bit on 1. whether there's an external
> storage/DB
> > > > isn't
> > > > > >> relevant here.
> > > > > >> Compacted topics allow a tombstone record to be sent (a null
> value
> > > > for a
> > > > > >> key) which
> > > > > >> currently will result in old values for that key being deleted
> if
> > some
> > > > > >> conditions are met.
> > > > > >> There are existing controls to make sure the old values will
> stay
> > > > around
> > > > > >> for a minimum
> > > > > >> time at least, but no dedicated control to ensure the tombstone
> > will
> > > > > >> delete
> > > > > >> within a
> > > > > >> maximum time.
> > > > > >>
> > > > > >> One popular reason that maximum time for deletion is desirable
> > right
> > > > now
> > > > > >> is
> > > > > >> GDPR with
> > > > > >> PII. But we're not proposing any GDPR awareness in kafka, just
> > being
> > > > > able
> > > > > >> to guarantee
> > > > > >> a max time where a tombstoned key will be removed from the
> > compacted
> > > > > >> topic.
> > > > > >>
> > > > > >> on 2)
> > > > > >> huh, i thought it kept track of the first dirty segment and
> didn't
> > > > > >> 

Re: [DISCUSS] Apache Kafka 2.1.0 Release Plan

2018-10-16 Thread Dong Lin
Hey everyone,

Thanks for all the contributions! Just a kind reminder that the code is now
frozen for the 2.1.0 release.

Thanks,
Dong

On Mon, Oct 1, 2018 at 4:31 PM Dong Lin  wrote:

> Hey everyone,
>
> Hope things are going well!
>
> Just a kind reminder that the feature freeze time is end of day today.
> Major features (e.g. KIP implementation) need to be merged and minor
> features need to be have PR ready. After today I will move pending JIRAs to
> the next release as appropriate.
>
> Cheers,
> Dong
>
>
>
> On Wed, 26 Sep 2018 at 1:06 AM Dong Lin  wrote:
>
>> Hey Edoardo,
>>
>> Certainly, let's add it to this release since it has passed vote. KIP-81
>> was previously marked as WIP for 2.0.0 release. I updated to be WIP for
>> 2.1.0 release in
>> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
>>  and
>> added it to
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044
>> .
>>
>> Thanks,
>> Dong
>>
>> On Tue, Sep 25, 2018 at 1:52 AM Edoardo Comar  wrote:
>>
>>> Hi Dong
>>> many thanks for driving the release!
>>>
>>> KIP-81 previously voted as adopted has a ready-to-review JIRA and PR.
>>> Shall we just amend the wiki ?
>>> --
>>> Edoardo Comar
>>> IBM Event Streams
>>> IBM UK Ltd, Hursley Park, SO21 2JN
>>>
>>>
>>>
>>>
>>> From:Dong Lin 
>>> To:dev , Users ,
>>> kafka-clients 
>>> Date:25/09/2018 08:17
>>> Subject:Re: [DISCUSS] Apache Kafka 2.1.0 Release Plan
>>> --
>>>
>>>
>>>
>>> Hey everyone,
>>>
>>> According to the previously discussed schedule, I have updated
>>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044
>>> to include all the KIPs that have passed vote at this moment to be
>>> included
>>> in Apache Kafka 2.1.0 release. Other KIPs that have not passed vote will
>>> need to be included in the next feature release.
>>>
>>> Just a reminder that the feature freeze date is Oct 1, 2018. In order to be
>>> included in the release, major features need to be merged and minor
>>> features need to have PRs ready. Any feature not in this state will be
>>> automatically moved to the next release after Oct 1.
>>>
>>> Regards,
>>> Dong
>>>
>>> On Tue, Sep 25, 2018 at 12:02 AM Dong Lin  wrote:
>>>
>>> > cc us...@kafka.apache.org and kafka-clie...@googlegroups.com
>>> >
>>> > On Sun, Sep 9, 2018 at 5:31 PM Dong Lin  wrote:
>>> >
>>> >> Hi all,
>>> >>
>>> >> I would like to be the release manager for our next time-based feature
>>> >> release 2.1.0.
>>> >>
>>> >> The recent Kafka release history can be found at
>>> >> https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
>>> .
>>> >> The release plan (with open issues and planned KIPs) for 2.1.0 can be
>>> found
>>> >> at
>>> >>
>>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044
>>> >> .
>>> >>
>>> >> Here are the dates we have planned for Apache Kafka 2.1.0 release:
>>> >>
>>> >> 1) KIP Freeze: Sep 24, 2018.
>>> >> A KIP must be accepted by this date in order to be considered for this
>>> >> release)
>>> >>
>>> >> 2) Feature Freeze: Oct 1, 2018
>>> >> Major features merged & working on stabilization, minor features have
>>> PR,
>>> >> release branch cut; anything not in this state will be automatically
>>> moved
>>> >> to the next release in JIRA.
>>> >>
>>> >> 3) Code Freeze: Oct 15, 2018 (Tentatively)
>>> >>
>>> >> The KIP and feature freeze date is about 3-4 weeks from now. Please plan
>>> >> accordingly for the features you want to push into the Apache Kafka 2.1.0
>>> >> release.
>>> >>
>>> >>
>>> >> Cheers,
>>> >> Dong
>>> >>
>>> >
>>>
>>>
>>>
>>> Unless stated otherwise above:
>>> IBM United Kingdom Limited - Registered in England and Wales with number
>>> 741598.
>>> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
>>> 3AU
>>>
>>


Re: [DISCUSS] KIP-381 Connect: Tell about records that had their offsets flushed in callback

2018-10-16 Thread Ryanne Dolan
Steff,

> Guess people have used it, assuming that all records that have been polled
> at the time of callback to "commit", have also had their offsets committed.
> But that is not true.

(excerpt from KIP)

The documentation for SourceTask.commit() reads:

> Commit the offsets, up to the offsets that have been returned by {@link #poll()}.
> This method should block until the commit is complete.

I'm confused by these seemingly contradictory statements. My assumption (as
you say) is that all records returned by poll() will have been committed
before commit() is invoked by the framework. Is that not the case?
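
For what it's worth, here is a minimal sketch of a SourceTask written against
that reading of the javadoc (hypothetical task, not from the KIP):

import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

// Hypothetical task illustrating the contract under discussion: the javadoc
// suggests that by the time the framework calls commit(), offsets for all
// records previously returned from poll() have been flushed -- which is
// exactly the assumption the KIP says does not hold today.
public class CountingSourceTask extends SourceTask {
    private long next = 0;

    @Override
    public void start(Map<String, String> props) {}

    @Override
    public List<SourceRecord> poll() {
        Map<String, ?> partition = Collections.singletonMap("source", "counter");
        Map<String, ?> offset = Collections.singletonMap("position", next);
        return Collections.singletonList(new SourceRecord(
            partition, offset, "counting-topic", Schema.INT64_SCHEMA, next++));
    }

    @Override
    public void commit() {
        // Per the javadoc, offsets "up to the offsets that have been returned
        // by poll()" are committed at this point. The KIP argues the framework
        // may actually have flushed fewer offsets than that.
    }

    @Override
    public void stop() {}

    @Override
    public String version() { return "0.1"; }
}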

Ryanne

On Wed, Oct 10, 2018 at 8:50 AM Per Steffensen  wrote:

> Please help make the proposed changes in KIP-381 become reality. Please
> comment.
>
> KIP:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-381%3A+Connect%3A+Tell+about+records+that+had+their+offsets+flushed+in+callback
>
> JIRA: https://issues.apache.org/jira/browse/KAFKA-5716
>
> PR: https://github.com/apache/kafka/pull/3872
>
> Thanks!
>
>
>


Build failed in Jenkins: kafka-2.1-jdk8 #28

2018-10-16 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Update Streams Scala API for addition of Grouped (#5793)

--
[...truncated 437.92 KB...]
                                                              ^
  required: Serializer<V>
  found:    Serializer
  where V is a type-variable:
    V extends Object declared in class OptimizableRepartitionNode
:68: warning: [unchecked] unchecked conversion
        return valueSerde != null ? valueSerde.deserializer() : null;
                                                              ^
  required: Deserializer<V>
  found:    Deserializer
  where V is a type-variable:
    V extends Object declared in class OptimizableRepartitionNode
:78: warning: [unchecked] unchecked conversion
        final Serializer<K> keySerializer = keySerde != null ? keySerde.serializer() : null;
                                                                                     ^
  required: Serializer<K>
  found:    Serializer
  where K is a type-variable:
    K extends Object declared in class OptimizableRepartitionNode
:79: warning: [unchecked] unchecked conversion
        final Deserializer<K> keyDeserializer = keySerde != null ? keySerde.deserializer() : null;
                                                                                           ^
  required: Deserializer<K>
  found:    Deserializer
  where K is a type-variable:
    K extends Object declared in class OptimizableRepartitionNode
:131: warning: [deprecation] fetchAll(long,long) in ReadOnlyWindowStore has been deprecated
    KeyValueIterator<Windowed<K>, V> fetchAll(long timeFrom, long timeTo);
                                     ^
  where K,V are type-variables:
    K extends Object declared in interface ReadOnlyWindowStore
    V extends Object declared in interface ReadOnlyWindowStore
:113: warning: [deprecation] fetch(K,K,long,long) in ReadOnlyWindowStore has been deprecated
    KeyValueIterator<Windowed<K>, V> fetch(K from, K to, long timeFrom, long timeTo);
                                     ^
  where K,V are type-variables:
    K extends Object declared in interface ReadOnlyWindowStore
    V extends Object declared in interface ReadOnlyWindowStore
:91: warning: [deprecation] fetch(K,long,long) in ReadOnlyWindowStore has been deprecated
    WindowStoreIterator<V> fetch(K key, long timeFrom, long timeTo);
                           ^
  where K,V are type-variables:
    K extends Object declared in interface ReadOnlyWindowStore
    V extends Object declared in interface ReadOnlyWindowStore
:260: warning: [deprecation] fetchAll(long,long) in ReadOnlyWindowStore has been deprecated
    public KeyValueIterator<Windowed<Bytes>, byte[]> fetchAll(final long timeFrom, final long timeTo) {
                                                     ^
  where K,V are type-variables:
    K extends Object declared in interface ReadOnlyWindowStore
    V extends Object declared in interface ReadOnlyWindowStore
:207: warning: [deprecation] fetch(K,K,long,long) in ReadOnlyWindowStore has been deprecated
    public KeyValueIterator<Windowed<Bytes>, byte[]> fetch(final Bytes from, final Bytes to, final long timeFrom, final long timeTo) {
                                                     ^
  where K,V are type-variables:
    K extends Object declared in interface ReadOnlyWindowStore
    V extends Object declared in interface ReadOnlyWindowStore
:182: warning: [deprecation] fetch(K,long,long) in ReadOnlyWindowStore has been deprecated
    public synchronized WindowStoreIterator<byte[]> fetch(final Bytes key, final long timeFrom, final long timeTo) {
                                                    ^
  where K,V are type-variables:
    K extends Object declared in interface ReadOnlyWindowStore
V extends 

Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2018-10-16 Thread Ryanne Dolan
> Could you comment on the approach of
> your method vs. using other open source tools like Uber's uReplicator or
> the recently open-sourced Mirus from Salesforce?

Eno, a primary differentiator is that KIP-382 is "opinionated" about how
replication should be done, e.g. by applying topic renaming by default,
which enables generic tooling that the entire community can benefit from,
while avoiding many pitfalls that aren't obvious except at very large scale.

Existing solutions are fine for copying records between clusters, but there
is no solution that provides a holistic strategy for backup, disaster
recovery, consumer migration, failover/failback, etc. These require
consistent record order, semantic partitioning, inter-cluster checkpoints,
offset translation, heartbeats, and related tooling. Nothing in the wild
today provides these essential features. And without a proper foundation to
build on, there's no way to solve these problems generically.
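
For illustration, the prefix check at the heart of this can be sketched like
so (my sketch with hypothetical names, not code from the KIP; it assumes
cluster aliases and base topic names contain no dots):

public class CycleCheck {
    // A topic is not replicated into a cluster whose alias already appears
    // in the dot-separated prefix of the source topic name.
    static boolean alreadyReplicatedTo(String sourceTopic, String targetAlias) {
        for (String part : sourceTopic.split("\\.")) {
            if (part.equals(targetAlias)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // us-west already appears in the prefix, so MM2 would skip this topic:
        System.out.println(alreadyReplicatedTo("us-east.us-central.us-west.topic", "us-west")); // true
        // no cycle yet, safe to replicate into us-west:
        System.out.println(alreadyReplicatedTo("us-east.us-central.topic", "us-west")); // false
    }
}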

What we have today is a proliferation of similar tools that leave the hard
parts to individual teams to figure out. I'm endeavoring to solve these
problems for the entire community.

Thanks for your support!
Ryanne

On Tue, Oct 16, 2018 at 5:36 AM Eno Thereska  wrote:

> This update is much needed, thank you! Could you comment on the approach of
> your method vs. using other open source tools like Uber's uReplicator or
> the recently open-sourced Mirus from Salesforce? (
> https://engineering.salesforce.com/open-sourcing-mirus-3ec2c8a38537). I
> strongly believe Mirrormaker itself needs an upgrade, so I'm not
> questioning that, but more on the technical side of the solution.
>
> Thanks
> Eno
>
> On Mon, Oct 15, 2018 at 11:19 PM Ryanne Dolan 
> wrote:
>
> > Rhys, thanks for your enthusiasm!
> >
> > In your example, us-west.us-east.us-central.us-west.topic is an invalid
> > "remote topic" name because us-west appears twice. MM2 will not replicate
> > us-east.us-central.us-west.topic into us-west a second time, because the
> > source topic already has us-west in the prefix. This is what I mean by
> > "cycle detection" -- cyclical replication does not result in infinite
> > recursion.
> >
> > It's important to note that MM2 does NOT disallow these sort of cycles,
> it
> > just knows how to deal with them properly.
> >
> > Also notice this is done at the topic level, not per record. The records
> > don't need any special header or anything for this cycle detection
> > mechanism to work.
> >
> > Thanks!
> > Ryanne
> >
> > On Mon, Oct 15, 2018 at 3:40 PM McCaig, Rhys 
> > wrote:
> >
> > > Hi Ryanne,
> > >
> > > This KIP is fantastic. It provides a great vision for how MirrorMaker
> > > should evolve in the Kafka project.
> > >
> > > I have a question on cycle detection - In a scenario where I have 3
> > > clusters replicating between each other, it seems it may be easy to
> > > misconfigure the connectors if auto topic creation is turned on so that
> > > records become replicated to increasingly longer topic names (until the
> > > topic name limit is reached). Consider clusters us-west, us-central,
> > > us-east:
> > >
> > > us-west: topic
> > > us-central: us-west.topic
> > > us-east: us-central.us-west.topic
> > > us-west: us-east.us-central.us-west.topic
> > > us-central: us-west.us-east.us-central.us-west.topic
> > >
> > > I’m not sure whether this scenario would actually justify implementing
> > > additional measures to avoid such a configuration, rather than ensuring
> > > that the documentation is clear on how to avoid such scenarios - would
> be
> > > good to hear what others think on this.
> > >
> > > Excited to see the discussion on this one.
> > >
> > > Rhys
> > >
> > > > On Oct 15, 2018, at 9:16 AM, Ryanne Dolan 
> > wrote:
> > > >
> > > > Hey y'all!
> > > >
> > > > Please take a look at KIP-382:
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0
> > > >
> > > > Thanks for your feedback and support.
> > > >
> > > > Ryanne
> > >
> > >
> >
>


Re: [DISCUSS] KIP-377: TopicCommand to use AdminClient

2018-10-16 Thread Viktor Somogyi-Vass
Hi Colin,

Thanks, it makes sense and simplifies this KIP tremendously. I'll move this
section to the rejected alternatives with a note that KIP-142 will have
this feature.
On a similar note: is there a KIP for describe topics protocol or have you
been thinking about it? I guess it's the same problem there: we often don't
want to forward the entire metadata.
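
For reference, both of those calls are served from the Metadata request today,
which is why topics pending deletion never show up. A minimal sketch of the
AdminClient-based listing (standard AdminClient API; the broker address is a
placeholder):

import java.util.Map;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class ListTopicsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Both calls below are backed by cluster metadata, so topics
            // stuck in "marked for deletion" do not appear in the results.
            Set<String> names = admin.listTopics().names().get();
            for (Map.Entry<String, TopicDescription> e :
                    admin.describeTopics(names).all().get().entrySet()) {
                System.out.println(e.getKey() + " -> " + e.getValue().partitions().size() + " partitions");
            }
        }
    }
}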

Viktor

On Fri, Oct 12, 2018 at 12:03 PM Colin McCabe  wrote:

> Hi Viktor,
>
> Thanks for bumping this thread.
>
> I think we should just focus on transitioning the TopicCommand to use
> AdminClient, and talk about protocol changes in a separate KIP.  Protocol
> changes often involve a lot of discussion.  This does mean that we couldn't
> implement the "list topics under deletion" feature when using AdminClient
> at the moment.  We could add a note to the tool output indicating this.
>
> We should move the protocol discussion to a separate thread.  Probably
> also look at KIP-142 as well.
>
> best,
> Colin
>
>
> On Tue, Oct 9, 2018, at 07:45, Viktor Somogyi-Vass wrote:
> > Hi All,
> >
> > Would like to bump this as the conversation sank a little bit, but more
> > importantly I'd like to validate my plans/ideas on extending the Metadata
> > protocol. I was thinking about two other alternatives, namely:
> > 1. Create a ListTopicUnderDeletion protocol. This however would be
> > unnecessary: it'd have one very narrow functionality which we can't
> extend.
> > It'd make sense to have a list topics or describe topics protocol where we
> > can list/describe topics under deletion but for normal listing/describing
> > we already use the metadata, so it would be a duplication of
> functionality.
> > 2. DeleteTopicsResponse could return the topics under deletion if the
> > request's argument list is empty which might make sense at the first
> look,
> > but actually we'd mix the query functionality with the delete
> functionality
> > which is counterintuitive.
> >
> > Even though most clients won't need these "limbo" topics (which are under
> > deletion) in the foreseeable future, it can be considered as part of the
> > cluster state or metadata and to me it makes sense. Also it doesn't have
> a
> > big overhead in the response size as typically users don't delete topics
> > too often, in my experience.
> >
> > I'd be happy to receive some ideas/feedback on this.
> >
> > Cheers,
> > Viktor
> >
> >
> > On Fri, Sep 28, 2018 at 4:51 PM Viktor Somogyi-Vass <
> viktorsomo...@gmail.com>
> > wrote:
> >
> > > Hi All,
> > >
> > > I made an update to the KIP. Just in short:
> > > Currently KafkaAdminClient.describeTopics() and
> > > KafkaAdminClient.listTopics() uses the Metadata protocol to acquire
> topic
> > > information. The returned response however won't contain the topics
> that
> > > are under deletion but couldn't complete yet (for instance because of
> some
> > > replicas offline), therefore it is not possible to implement the
> current
> > > command's "marked for deletion" feature. To get around this I
> introduced
> > > some changes in the Metadata protocol.
> > >
> > > Thanks,
> > > Viktor
> > >
> > > On Fri, Sep 28, 2018 at 4:48 PM Viktor Somogyi-Vass <
> > > viktorsomo...@gmail.com> wrote:
> > >
> > >> Hi Mickael,
> > >>
> > >> Thanks for the feedback, I also think that many customers wanted this
> for
> > >> a long time.
> > >>
> > >> Cheers,
> > >> Viktor
> > >>
> > >> On Fri, Sep 28, 2018 at 11:45 AM Mickael Maison <
> mickael.mai...@gmail.com>
> > >> wrote:
> > >>
> > >>> Hi Viktor,
> > >>> Thanks for taking this task!
> > >>> This is a very nice change as it will allow users to use this tool in
> > >>> many Cloud environments where direct zookeeper access is not
> > >>> available.
> > >>>
> > >>>
> > >>> On Thu, Sep 27, 2018 at 10:34 AM Viktor Somogyi-Vass
> > >>>  wrote:
> > >>> >
> > >>> > Hi All,
> > >>> >
> > >>> > This is the continuation of the old KIP-375 with the same title:
> > >>> >
> > >>>
> https://lists.apache.org/thread.html/dc71d08de8cd2f082765be22c9f88bc9f8b39bb8e0929a3a4394e9da@%3Cdev.kafka.apache.org%3E
> > >>> >
> > >>> > The problem there was that two KIPs were created around the same
> time
> > >>> and I
> > >>> > chose to reorganize mine a bit and give it a new number to avoid
> > >>> > duplication.
> > >>> >
> > >>> > The KIP summary here once again:
> > >>> >
> > >>> > I wrote up a relatively simple KIP about improving the Kafka
> protocol
> > >>> and
> > >>> > the TopicCommand tool to support the new Java based AdminClient and
> > >>> > hopefully to deprecate the Zookeeper side of it.
> > >>> >
> > >>> > I would be happy to receive some opinions about this. In general I
> > >>> think
> > >>> > this would be an important addition as this is one of the few left
> but
> > >>> > important tools that still uses direct Zookeeper connection.
> > >>> >
> > >>> > Here is the link for the KIP:
> > >>> >
> > >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-377%3A+TopicCommand+to+use+AdminClient
> > >>> >
> > >>> > 

Build failed in Jenkins: kafka-trunk-jdk11 #38

2018-10-16 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Update Streams Scala API for addition of Grouped (#5793)

--
[...truncated 1.84 MB...]
org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanTrue PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidTargetType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidTargetType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
testConfigMixWholeAndFieldTransformation STARTED

org.apache.kafka.connect.transforms.CastTest > 
testConfigMixWholeAndFieldTransformation PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessUnsupportedType STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessUnsupportedType PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeySchemaless 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeySchemaless 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless PASSED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType STARTED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 STARTED


Build failed in Jenkins: kafka-trunk-jdk8 #3144

2018-10-16 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-7496: Handle invalid filters gracefully in

--
[...truncated 2.35 MB...]
org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema PASSED

org.apache.kafka.connect.transforms.ValueToKeyTest > withSchema STARTED

org.apache.kafka.connect.transforms.ValueToKeyTest > withSchema PASSED

org.apache.kafka.connect.transforms.ValueToKeyTest > schemaless STARTED

org.apache.kafka.connect.transforms.ValueToKeyTest > schemaless PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testNullList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testNullList PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testEmptyList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testEmptyList PASSED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testValidList STARTED

org.apache.kafka.connect.transforms.util.NonEmptyListValidatorTest > 
testValidList PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime 

[jira] [Resolved] (KAFKA-7513) Flaky test SaslAuthenticatorFailureDelayTest.testInvalidPasswordSaslPlain

2018-10-16 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-7513.
---
Resolution: Fixed
  Reviewer: Ismael Juma

> Flaky test SaslAuthenticatorFailureDelayTest.testInvalidPasswordSaslPlain
> -
>
> Key: KAFKA-7513
> URL: https://issues.apache.org/jira/browse/KAFKA-7513
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.1.0
>
>
> Have seen this test fail quite a few times in PR builds (e.g. 
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/123):
> {code}
> java.lang.AssertionError: expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.kafka.common.network.NetworkTestUtils.waitForChannelClose(NetworkTestUtils.java:114)
>   at 
> org.apache.kafka.common.security.authenticator.SaslAuthenticatorFailureDelayTest.createAndCheckClientConnectionFailure(SaslAuthenticatorFailureDelayTest.java:223)
>   at 
> org.apache.kafka.common.security.authenticator.SaslAuthenticatorFailureDelayTest.createAndCheckClientAuthenticationFailure(SaslAuthenticatorFailureDelayTest.java:212)
>   at 
> org.apache.kafka.common.security.authenticator.SaslAuthenticatorFailureDelayTest.testInvalidPasswordSaslPlain(SaslAuthenticatorFailureDelayTest.java:115)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2018-10-16 Thread Ryanne Dolan
> But one big obstacle in this was
> always that group coordination happened on the source cluster.

Jan, thank you for bringing up this issue with legacy MirrorMaker. I
totally agree with you. This is one of several problems with MirrorMaker I
intend to solve in MM2, and I already have a design and prototype that
solves this and related issues. But as you pointed out, this KIP is already
rather complex, and I want to focus on the core feature set rather than
performance optimizations for now. If we can agree on what MM2 looks like,
it will be very easy to agree to improve its performance and reliability.

That said, I look forward to your support on a subsequent KIP that
addresses consumer coordination and rebalance issues. Stay tuned!

Ryanne

On Tue, Oct 16, 2018 at 6:58 AM Jan Filipiak 
wrote:

> Hi,
>
> Currently MirrorMaker is usually run collocated with the target cluster.
> This is all nice and good. But one big obstacle in this was
> always that group coordination happened on the source cluster. So when
> then network was congested, you sometimes loose group membership and
> have to rebalance and all this.
>
> So one big request from we would be the support of having coordination
> cluster != source cluster.
>
> I would generally say a LAN is better than a WAN for doing group
> coordinaton and there is no reason we couldn't have a group consuming
> topics from a different cluster and committing offsets to another one
> right?
>
> Other than that. It feels like the KIP has too much features where many
> of them are not really wanted and counter productive but I will just
> wait and see how the discussion goes.
>
> Best Jan
>
>
> On 15.10.2018 18:16, Ryanne Dolan wrote:
> > Hey y'all!
> >
> > Please take a look at KIP-382:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0
> >
> > Thanks for your feedback and support.
> >
> > Ryanne
> >
>


Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2018-10-16 Thread Jan Filipiak
no worries,

glad i could clarify

On 16.10.2018 15:14, Andrew Otto wrote:
> O ok apologies. Interesting!
>
> On Tue, Oct 16, 2018 at 9:06 AM Jan Filipiak 
> wrote:
>
>> Hi Andrew,
>>
>> thanks for your message, you missed my point.
>>
>> Mirrormaker collocation with target is for sure correct.
>> But then group coordination happens across WAN which is unnecessary.
>> And I request to be thought about again.
>> I made a PR back then for zk Consumer to allow having 2 zookeeper
>> connects. One for group coordination one for broker and topic discovery.
>>
>> I am requesting this to be added to the kip so that the target cluster
>> can become the group coordinator.
>>
>>
>>
>> On 16.10.2018 15:04, Andrew Otto wrote:
 I would generally say a LAN is better than a WAN for doing group
 coordinaton
>>>
>>> For sure, but a LAN is better than a WAN for producing messages too.  If
>>> there is network congestion during network production, messages will be
>>> dropped.  With MirrorMaker currently, you can either skip these dropped
>>> messages, or have the MirrorMaker processes themselves die on produce
>>> failure, which will also cause (a series) of MirrorMaker consumer
>>> rebalances.
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Tue, Oct 16, 2018 at 7:58 AM Jan Filipiak 
>>> wrote:
>>>
 Hi,

 Currently MirrorMaker is usually run collocated with the target cluster.
 This is all nice and good. But one big obstacle in this was
 always that group coordination happened on the source cluster. So when
 then network was congested, you sometimes loose group membership and
 have to rebalance and all this.

 So one big request from we would be the support of having coordination
 cluster != source cluster.

 I would generally say a LAN is better than a WAN for doing group
 coordinaton and there is no reason we couldn't have a group consuming
 topics from a different cluster and committing offsets to another one
 right?

 Other than that. It feels like the KIP has too much features where many
 of them are not really wanted and counter productive but I will just
 wait and see how the discussion goes.

 Best Jan


 On 15.10.2018 18:16, Ryanne Dolan wrote:
> Hey y'all!
>
> Please take a look at KIP-382:
>
>

>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0
>
> Thanks for your feedback and support.
>
> Ryanne
>

>>>
>>
>


[jira] [Resolved] (KAFKA-7496) KafkaAdminClient#describeAcls should handle invalid filters gracefully

2018-10-16 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-7496.
---
   Resolution: Fixed
 Reviewer: Rajini Sivaram
Fix Version/s: 2.1.0

> KafkaAdminClient#describeAcls should handle invalid filters gracefully
> --
>
> Key: KAFKA-7496
> URL: https://issues.apache.org/jira/browse/KAFKA-7496
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Minor
> Fix For: 2.1.0
>
>
> KafkaAdminClient#describeAcls should handle invalid filters gracefully.  
> Specifically, it should return a future which yields an exception.
> The following code results in an uncaught IllegalArgumentException in the 
> admin client thread, resulting in a zombie admin client.
> {code}
> AclBindingFilter aclFilter = new AclBindingFilter(
> new ResourcePatternFilter(ResourceType.UNKNOWN, null, PatternType.ANY),
> AccessControlEntryFilter.ANY
> );
> kafkaAdminClient.describeAcls(aclFilter).values().get();
> {code}
> See the resulting stacktrace below
> {code}
> ERROR [kafka-admin-client-thread | adminclient-3] Uncaught exception in 
> thread 'kafka-admin-client-thread | adminclient-3': 
> (org.apache.kafka.common.utils.KafkaThread)
> java.lang.IllegalArgumentException: Filter contain UNKNOWN elements
> at 
> org.apache.kafka.common.requests.DescribeAclsRequest.validate(DescribeAclsRequest.java:140)
> at 
> org.apache.kafka.common.requests.DescribeAclsRequest.<init>(DescribeAclsRequest.java:92)
> at 
> org.apache.kafka.common.requests.DescribeAclsRequest$Builder.build(DescribeAclsRequest.java:77)
> at 
> org.apache.kafka.common.requests.DescribeAclsRequest$Builder.build(DescribeAclsRequest.java:67)
> at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:450)
> at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:411)
> at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:910)
> at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1107)
> at java.base/java.lang.Thread.run(Thread.java:844)
> {code}
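> With the fix, a caller should instead see the failure on the returned future. A sketch of the expected caller-side behavior (the exact exception type wrapped as the cause may differ):
> {code}
> try {
> kafkaAdminClient.describeAcls(aclFilter).values().get();
> } catch (java.util.concurrent.ExecutionException e) {
> // the invalid-filter error surfaces as the future's failure cause,
> // and the admin client thread stays alive for subsequent calls
> System.err.println("describeAcls failed: " + e.getCause());
> }
> {code}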



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk11 #37

2018-10-16 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-7496: Handle invalid filters gracefully in

--
[...truncated 1.84 MB...]

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessUnsupportedType STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessUnsupportedType PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeySchemaless 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeySchemaless 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless PASSED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType STARTED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString PASSED

org.apache.kafka.connect.transforms.CastTest > 

Build failed in Jenkins: kafka-2.1-jdk8 #27

2018-10-16 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-7496: Handle invalid filters gracefully in

--
[...truncated 438.41 KB...]
                                                              ^
  required: Serializer<V>
  found:    Serializer
  where V is a type-variable:
    V extends Object declared in class OptimizableRepartitionNode
:68: warning: [unchecked] unchecked conversion
        return valueSerde != null ? valueSerde.deserializer() : null;
                                                              ^
  required: Deserializer<V>
  found:    Deserializer
  where V is a type-variable:
    V extends Object declared in class OptimizableRepartitionNode
:78: warning: [unchecked] unchecked conversion
        final Serializer<K> keySerializer = keySerde != null ? keySerde.serializer() : null;
                                                                                     ^
  required: Serializer<K>
  found:    Serializer
  where K is a type-variable:
    K extends Object declared in class OptimizableRepartitionNode
:79: warning: [unchecked] unchecked conversion
        final Deserializer<K> keyDeserializer = keySerde != null ? keySerde.deserializer() : null;
                                                                                           ^
  required: Deserializer<K>
  found:    Deserializer
  where K is a type-variable:
    K extends Object declared in class OptimizableRepartitionNode
:131: warning: [deprecation] fetchAll(long,long) in ReadOnlyWindowStore has been deprecated
    KeyValueIterator<Windowed<K>, V> fetchAll(long timeFrom, long timeTo);
                                     ^
  where K,V are type-variables:
    K extends Object declared in interface ReadOnlyWindowStore
    V extends Object declared in interface ReadOnlyWindowStore
:113: warning: [deprecation] fetch(K,K,long,long) in ReadOnlyWindowStore has been deprecated
    KeyValueIterator<Windowed<K>, V> fetch(K from, K to, long timeFrom, long timeTo);
                                     ^
  where K,V are type-variables:
    K extends Object declared in interface ReadOnlyWindowStore
    V extends Object declared in interface ReadOnlyWindowStore
:91: warning: [deprecation] fetch(K,long,long) in ReadOnlyWindowStore has been deprecated
    WindowStoreIterator<V> fetch(K key, long timeFrom, long timeTo);
                           ^
  where K,V are type-variables:
    K extends Object declared in interface ReadOnlyWindowStore
    V extends Object declared in interface ReadOnlyWindowStore
:550: warning: [unchecked] unchecked method invocation: constructor <init> in class KTableImpl is applied to given types
    return new KTableImpl, R>(
           ^
  required: String,Serde,Serde,Set,String,boolean,ProcessorSupplier,StreamsGraphNode,InternalStreamsBuilder
  found:    String,Serde,Serde,Set,String,boolean,KTableKTableJoinMerger,KTableKTableJoinNode,Change,Change>,InternalStreamsBuilder
  where K,V,R,V1 are type-variables:
    K extends Object declared in class KTableImpl
    V extends Object declared in class KTableImpl
    R extends Object declared in method buildJoin(AbstractStream,ValueJoiner,boolean,boolean,String,String,MaterializedInternal)
    V1 extends Object declared in method buildJoin(AbstractStream,ValueJoiner,boolean,boolean,String,String,MaterializedInternal)
:61: warning: [unchecked] unchecked method invocation: constructor <init> in class WindowedStreamPartitioner is applied to given types
    final StreamPartitioner windowedPartitioner = (StreamPartitioner) new WindowedStreamPartitioner((WindowedSerializer) keySerializer);
                                                                                                     ^
  required: WindowedSerializer<K>
  found:    WindowedSerializer
  where K is a type-variable:
    K extends Object declared in class WindowedStreamPartitioner

Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2018-10-16 Thread Andrew Otto
O ok apologies. Interesting!

On Tue, Oct 16, 2018 at 9:06 AM Jan Filipiak 
wrote:

> Hi Andrew,
>
> thanks for your message, you missed my point.
>
> Mirrormaker collocation with target is for sure correct.
> But then group coordination happens across WAN which is unnecessary.
> And I request to be thought about again.
> I made a PR back then for zk Consumer to allow having 2 zookeeper
> connects. One for group coordination one for broker and topic discovery.
>
> I am requesting this to be added to the kip so that the target cluster
> can become the group coordinator.
>
>
>
> On 16.10.2018 15:04, Andrew Otto wrote:
> >> I would generally say a LAN is better than a WAN for doing group
> >> coordinaton
> >
> > For sure, but a LAN is better than a WAN for producing messages too.  If
> > there is network congestion during network production, messages will be
> > dropped.  With MirrorMaker currently, you can either skip these dropped
> > messages, or have the MirrorMaker processes themselves die on produce
> > failure, which will also cause (a series) of MirrorMaker consumer
> > rebalances.
> >
> >
> >
> >
> >
> >
> > On Tue, Oct 16, 2018 at 7:58 AM Jan Filipiak 
> > wrote:
> >
> >> Hi,
> >>
> >> Currently MirrorMaker is usually run collocated with the target cluster.
> >> This is all nice and good. But one big obstacle in this was
> >> always that group coordination happened on the source cluster. So when
> >> then network was congested, you sometimes loose group membership and
> >> have to rebalance and all this.
> >>
> >> So one big request from we would be the support of having coordination
> >> cluster != source cluster.
> >>
> >> I would generally say a LAN is better than a WAN for doing group
> >> coordinaton and there is no reason we couldn't have a group consuming
> >> topics from a different cluster and committing offsets to another one
> >> right?
> >>
> >> Other than that. It feels like the KIP has too much features where many
> >> of them are not really wanted and counter productive but I will just
> >> wait and see how the discussion goes.
> >>
> >> Best Jan
> >>
> >>
> >> On 15.10.2018 18:16, Ryanne Dolan wrote:
> >>> Hey y'all!
> >>>
> >>> Please take a look at KIP-382:
> >>>
> >>>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0
> >>>
> >>> Thanks for your feedback and support.
> >>>
> >>> Ryanne
> >>>
> >>
> >
>


Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2018-10-16 Thread Jan Filipiak

Hi Andrew,

thanks for your message, but you missed my point.

Mirrormaker collocation with the target is for sure correct.
But then group coordination happens across the WAN, which is unnecessary,
and I ask that this be reconsidered.
I made a PR back then for the zk Consumer to allow having two zookeeper
connects: one for group coordination, one for broker and topic discovery.


I am requesting this to be added to the kip so that the target cluster
can become the group coordinator.



On 16.10.2018 15:04, Andrew Otto wrote:

I would generally say a LAN is better than a WAN for doing group
coordinaton


For sure, but a LAN is better than a WAN for producing messages too.  If
there is network congestion during network production, messages will be
dropped.  With MirrorMaker currently, you can either skip these dropped
messages, or have the MirrorMaker processes themselves die on produce
failure, which will also cause (a series) of MirrorMaker consumer
rebalances.






On Tue, Oct 16, 2018 at 7:58 AM Jan Filipiak 
wrote:


Hi,

Currently MirrorMaker is usually run collocated with the target cluster.
This is all nice and good. But one big obstacle in this was
always that group coordination happened on the source cluster. So when
then network was congested, you sometimes loose group membership and
have to rebalance and all this.

So one big request from we would be the support of having coordination
cluster != source cluster.

I would generally say a LAN is better than a WAN for doing group
coordinaton and there is no reason we couldn't have a group consuming
topics from a different cluster and committing offsets to another one
right?

Other than that. It feels like the KIP has too much features where many
of them are not really wanted and counter productive but I will just
wait and see how the discussion goes.

Best Jan


On 15.10.2018 18:16, Ryanne Dolan wrote:

Hey y'all!

Please take a look at KIP-382:



https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0


Thanks for your feedback and support.

Ryanne







Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2018-10-16 Thread Andrew Otto
> I would generally say a LAN is better than a WAN for doing group
> coordinaton

For sure, but a LAN is better than a WAN for producing messages too.  If
there is network congestion during production, messages will be
dropped.  With MirrorMaker currently, you can either skip these dropped
messages, or have the MirrorMaker processes themselves die on produce
failure, which will also cause (a series of) MirrorMaker consumer
rebalances.






On Tue, Oct 16, 2018 at 7:58 AM Jan Filipiak 
wrote:

> Hi,
>
> Currently MirrorMaker is usually run collocated with the target cluster.
> This is all nice and good. But one big obstacle in this was
> always that group coordination happened on the source cluster. So when
> then network was congested, you sometimes loose group membership and
> have to rebalance and all this.
>
> So one big request from we would be the support of having coordination
> cluster != source cluster.
>
> I would generally say a LAN is better than a WAN for doing group
> coordinaton and there is no reason we couldn't have a group consuming
> topics from a different cluster and committing offsets to another one
> right?
>
> Other than that. It feels like the KIP has too much features where many
> of them are not really wanted and counter productive but I will just
> wait and see how the discussion goes.
>
> Best Jan
>
>
> On 15.10.2018 18:16, Ryanne Dolan wrote:
> > Hey y'all!
> >
> > Please take a look at KIP-382:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0
> >
> > Thanks for your feedback and support.
> >
> > Ryanne
> >
>


[jira] [Created] (KAFKA-7515) Trogdor - Add Consumer Group Benchmark Specification

2018-10-16 Thread Stanislav Kozlovski (JIRA)
Stanislav Kozlovski created KAFKA-7515:
--

 Summary: Trogdor - Add Consumer Group Benchmark Specification
 Key: KAFKA-7515
 URL: https://issues.apache.org/jira/browse/KAFKA-7515
 Project: Kafka
  Issue Type: Improvement
Reporter: Stanislav Kozlovski
Assignee: Stanislav Kozlovski


Trogdor's `ConsumeBenchWorker` and `ConsumeBenchSpec` currently take specific 
topic partitions and assign a consumer to them 
(https://github.com/apache/kafka/blob/509dd95ebbf03681ea680a84b8436814ba3e7541/tools/src/main/java/org/apache/kafka/trogdor/workload/ConsumeBenchWorker.java#L125).

It is useful to have functionality that supports consumer group usage since 
most Kafka consumers in practice use consumer groups. This will allow for 
benchmarking more real-life scenarios like spinning up multiple consumers in a 
consumer group via spawning multiple Trogdor agents (or multiple consumers in 
one agent if https://issues.apache.org/jira/browse/KAFKA-7514 gets accepted)
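
For illustration, the difference amounts to assign() vs. subscribe() on the
consumer. A sketch (property values are placeholders):
{code:java}
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "trogdor-bench-group");
props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);

// Today's worker pins explicit partitions, bypassing group coordination:
consumer.assign(Arrays.asList(new TopicPartition("foo1", 0)));

// Consumer-group mode would subscribe instead (a consumer uses one mode or the
// other, never both), letting the coordinator balance partitions across all
// members sharing the group.id:
// consumer.subscribe(Arrays.asList("foo1", "foo2", "foo3"));
{code}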



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7514) Trogdor - Support Multiple Threads in ConsumeBenchWorker

2018-10-16 Thread Stanislav Kozlovski (JIRA)
Stanislav Kozlovski created KAFKA-7514:
--

 Summary: Trogdor - Support Multiple Threads in ConsumeBenchWorker
 Key: KAFKA-7514
 URL: https://issues.apache.org/jira/browse/KAFKA-7514
 Project: Kafka
  Issue Type: Improvement
Reporter: Stanislav Kozlovski
Assignee: Stanislav Kozlovski


Trogdor's ConsumeBenchWorker currently uses only two threads - one for the 
StatusUpdater:
{code:java}
this.statusUpdaterFuture = executor.scheduleAtFixedRate(
new StatusUpdater(latencyHistogram, messageSizeHistogram), 1, 1, 
TimeUnit.MINUTES);
{code}
and one for the consumer task itself
{code:java}
executor.submit(new ConsumeMessages(partitions));
{code}
A sample ConsumeBenchSpec specification in JSON looks like this:
{code:java}
{
"class": "org.apache.kafka.trogdor.workload.ConsumeBenchSpec",
"durationMs": 1000,
"consumerNode": "node0",
"bootstrapServers": "localhost:9092",
"maxMessages": 100,
"activeTopics": {
"foo[1-3]": {
"numPartitions": 3,
"replicationFactor": 1
}
}
}
{code}
 

 
h2. Motivation


This does not make the best use of machines with multiple cores. It would be 
useful if there were a way to configure the ConsumeBenchSpec to use multiple 
threads and spawn multiple consumers. This would also allow the 
ConsumeBenchWorker to achieve higher throughput through parallelism.

 
h2. Proposal

Add a new `consumerThreads` property to the ConsumeBenchSpec allowing you to 
run multiple consumers in parallel.



h2. Changes

 

By default, it will have a value of 1.
`activeTopics` will still be defined in the same way; its topic partitions will 
be evenly assigned to the consumers in a round-robin fashion (see the sketch 
after the example below).
For example, if we have this configuration
{code:java}
{
"class": "org.apache.kafka.trogdor.workload.ConsumeBenchSpec",
"durationMs": 1000,
"consumerNode": "node0",
"bootstrapServers": "localhost:9092",
"maxMessages": 100,
"consumerThreads": 2,
"activeTopics": {
"foo[1-4]": {
"numPartitions": 4,
"replicationFactor": 1
}
}
}{code}
consumer thread 1 will be assigned partitions [foo1, foo3]
consumer thread 2 will be assigned partitions [foo2, foo4]

and the ConsumeBenchWorker will spawn 4 threads in total in the executor (2 for 
every consumer)
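
A minimal sketch of that round-robin assignment (hypothetical helper, not part 
of any patch):
{code:java}
import java.util.ArrayList;
import java.util.List;

static List<List<String>> assignRoundRobin(List<String> topics, int consumerThreads) {
    List<List<String>> assignments = new ArrayList<>();
    for (int i = 0; i < consumerThreads; i++) {
        assignments.add(new ArrayList<>());
    }
    for (int i = 0; i < topics.size(); i++) {
        assignments.get(i % consumerThreads).add(topics.get(i));
    }
    // for topics [foo1, foo2, foo3, foo4] and 2 threads:
    // [[foo1, foo3], [foo2, foo4]]
    return assignments;
}
{code}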




h3. Status

The way the worker reports its status will be updated as well. 
A ConsumeBenchWorker shows the following status when queried with 
`./bin/trogdor.sh client --show-tasks localhost:8889`

 
{code:java}
"tasks" : { "consume_bench_19938" : { "state" : "DONE", "spec" : { "class" : 
"org.apache.kafka.trogdor.workload.ConsumeBenchSpec", 
...
"status" : { "totalMessagesReceived" : 190, "totalBytesReceived" : 98040, 
"averageMessageSizeBytes" : 516, "averageLatencyMs" : 449.0, "p50LatencyMs" : 
449, "p95LatencyMs" : 449, "p99LatencyMs" : 449 } },{code}
We will change it to show the status of every separate consumer and the topic 
partitions it was assigned to
{code:java}
"tasks" : { "consume_bench_19938" : { "state" : "DONE", "spec" : { "class" : 
"org.apache.kafka.trogdor.workload.ConsumeBenchSpec", 
...
"status" : { 
"consumer-1":
{
"assignedPartitions": ["foo1", "foo3"],
"totalMessagesReceived" : 190, "totalBytesReceived" : 98040, 
"averageMessageSizeBytes" : 516, "averageLatencyMs" : 449.0, "p50LatencyMs" : 
449, "p95LatencyMs" : 449, "p99LatencyMs" : 449 
}
"consumer-2":
{
"assignedPartitions": ["foo2", "foo4"],
"totalMessagesReceived" : 190, "totalBytesReceived" : 98040, 
"averageMessageSizeBytes" : 516, "averageLatencyMs" : 449.0, "p50LatencyMs" : 
449, "p95LatencyMs" : 449, "p99LatencyMs" : 449 
}
} 

},{code}
 

 
h2. Backwards Compatibility

This change should be mostly backwards-compatible. If `consumerThreads` is 
not passed, only one consumer will be created and the round-robin assignor 
will assign every partition to it.

The only change will be in the format of the reported status. Even with one 
consumer, we will still show a status similar to
{code:java}
"status" : { "consumer-1": { "assignedPartitions": ["foo1", "foo3"], 
"totalMessagesReceived" : 190, "totalBytesReceived" : 98040, 
"averageMessageSizeBytes" : 516, "averageLatencyMs" : 449.0, "p50LatencyMs" : 
449, "p95LatencyMs" : 449, "p99LatencyMs" : 449 }
}
{code}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2018-10-16 Thread Jan Filipiak

Hi,

Currently MirrorMaker is usually run collocated with the target cluster.
This is all nice and good. But one big obstacle in this was always that
group coordination happened on the source cluster. So when the network was
congested, you sometimes lose group membership and have to rebalance and
all this.


So one big request from us would be the support of having coordination 
cluster != source cluster.


I would generally say a LAN is better than a WAN for doing group 
coordination, and there is no reason we couldn't have a group consuming 
topics from a different cluster and committing offsets to another one right?
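
A rough sketch of what I mean (placeholder addresses; note this only moves
offset storage to the target cluster, group membership and rebalancing would
still need real support in the framework):

import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Fetch from the source cluster with a group-less, manually assigned consumer.
Properties src = new Properties();
src.put("bootstrap.servers", "source-cluster:9092");
src.put("enable.auto.commit", "false");
src.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
src.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
KafkaConsumer<byte[], byte[]> fetcher = new KafkaConsumer<>(src);
fetcher.assign(Collections.singletonList(new TopicPartition("topic", 0)));

// Store progress on the target cluster's group coordinator instead.
Properties tgt = new Properties();
tgt.put("bootstrap.servers", "target-cluster:9092");
tgt.put("group.id", "mirror-group");
tgt.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
tgt.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
KafkaConsumer<byte[], byte[]> committer = new KafkaConsumer<>(tgt);

ConsumerRecords<byte[], byte[]> records = fetcher.poll(Duration.ofSeconds(1));
// ... produce the fetched records to the target cluster here ...
Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
for (TopicPartition tp : records.partitions()) {
    List<ConsumerRecord<byte[], byte[]>> recs = records.records(tp);
    offsets.put(tp, new OffsetAndMetadata(recs.get(recs.size() - 1).offset() + 1));
}
committer.commitSync(offsets); // offsets now live on the target cluster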


Other than that, it feels like the KIP has too many features, where many 
of them are not really wanted and counterproductive, but I will just 
wait and see how the discussion goes.


Best Jan


On 15.10.2018 18:16, Ryanne Dolan wrote:

Hey y'all!

Please take a look at KIP-382:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0

Thanks for your feedback and support.

Ryanne



Build failed in Jenkins: kafka-trunk-jdk8 #3143

2018-10-16 Thread Apache Jenkins Server
See 

--
[...truncated 505 B...]
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 0ec25ebdf19c9adcec3b8c9e00b8af4da55c49d9
error: Could not read 4c9d49bd3bb8381e95bba4e3223c0d6a3c3c8e22
error: Could not read 4866c33ac309ba5cc098a02948253f55a83666a3
error: Could not read d6e84ec38ab359596f6021dc8cbf39bb07ad44e9
error: Could not read 84acf1a667700d6e7aa417d0a3a0c44b4eda1a26
error: Could not read 88823c6016ea2e306340938994d9e122abf3c6c0
remote: Enumerating objects: 2429, done.
remote: Counting objects:   0% (1/2429) ... remote: Counting objects:  55% (1336/2429)

[jira] [Created] (KAFKA-7513) Flaky test SaslAuthenticatorFailureDelayTest.testInvalidPasswordSaslPlain

2018-10-16 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-7513:
-

 Summary: Flaky test 
SaslAuthenticatorFailureDelayTest.testInvalidPasswordSaslPlain
 Key: KAFKA-7513
 URL: https://issues.apache.org/jira/browse/KAFKA-7513
 Project: Kafka
  Issue Type: Bug
  Components: security
Affects Versions: 2.1.0
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 2.1.0


Have seen this test fail quite a few times in PR builds (e.g. 
https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/123):

{code}
java.lang.AssertionError: expected: but was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.kafka.common.network.NetworkTestUtils.waitForChannelClose(NetworkTestUtils.java:114)
at 
org.apache.kafka.common.security.authenticator.SaslAuthenticatorFailureDelayTest.createAndCheckClientConnectionFailure(SaslAuthenticatorFailureDelayTest.java:223)
at 
org.apache.kafka.common.security.authenticator.SaslAuthenticatorFailureDelayTest.createAndCheckClientAuthenticationFailure(SaslAuthenticatorFailureDelayTest.java:212)
at 
org.apache.kafka.common.security.authenticator.SaslAuthenticatorFailureDelayTest.testInvalidPasswordSaslPlain(SaslAuthenticatorFailureDelayTest.java:115)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2018-10-16 Thread Eno Thereska
This update is much needed, thank you! Could you comment on how your
approach compares with other open source tools like Uber's uReplicator or
the recently open-sourced Mirus from Salesforce? (
https://engineering.salesforce.com/open-sourcing-mirus-3ec2c8a38537). I
strongly believe MirrorMaker itself needs an upgrade, so I'm not
questioning that; my question is about the technical side of the solution.

Thanks
Eno

On Mon, Oct 15, 2018 at 11:19 PM Ryanne Dolan  wrote:

> Rhys, thanks for your enthusiasm!
>
> In your example, us-west.us-east.us-central.us-west.topic is an invalid
> "remote topic" name because us-west appears twice. MM2 will not replicate
> us-east.us-central.us-west.topic into us-west a second time, because the
> source topic already has us-west in the prefix. This is what I mean by
> "cycle detection" -- cyclical replication does not result in infinite
> recursion.
>
> It's important to note that MM2 does NOT disallow this sort of cycle, it
> just knows how to deal with them properly.
>
> Also notice this is done at the topic level, not per record. The records
> don't need any special header or anything for this cycle detection
> mechanism to work.
>
> Thanks!
> Ryanne
>
> On Mon, Oct 15, 2018 at 3:40 PM McCaig, Rhys 
> wrote:
>
> > Hi Ryanne,
> >
> > This KIP is fantastic. It provides a great vision for how MirrorMaker
> > should evolve in the Kafka project.
> >
> > I have a question on cycle detection - In a scenario where I have 3
> > clusters replicating between each other, it seems it may be easy to
> > misconfigure the connectors if auto topic creation is turned on so that
> > records become replicated to increasingly longer topic names (until the
> > topic name limit is reached). Consider clusters us-west, us-central,
> > us-east:
> >
> > us-west: topic
> > us-central: us-west.topic
> > us-east: us-central.us-west.topic
> > us-west: us-east.us-central.us-west.topic
> > us-central: us-west.us-east.us-central.us-west.topic
> >
> > I’m not sure whether this scenario would actually justify implementing
> > additional measures to avoid such a configuration, rather than ensuring
> > that the documentation is clear on how to avoid such scenarios - would be
> > good to hear what others think on this.
> >
> > Excited to see the discussion on this one.
> >
> > Rhys
> >
> > > On Oct 15, 2018, at 9:16 AM, Ryanne Dolan 
> wrote:
> > >
> > > Hey y'all!
> > >
> > > Please take a look at KIP-382:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0
> > >
> > > Thanks for your feedback and support.
> > >
> > > Ryanne
> >
> >
>
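
For illustration, here is a minimal sketch of the prefix-based cycle check
described above. The helper below is hypothetical (it is not MM2's actual
code) and assumes cluster aliases never contain dots:

{code}
import java.util.Arrays;
import java.util.List;

public class CycleCheck {
    // A "remote topic" carries its replication path as a dot-separated
    // prefix of cluster aliases, e.g. "us-east.us-central.us-west.topic".
    // Replicating into a cluster whose alias already appears in that
    // prefix would create a cycle, so it is skipped.
    static boolean wouldCycle(String sourceTopic, String targetCluster) {
        String[] parts = sourceTopic.split("\\.");
        List<String> prefix = Arrays.asList(parts).subList(0, parts.length - 1);
        return prefix.contains(targetCluster);
    }

    public static void main(String[] args) {
        // us-west already appears in the prefix, so this is not replicated:
        System.out.println(wouldCycle("us-east.us-central.us-west.topic", "us-west")); // true
        // no cycle here:
        System.out.println(wouldCycle("us-east.us-central.topic", "us-west")); // false
    }
}
{code}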


Build failed in Jenkins: kafka-trunk-jdk8 #3142

2018-10-16 Thread Apache Jenkins Server
See 


Changes:

[colin] KAFKA-6764: Improve the whitelist command-line option for

[github] KAFKA-7080 and KAFKA-7222: Cleanup overlapping KIP changes (#5804)

[mjsax] KAFKA-7223: Suppression documentation (#5787)

[wangguoz] MINOR: Doc changes for KIP-312 (#5789)

--
[...truncated 2.35 MB...]
org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 

Re: KAFKA-6654 custom SSLContext

2018-10-16 Thread Rajini Sivaram
Hi Clement,

I think many of the previous discussions around a custom SslFactory were to
do with config options (e.g. avoiding cleartext passwords, configuring an
alias to choose the certificate, OCSP support etc.). For those, additional
config options make the most sense because they are generic and useful in
many environments, so we should support them without having to write Java
code.

But perhaps in your case where you have an existing implementation you want
to reuse (or share an implementation across multiple applications),
customization of SslFactory would be the simplest approach. I would suggest
writing a KIP that describes the motivation and rejected alternatives to
move this forward.
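
For concreteness, here is one possible shape such a pluggable factory could
take; the interface and config names below are purely illustrative, not an
agreed design:

{code}
import java.util.Map;
import javax.net.ssl.SSLContext;

// Hypothetical sketch only: an interface a KIP could propose, selected via
// a new config such as "ssl.factory.class". Custom keys under a reserved
// prefix could be passed through to the implementation unchanged.
public interface CustomSslContextFactory {

    // Receives the channel configs, including any custom keys.
    void configure(Map<String, ?> configs);

    // Returns the application-supplied SSLContext for this channel.
    SSLContext createSslContext();
}
{code}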

Regards,

Rajini

On Tue, Oct 16, 2018 at 4:07 AM Colin McCabe  wrote:

> Hi Clement,
>
> Thanks for the clarification.  Perhaps a pluggable interface makes sense
> here.  Maybe someone more familiar with the SSL code can comment.
>
> best,
> Colin
>
>
> On Mon, Oct 15, 2018, at 19:53, Pellerin, Clement wrote:
> > OK, I can see why passing an instance is not language neutral.
> > All the libraries I can think of accept the SSLSocketFactory, but they
> > most likely don't support C or Python.
> >
> > My exact use case is to reuse the SSLContext configured in my
> > application outside Kafka.
> > I'm afraid no amount of extra configuration properties can achieve that.
> > It appears the creator of KAFKA-6654 agrees with me.
> >
> > I could solve my problem if I could convince SslChannelBuilder to create
> > my own SslFactory implementation.
> > The Kafka config already contains properties that hold class names.
> > Like I suggested before, we could have a property for the class name
> > that implements an SslFactory interface.
> > I would also need to pass custom config parameters to my SslFactory
> > implementation without causing warnings.
> > By default, the SslFactory implementation would be the one built into
> > Kafka which uses all the Kafka ssl properties.
> >
> > Is that acceptable to resolve KAFKA-6654?
> > Can you think of a better solution?
> >
> >
> > -Original Message-
> > From: Colin McCabe [mailto:cmcc...@apache.org]
> > Sent: Monday, October 15, 2018 7:58 PM
> > To: dev@kafka.apache.org
> > Subject: Re: KAFKA-6654 custom SSLContext
> >
> > In general Kafka makes an effort to be language-neutral so that Kafka
> > clients can be implemented on platforms other than Java.  For example,
> > we have things like librdkafka which allow people to access Kafka from C
> > or Python.  Unless I'm misunderstanding something, giving direct access
> > to the SSLContext and SSLSocketFactory seems like it would make that
> > kind of compatibility harder, if it were even still possible.  I'm
> > curious if there's a way to do this by adding configuration entries for
> > what you need?
> >
> > best,
> > Colin
> >
> >
> > On Mon, Oct 15, 2018, at 13:20, Pellerin, Clement wrote:
> > > I am new to this mailing list. I am not sure what I should do next.
> > > Should I create a KIP to discuss this?
> > >
> > > -Original Message-
> > > From: Pellerin, Clement
> > > Sent: Wednesday, October 10, 2018 4:38 PM
> > > To: dev@kafka.apache.org
> > > Subject: KAFKA-6654 custom SSLContext
> > >
> > > KAFKA-6654 correctly states that there will never be enough
> > > configuration parameters to fully configure the SSLContext/
> > > SSLSocketFactory created by Kafka.
> > > For example, in our case, we need an alias to choose the key in the
> > > keystore, and we need an implementation of OCSP.
> > > KAFKA-6654 suggests to make the creation of the SSLContext a pluggable
> > > implementation.
> > > Maybe by declaring an interface and passing the name of an
> > > implementation class in a new parameter.
> > >
> > > Many libraries solve this problem by accepting the SSLContextFactory
> > > instance from the application.
> > > How about passing the instance as the value of a runtime configuration
> > > parameter?
> > > If that parameter is set, all other ssl.* parameters would be ignored.
> > > Obviously, this parameter could only be set programmatically.
> > >
> > > I would like to hear the proposed solution by the Kafka maintainers.
> > >
> > > I can help implementing a patch if there is an agreement on the
> desired
> > > solution.
>


[jira] [Created] (KAFKA-7512) java.lang.ClassCastException: java.util.Date cannot be cast to java.lang.Number

2018-10-16 Thread Rohit Kumar Gupta (JIRA)
Rohit Kumar Gupta created KAFKA-7512:


 Summary: java.lang.ClassCastException: java.util.Date cannot be 
cast to java.lang.Number
 Key: KAFKA-7512
 URL: https://issues.apache.org/jira/browse/KAFKA-7512
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.0.0
Reporter: Rohit Kumar Gupta


Steps:


bash-4.2# kafka-avro-console-producer --broker-list 10.75.103.242:9092 --topic 
connect_10oct_03 --property schema.registry.url=http://10.75.103.242:8081 
--property 
value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"},{"name":"f2","type":["null",{"type":"long","logicalType":"timestamp-millis","connect.version":1,"connect.name":"org.apache.kafka.connect.data.Timestamp"}],"default":null}]}'
{"f1": "value1","f2": {"null":null}}
{"f1": "value1","f2": {"long":1022}}

 

bash-4.2# kafka-avro-console-producer --broker-list 10.75.103.242:9092 --topic 
connect_10oct_03 --property schema.registry.url=http://10.75.103.242:8081 
--property 
value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"},{"name":"f2","type":["null",{"type":"long","logicalType":"timestamp-millis","connect.version":1,"connect.name":"org.apache.kafka.connect.data.Timestamp"}],"default":null},{"name":"f3","type":"string","default":"green"}]}'
{"f1": "value1","f2": {"null":null},"f3":"toto"}
{"f1": "value1","f2": {"null":null},"f3":"toto"}
{"f1": "value1","f2": {"null":null},"f3":"toto"}
{"f1": "value1","f2": {"long":12343536},"f3":"tutu"}

 

bash-4.2# kafka-avro-console-producer --broker-list 10.75.103.242:9092 --topic 
connect_10oct_03 --property schema.registry.url=http://10.75.103.242:8081 
--property 
value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"},{"name":"f2","type":["null",{"type":"long","logicalType":"timestamp-millis","connect.version":1,"connect.name":"org.apache.kafka.connect.data.Timestamp"}],"default":null}]}'
{"f1": "value1","f2": {"null":null}}
{"f1": "value1","f2": {"long":1022}}

 

bash-4.2# curl -X POST -H "Accept: application/json" -H "Content-Type: 
application/json" http://localhost:8083/connectors -d 
'{"name":"hdfs-sink-connector-10oct-03", "config": 
{"connector.class":"io.confluent.connect.hdfs.HdfsSinkConnector", 
"tasks.max":"1", "topics":"connect_10oct_03", "hdfs.url": 
"hdfs://10.75.103.242:8020/tmp/", "flush.size":"1", "hive.integration": "true", 
"hive.metastore.uris": "thrift://10.75.103.242:9083", "hive.database": "rohit", 
"schema.compatibility": "BACKWARD"}}'
{"name":"hdfs-sink-connector-10oct-03","config":{"connector.class":"io.confluent.connect.hdfs.HdfsSinkConnector","tasks.max":"1","topics":"connect_10oct_03","hdfs.url":"hdfs://10.75.103.242:8020/tmp/","flush.size":"1","hive.integration":"true","hive.metastore.uris":"thrift://10.75.103.242:9083","hive.database":"rohit","schema.compatibility":"BACKWARD","name":"hdfs-sink-connector-10oct-03"},"tasks":[],"type":null}bash-4.2#
bash-4.2#

 

bash-4.2# curl 
http://localhost:8083/connectors/hdfs-sink-connector-10oct-03/status
{"name":"hdfs-sink-connector-10oct-03","connector":\{"state":"RUNNING","worker_id":"10.75.103.242:8083"},"tasks":[\{"state":"FAILED","trace":"org.apache.kafka.connect.errors.ConnectException:
 Exiting WorkerSinkTask due to unrecoverable exception.\n\tat 
org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:586)\n\tat
 
org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)\n\tat
 
org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)\n\tat
 
org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)\n\tat
 org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)\n\tat 
org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)\n\tat 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat 
java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
 java.lang.Thread.run(Thread.java:748)\nCaused by: 
java.lang.ClassCastException: java.util.Date cannot be cast to 
java.lang.Number\n\tat 
org.apache.kafka.connect.data.SchemaProjector.projectPrimitive(SchemaProjector.java:164)\n\tat
 
org.apache.kafka.connect.data.SchemaProjector.projectRequiredSchema(SchemaProjector.java:91)\n\tat
 
org.apache.kafka.connect.data.SchemaProjector.project(SchemaProjector.java:73)\n\tat
 
org.apache.kafka.connect.data.SchemaProjector.projectStruct(SchemaProjector.java:110)\n\tat
 
org.apache.kafka.connect.data.SchemaProjector.projectRequiredSchema(SchemaProjector.java:93)\n\tat
 
org.apache.kafka.connect.data.SchemaProjector.project(SchemaProjector.java:73)\n\tat
 
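The failing frame is SchemaProjector.projectPrimitive, which (per the trace)
casts the value to java.lang.Number, while a timestamp-millis logical type
surfaces as java.util.Date. A trivial standalone illustration of the cast in
question, independent of Connect:

{code}
import java.util.Date;

public class CastRepro {
    public static void main(String[] args) {
        Object value = new Date(); // what a Timestamp logical-type field carries
        Number n = (Number) value; // java.lang.ClassCastException: java.util.Date
                                   // cannot be cast to java.lang.Number
        System.out.println(n);     // never reached
    }
}
{code}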

Build failed in Jenkins: kafka-2.1-jdk8 #26

2018-10-16 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-7080 and KAFKA-7222: Cleanup overlapping KIP changes (#5804)

[matthias] KAFKA-7223: Suppression documentation (#5787)

[wangguoz] MINOR: Doc changes for KIP-312 (#5789)

--
[...truncated 437.71 KB...]
return valueSerde != null ? valueSerde.serializer() : null;
 ^
  required: Serializer<V>
  found:    Serializer
  where V is a type-variable:
V extends Object declared in class OptimizableRepartitionNode
:68:
 warning: [unchecked] unchecked conversion
return  valueSerde != null ? valueSerde.deserializer() : null;
^
  required: Deserializer<V>
  found:    Deserializer
  where V is a type-variable:
V extends Object declared in class OptimizableRepartitionNode
:78:
 warning: [unchecked] unchecked conversion
final Serializer<K> keySerializer = keySerde != null ? 
keySerde.serializer() : null;

  ^
  required: Serializer<K>
  found:    Serializer
  where K is a type-variable:
K extends Object declared in class OptimizableRepartitionNode
:79:
 warning: [unchecked] unchecked conversion
final Deserializer<K> keyDeserializer = keySerde != null ? 
keySerde.deserializer() : null;

^
  required: Deserializer<K>
  found:    Deserializer
  where K is a type-variable:
K extends Object declared in class OptimizableRepartitionNode
:131:
 warning: [deprecation] fetchAll(long,long) in ReadOnlyWindowStore has been 
deprecated
KeyValueIterator<Windowed<K>, V> fetchAll(long timeFrom, long timeTo);
 ^
  where K,V are type-variables:
K extends Object declared in interface ReadOnlyWindowStore
V extends Object declared in interface ReadOnlyWindowStore
:113:
 warning: [deprecation] fetch(K,K,long,long) in ReadOnlyWindowStore has been 
deprecated
KeyValueIterator<Windowed<K>, V> fetch(K from, K to, long timeFrom, long 
timeTo);
 ^
  where K,V are type-variables:
K extends Object declared in interface ReadOnlyWindowStore
V extends Object declared in interface ReadOnlyWindowStore
:91:
 warning: [deprecation] fetch(K,long,long) in ReadOnlyWindowStore has been 
deprecated
WindowStoreIterator<V> fetch(K key, long timeFrom, long timeTo);
   ^
  where K,V are type-variables:
K extends Object declared in interface ReadOnlyWindowStore
V extends Object declared in interface ReadOnlyWindowStore
:70:
 warning: [deprecation] fetchAll(long,long) in ReadOnlyWindowStore has been 
deprecated
public KeyValueIterator<Windowed<Bytes>, byte[]> fetchAll(final long 
timeFrom, final long timeTo) {
 ^
  where K,V are type-variables:
K extends Object declared in interface ReadOnlyWindowStore
V extends Object declared in interface ReadOnlyWindowStore
:60:
 warning: [deprecation] fetch(K,K,long,long) in ReadOnlyWindowStore has been 
deprecated
public KeyValueIterator<Windowed<Bytes>, byte[]> fetch(final Bytes keyFrom, 
final Bytes keyTo, final long from, final long to) {
 ^
  where K,V are type-variables:
K extends Object declared in interface ReadOnlyWindowStore
V extends Object declared in interface ReadOnlyWindowStore
:55:
 warning: [deprecation] fetch(K,long,long) in ReadOnlyWindowStore has been 
deprecated
public WindowStoreIterator<byte[]> fetch(final Bytes key, final long 

Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-16 Thread Jungtaek Lim
Congrats Mani!
On Tue, 16 Oct 2018 at 1:45 PM Abhimanyu Nagrath 
wrote:

> Congratulations Manikumar
>
> On Tue, Oct 16, 2018 at 10:09 AM Satish Duggana 
> wrote:
>
> > Congratulations Mani!
> >
> >
> > On Fri, Oct 12, 2018 at 9:41 PM Colin McCabe  wrote:
> > >
> > > Congratulations, Manikumar!  Well done.
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Fri, Oct 12, 2018, at 01:25, Edoardo Comar wrote:
> > > > Well done Manikumar !
> > > > --
> > > >
> > > > Edoardo Comar
> > > >
> > > > IBM Event Streams
> > > > IBM UK Ltd, Hursley Park, SO21 2JN
> > > >
> > > >
> > > >
> > > >
> > > > From:   "Matthias J. Sax" 
> > > > To: dev 
> > > > Cc: users 
> > > > Date:   11/10/2018 23:41
> > > > Subject:Re: [ANNOUNCE] New Committer: Manikumar Reddy
> > > >
> > > >
> > > >
> > > > Congrats!
> > > >
> > > >
> > > > On 10/11/18 2:31 PM, Yishun Guan wrote:
> > > > > Congrats Manikumar!
> > > > > On Thu, Oct 11, 2018 at 1:20 PM Sönke Liebau
> > > > >  wrote:
> > > > >>
> > > > >> Great news, congratulations Manikumar!!
> > > > >>
> > > > >> On Thu, Oct 11, 2018 at 9:08 PM Vahid Hashemian
> > > > 
> > > > >> wrote:
> > > > >>
> > > > >>> Congrats Manikumar!
> > > > >>>
> > > > >>> On Thu, Oct 11, 2018 at 11:49 AM Ryanne Dolan <
> > ryannedo...@gmail.com>
> > > > >>> wrote:
> > > > >>>
> > > >  Bravo!
> > > > 
> > > >  On Thu, Oct 11, 2018 at 1:48 PM Ismael Juma 
> > > > wrote:
> > > > 
> > > > > Congratulations Manikumar! Thanks for your continued
> > contributions.
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Thu, Oct 11, 2018 at 10:39 AM Jason Gustafson
> > > > 
> > > > > wrote:
> > > > >
> > > > >> Hi all,
> > > > >>
> > > > >> The PMC for Apache Kafka has invited Manikumar Reddy as a
> > committer
> > > > >>> and
> > > > > we
> > > > >> are
> > > > >> pleased to announce that he has accepted!
> > > > >>
> > > > >> Manikumar has contributed 134 commits including significant
> > work to
> > > > >>> add
> > > > >> support for delegation tokens in Kafka:
> > > > >>
> > > > >> KIP-48:
> > > > >>
> > > > >>
> > > > >
> > > > 
> > > > >>>
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> > > >
> > > > >> KIP-249
> > > > >> <
> > > > >
> > > > 
> > > > >>>
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+KafkaKIP-249
> > > >
> > > > >>
> > > > >> :
> > > > >>
> > > > >>
> > > > >
> > > > 
> > > > >>>
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> > > >
> > > > >>
> > > > >> He has broad experience working with many of the core
> > components in
> > > >  Kafka
> > > > >> and he has reviewed over 80 PRs. He has also made huge
> progress
> > > > > addressing
> > > > >> some of our technical debt.
> > > > >>
> > > > >> We appreciate the contributions and we are looking forward to
> > more.
> > > > >> Congrats Manikumar!
> > > > >>
> > > > >> Jason, on behalf of the Apache Kafka PMC
> > > > >>
> > > > >
> > > > 
> > > > >>>
> > > > >>
> > > > >>
> > > > >> --
> > > > >> Sönke Liebau
> > > > >> Partner
> > > > >> Tel. +49 179 7940878
> > > > >> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel -
> > Germany
> > > >
> > > > [attachment "signature.asc" deleted by Edoardo Comar/UK/IBM]
> > > >
> > > >
> > > > Unless stated otherwise above:
> > > > IBM United Kingdom Limited - Registered in England and Wales with
> > number
> > > > 741598.
> > > > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire
> PO6
> > 3AU
> >
>


Throwing away prefetched records optimisation.

2018-10-16 Thread Zahari Dichev
Hi there Kafka developers,

I am currently trying to find a solution to an issue that has been
manifesting itself in the Akka Streams implementation of the Kafka
connector. When it comes to consuming messages, the implementation relies
heavily on the fact that we can pause and resume partitions. In some
situations, when a single consumer instance is shared among several
streams, we might end up frequently pausing and resuming a set of topic
partitions, as that is the main facility that allows us to implement back
pressure. This, however, has certain disadvantages, especially when the
streams sharing the consumer differ in terms of processing speed.

To articulate the issue more clearly, imagine that a consumer maintains
assignments for two topic partitions *TP1* and *TP2*. This consumer is
shared by two streams - S1 and S2. So effectively when we have demand from
only one of the streams - *S1*, we will pause one of the topic partitions
*TP2* and call *poll()* on the consumer to only retrieve the records for
the demanded topic partition - *TP1*. The result of that is all the records
that have been prefetched for *TP2* are now thrown away by the fetcher ("*Not
returning fetched records for assigned partition TP2 since it is no longer
fetchable"*). If we extrapolate that to multiple streams sharing the same
consumer, we might quickly end up in a situation where we throw prefetched
data quite often. This does not seem like the most efficient approach and
in fact produces quite a lot of overlapping fetch requests as illustrated
in the following issue:

https://github.com/akka/alpakka-kafka/issues/549
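
A minimal sketch of that pattern against the existing KafkaConsumer API
(topic names, partitions and the demand logic are illustrative):

{code}
import java.time.Duration;
import java.util.Arrays;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class PausedPrefetchDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp1 = new TopicPartition("topic", 0); // demanded by S1
            TopicPartition tp2 = new TopicPartition("topic", 1); // no demand from S2
            consumer.assign(Arrays.asList(tp1, tp2));

            // Only S1 has demand, so TP2 is paused before polling. Anything
            // already prefetched for TP2 is dropped by the Fetcher and has
            // to be fetched again after TP2 is resumed.
            consumer.pause(Collections.singleton(tp2));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            System.out.println(records.count()); // TP1 records only
            consumer.resume(Collections.singleton(tp2));
        }
    }
}
{code}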

I am writing this email to get some initial opinion on a KIP I was thinking
about. What if we give the clients of the Consumer API a bit more control
of what to do with this prefetched data. Two options I am wondering about:

1. Introduce a configuration setting, such as
*"return-prefetched-data-for-paused-topic-partitions"* (defaulting to
false; we would have to think of a better name), which when set to true
will return what is prefetched instead of throwing it away on calling
*poll()*. Since the amount of data involved is bounded by the maximum size
of the prefetch, we can control the maximum number of records returned.
The client of the consumer API can then be responsible for keeping that
data around and using it when appropriate (i.e. when demand is present)

2. Introduce a facility to pass in a buffer into which the prefetched
records are drained when poll is called and paused partitions have some
prefetched records.

Any opinions on the matter are welcome. Thanks a lot !

Zahari Dichev


[jira] [Created] (KAFKA-7511) Fetch offset 2287878 is out of range for partition CommitTrn-0, resetting offset

2018-10-16 Thread Hitesh Kumar (JIRA)
Hitesh Kumar created KAFKA-7511:
---

 Summary:  Fetch offset 2287878 is out of range for partition 
CommitTrn-0, resetting offset
 Key: KAFKA-7511
 URL: https://issues.apache.org/jira/browse/KAFKA-7511
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 2.0.0
Reporter: Hitesh Kumar


[Consumer clientId=consumer-1, groupId=CommitTrn] Fetch offset 2287878 is out 
of range for partition CommitTrn-0, resetting offset
903 --- [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] 
o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-1, 
groupId=CommitTrn] Fetch offset 2287878 is out of range for partition 
CommitTrn-0, resetting offset
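
For context, what the consumer does after "resetting offset" is governed by
its auto.offset.reset policy. A minimal illustration; the values shown are
the standard options and the rest of the config is omitted:

{code}
import java.util.Properties;

public class ResetPolicyExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("group.id", "CommitTrn");
        // "latest" is the default; "earliest" rewinds to the log start;
        // "none" throws OffsetOutOfRangeException instead of resetting.
        props.put("auto.offset.reset", "earliest");
    }
}
{code}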



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk8 #3141

2018-10-16 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk11 #36

2018-10-16 Thread Apache Jenkins Server
See 


Changes:

[mjsax] MINOR: Doc changes for KIP-324 (#5788)

[mjsax] MINOR: doc changes for KIP-372 (#5790)

[colin] KAFKA-6764: Improve the whitelist command-line option for

[github] KAFKA-7080 and KAFKA-7222: Cleanup overlapping KIP changes (#5804)

[mjsax] KAFKA-7223: Suppression documentation (#5787)

[wangguoz] MINOR: Doc changes for KIP-312 (#5789)

--
[...truncated 1.84 MB...]

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeDateRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeDateRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanTrue STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanTrue PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidTargetType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidTargetType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
testConfigMixWholeAndFieldTransformation STARTED

org.apache.kafka.connect.transforms.CastTest > 
testConfigMixWholeAndFieldTransformation PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessUnsupportedType STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessUnsupportedType PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeySchemaless 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeySchemaless 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless PASSED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType STARTED