Re: [VOTE] KIP-367 Introduce close(Duration) to Producer and AdminClient instead of close(long, TimeUnit)

2018-09-13 Thread Manikumar
+1 (non-binding)

Thanks for the KIP.

On Thu, Sep 13, 2018 at 10:38 AM Harsha  wrote:

> +1 (Binding).
> Thanks,
> Harsha
>
> On Wed, Sep 12, 2018, at 9:06 PM, vito jeng wrote:
> > +1
> >
> >
> >
> > ---
> > Vito
> >
> > On Mon, Sep 10, 2018 at 4:52 PM, Dongjin Lee  wrote:
> >
> > > +1. (Non-binding)
> > >
> > > On Mon, Sep 10, 2018 at 4:13 AM Matthias J. Sax  >
> > > wrote:
> > >
> > > > Thanks a lot for the KIP.
> > > >
> > > > +1 (binding)
> > > >
> > > >
> > > > -Matthias
> > > >
> > > >
> > > > On 9/8/18 11:27 AM, Chia-Ping Tsai wrote:
> > > > > Hi All,
> > > > >
> > > > > I'd like to put KIP-367 to the vote.
> > > > >
> > > > >
> > > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=89070496
> > > > >
> > > > > --
> > > > > Chia-Ping
> > > > >
> > > >
> > > >
> > >
> > > --
> > > *Dongjin Lee*
> > >
> > > *A hitchhiker in the mathematical world.*
> > >
> > > *github:  github.com/dongjinleekr
> > > linkedin:
> kr.linkedin.com/in/dongjinleekr
> > > slideshare:
> > > www.slideshare.net/dongjinleekr
> > > *
> > >
>
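For readers following the vote: KIP-367 replaces `close(long, TimeUnit)` with `close(Duration)` on the Producer and AdminClient. A minimal migration sketch, using only the JDK (the helper below is hypothetical, not part of the KIP):

```java
import java.time.Duration;
import java.util.concurrent.TimeUnit;

public class CloseTimeoutMigration {
    // Hypothetical helper: convert a legacy (long, TimeUnit) timeout into the
    // java.time.Duration that the new close(Duration) overload accepts.
    static Duration toDuration(long timeout, TimeUnit unit) {
        return Duration.ofMillis(unit.toMillis(timeout));
    }

    public static void main(String[] args) {
        // Old call site: producer.close(5, TimeUnit.SECONDS);
        // New call site: producer.close(Duration.ofSeconds(5));
        Duration d = toDuration(5, TimeUnit.SECONDS);
        System.out.println(d.toMillis()); // prints 5000
    }
}
```

Existing call sites keep their numeric timeout; only the wrapper type changes.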


[jira] [Resolved] (KAFKA-7394) Allow OffsetsForLeaderEpoch requests with topic describe ACL (KIP-320)

2018-09-13 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7394.

   Resolution: Fixed
Fix Version/s: 2.1.0

> Allow OffsetsForLeaderEpoch requests with topic describe ACL (KIP-320)
> --
>
> Key: KAFKA-7394
> URL: https://issues.apache.org/jira/browse/KAFKA-7394
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.1.0
>
>
> As part of KIP-320, we will allow the OffsetsForLeaderEpoch request to be 
> sent from clients. Currently this API is protected with the cluster resource. 
> We need to extend support to also allow Topic Describe.
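The change can be pictured as relaxing the authorization check from cluster-only to cluster-or-topic-describe. A purely illustrative sketch of that logic (not Kafka's actual authorizer code; all names are hypothetical):

```java
import java.util.Set;

public class EpochRequestAuthSketch {
    enum Acl { CLUSTER_ACTION, TOPIC_DESCRIBE }

    // Before KAFKA-7394: only the cluster resource authorized the request.
    static boolean authorizedBefore(Set<Acl> acls) {
        return acls.contains(Acl.CLUSTER_ACTION);
    }

    // After KAFKA-7394: Topic Describe on the requested topic also suffices,
    // so ordinary consumers can send OffsetsForLeaderEpoch (KIP-320).
    static boolean authorizedAfter(Set<Acl> acls) {
        return acls.contains(Acl.CLUSTER_ACTION) || acls.contains(Acl.TOPIC_DESCRIBE);
    }

    public static void main(String[] args) {
        Set<Acl> consumerAcls = Set.of(Acl.TOPIC_DESCRIBE);
        System.out.println(authorizedBefore(consumerAcls)); // prints false
        System.out.println(authorizedAfter(consumerAcls));  // prints true
    }
}
```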



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: A question about kafka streams API

2018-09-13 Thread Yui Yoi
Hi Adam and John, thank you for your effort!
We are implementing full idempotency in our projects, so that's nothing to
worry about.
As to what John said - we only have one partition; I personally verified
that.
So as I wrote in section 2. of my first message in this conversation - my
stream should have processed the "asd" message again because it is not
committed yet.
That's why I suspect it has something to do with the stream's cache; maybe
something like:
1. "asd" got processed and stored in the cache
2. "{}" got processed and cached too.
3. the commit interval makes the stream commit the offset of "{}"

By the way, if you want to run my application you should:
1. open it in some java editor as a maven project
2. run it as a normal java application
3. set up kafka server & zookeeper on localhost
4. then you can send the above messages via cli

John - even if you send "asd1", "asd2", "asd3", you will see in the logs
that my app takes the latest each time.

Of course that's far beyond what I can ask from you guys to do; thanks a
lot for your help.
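The batch-commit behavior discussed in this thread can be simulated without a broker. This is an illustrative sketch, not Kafka's actual code: the offset is committed only after a full batch succeeds, so a failure mid-batch leaves the offset at the batch start and the whole batch is re-read on restart (at-least-once):

```java
import java.util.List;

public class AtLeastOnceSketch {
    long committedOffset = 0; // offset of the next message to consume

    // Process a batch; commit the offset only if every message succeeds.
    // Returns true if the batch was fully processed and committed.
    boolean processBatch(List<String> batch) {
        for (String msg : batch) {
            if ("bad".equals(msg)) {
                // Aborting here means the offset is NOT committed,
                // so the whole batch is re-read on restart.
                return false;
            }
        }
        committedOffset += batch.size(); // commit only after the full batch
        return true;
    }

    public static void main(String[] args) {
        AtLeastOnceSketch c = new AtLeastOnceSketch();
        c.processBatch(List.of("a", "b", "bad")); // fails: offset stays 0
        System.out.println(c.committedOffset);    // prints 0
        c.processBatch(List.of("a", "b", "c"));   // succeeds: offset becomes 3
        System.out.println(c.committedOffset);    // prints 3
    }
}
```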

On Wed, Sep 12, 2018 at 8:14 PM John Roesler  wrote:

> Hi!
>
> As Adam said, if you throw an exception during processing, it should cause
> Streams to shut itself down and *not* commit that message. Therefore, when
> you start up again, it should again attempt to process that same message
> (and shut down again).
>
> Within a single partition, messages are processed in order, so a bad
> message will block the queue, and you should not see subsequent messages
> get processed.
>
> However, if your later message "{}" goes to a different partition than the
> bad message, then there's no relationship between them, and the later,
> good, message might get processed.
>
> Does that help?
> -John
>
> On Wed, Sep 12, 2018 at 8:38 AM Adam Bellemare 
> wrote:
>
> > Hi Yui Yoi
> >
> >
> > Keep in mind that Kafka consumers don't traditionally request only a
> > single message at a time, but instead request them in batches. This
> > allows for much higher throughput, but does result in "at-least-once"
> > processing. Generally what will happen in this scenario is the following:
> >
> > 1) Client requests the next set of messages from offset (t). For example,
> > assume it gets 10 messages and message 6 is "bad".
> > 2) The client's processor will then process the messages one at a time.
> > Note that the offsets are not committed after the message is processed,
> but
> > only at the end of the batch.
> > 3) The bad message is hit by the processor. At this point you can decide
> > to skip the message, throw an exception, etc.
> > 4a) If you decide to skip the message, processing will continue. Once all
> > 10 messages are processed, the new offset (t+10) is committed back
> > to Kafka.
> > 4b) If you decide to throw an exception and terminate your app, you will
> > have still processed the messages that came before the bad message.
> Because
> > the offset (t+10) is not committed, the next time you start the app it
> will
> > consume from offset t, and those messages will be processed again. This
> is
> > "at-least-once" processing.
> >
> >
> > Now, if you need exactly-once processing, you have two choices -
> > 1) Use Kafka Streams with exactly-once semantics (though, as I am not
> > familiar with your framework, it may support it as well).
> > 2) Use idempotent practices (ie: it doesn't matter if the same messages
> get
> > processed more than once).
> >
> >
> > Hope this helps -
> >
> > Adam
> >
> >
> > On Wed, Sep 12, 2018 at 7:59 AM, Yui Yoi  wrote:
> >
> > > Hi Adam,
> > > Thanks a lot for the rapid response, it did help!
> > >
> > > Let me ask one more simple question, though: can I make a stream
> > > application get stuck on an invalid message and not consume any further
> > > messages?
> > >
> > > Thanks again
> > >
> > > On Wed, Sep 12, 2018 at 2:35 PM Adam Bellemare <
> adam.bellem...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi Yui Yoi
> > > >
> > > > Preface: I am not familiar with the spring framework.
> > > >
> > > > "Earliest" when it comes to consuming from Kafka means, "Start
> reading
> > > from
> > > > the first message in the topic, *if there is no offset stored for
> that
> > > > consumer group*". It sounds like you are expecting it to re-read each
> > > > message whenever a new message comes in. This is not going to happen,
> > as
> > > > there will be a committed offset and "earliest" will no longer be
> used.
> > > If
> > > > you were to use "latest" instead, if a consumer is started that does
> > not
> > > > have a valid offset, it would use the very latest message in the
> topic
> > as
> > > > the starting offset for message consumption.
> > > >
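The auto.offset.reset semantics Adam describes reduce to a simple rule, sketched here as plain logic (illustrative, not Kafka's code): "earliest"/"latest" apply only when the group has no committed offset.

```java
import java.util.Optional;

public class OffsetResetSketch {
    // "earliest"/"latest" apply ONLY when the consumer group has no committed
    // offset; otherwise the committed offset always wins.
    static long startingOffset(Optional<Long> committed, String reset,
                               long beginningOffset, long endOffset) {
        if (committed.isPresent()) {
            return committed.get(); // reset policy is ignored
        }
        return "earliest".equals(reset) ? beginningOffset : endOffset;
    }

    public static void main(String[] args) {
        // No committed offset: "earliest" starts at the beginning of the log.
        System.out.println(startingOffset(Optional.empty(), "earliest", 0, 100)); // prints 0
        // Committed offset present: it wins regardless of the reset policy.
        System.out.println(startingOffset(Optional.of(42L), "earliest", 0, 100)); // prints 42
    }
}
```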
> > > > Now, if you are using the same consumer group each time you run the
> > > > application (which it seems is true, as you have "test-group"
> hardwired
> > > in
> > > > your application.yml), but you do not tear down your local cluster
> and
> > > > clear out its state, you will indeed 

Build failed in Jenkins: kafka-trunk-jdk8 #2962

2018-09-13 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: KafkaAdminClient Java 8 code cleanup (#5594)

[ismael]  KAFKA-6926: Simplified some logic to eliminate some suppressions of

--
[...truncated 2.69 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED


[jira] [Resolved] (KAFKA-2139) Add a separate controller message queue with higher priority on broker side

2018-09-13 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2139.
--
Resolution: Duplicate

Closing this in-favor of KAFKA-4453/KIP-291.

> Add a separate controller message queue with higher priority on broker side 
> ---
>
> Key: KAFKA-2139
> URL: https://issues.apache.org/jira/browse/KAFKA-2139
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Major
>
> This ticket is supposed to be working together with KAFKA-2029. 
> There are two issues with the current controller-to-broker messages:
> 1. On the controller side the messages are sent without synchronization.
> 2. On the broker side the controller messages share the same queue as client 
> messages.
> The problem here is that brokers process the controller messages for the same 
> partition at different times, and the variation could be big. This causes 
> unnecessary data loss and prolongs preferred leader election / controlled 
> shutdown / partition reassignment, etc.
> KAFKA-2029 was trying to add a boundary between messages for different 
> partitions. For example, before leader migration for the previous partition 
> finishes, the leader migration for the next partition won't begin.
> This ticket is trying to let brokers process controller messages faster. The 
> idea is to have a separate queue to hold controller messages; if there are 
> controller messages, the KafkaApi thread will first take care of those messages, 
> otherwise it will process messages from clients.
> These two tickets are not the ultimate solution to the current controller 
> problems, but just mitigate them with minor code changes. Moving forward, we 
> still need to think about rewriting the controller in a cleaner way.
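The "process controller messages first" idea can be illustrated with a priority queue. This is only a sketch of the concept; the KIP-291 work that superseded this ticket uses a separate queue with dedicated handling, not a single priority queue:

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityRequestQueueSketch {
    static class Request {
        final String payload;
        final boolean fromController;
        Request(String payload, boolean fromController) {
            this.payload = payload;
            this.fromController = fromController;
        }
    }

    public static void main(String[] args) {
        // Controller requests compare as "smaller", so they are dequeued first
        // even when client requests arrived earlier.
        PriorityBlockingQueue<Request> queue = new PriorityBlockingQueue<>(11,
            Comparator.comparing((Request r) -> !r.fromController));

        queue.put(new Request("client-produce", false));
        queue.put(new Request("controller-leader-and-isr", true));
        queue.put(new Request("client-fetch", false));

        System.out.println(queue.poll().payload); // prints controller-leader-and-isr
    }
}
```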





Build failed in Jenkins: kafka-trunk-jdk10 #481

2018-09-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-7394; OffsetsForLeaderEpoch supports topic describe access 
(#5634)

--
[...truncated 2.22 MB...]
org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildIndex STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildIndex PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureApplicationAndRecordMetadata STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureApplicationAndRecordMetadata PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureRecordsOutputToChildByName STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureRecordsOutputToChildByName PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCapturePunctuator 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCapturePunctuator 
PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task :streams:upgrade-system-tests-0110:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:testClasses
> Task :streams:upgrade-system-tests-0110:checkstyleTest
> Task :streams:upgrade-system-tests-0110:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:test
> Task :streams:upgrade-system-tests-10:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-10:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-10:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:compileTestJava
> Task :streams:upgrade-system-tests-10:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:testClasses
> Task :streams:upgrade-system-tests-10:checkstyleTest
> Task :streams:upgrade-system-tests-10:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:test
> Task :streams:upgrade-system-tests-11:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-11:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-11:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-11:checkstyleMain NO-SOURCE
> Task 

[jira] [Resolved] (KAFKA-1825) leadership election state is stale and never recovers without all brokers restarting

2018-09-13 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-1825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1825.
--
Resolution: Auto Closed

Closing inactive issue. Please reopen if the issue still exists in newer 
versions.

> leadership election state is stale and never recovers without all brokers 
> restarting
> 
>
> Key: KAFKA-1825
> URL: https://issues.apache.org/jira/browse/KAFKA-1825
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1, 0.8.2.0
>Reporter: Joe Stein
>Priority: Critical
> Attachments: KAFKA-1825.executable.tgz
>
>
> I am not sure what the cause is here, but I can succinctly and repeatedly 
> reproduce this issue. I tried with 0.8.1.1 and 0.8.2-beta and both behave in 
> the same manner.
> The code to reproduce this is here 
> https://github.com/stealthly/go_kafka_client/tree/wipAsyncSaramaProducer/producers
> scenario: 3 brokers, 1 zookeeper, 1 client (each AWS c3.2xlarge instances)
> create topic 
> producer client sends in 380,000 messages/sec (attached executable)
> everything is fine until you kill -SIGTERM broker #2. 
> Then at that point the state goes bad for that topic. Even trying to use the 
> console producer (with the sarama producer off) doesn't work.
> Doing a describe, the yoyoma topic looks fine; ran preferred leadership 
> election, lots of issues... still can't produce... only resolution is bouncing 
> all brokers :(
> root@ip-10-233-52-139:/opt/kafka_2.10-0.8.1.1# bin/kafka-topics.sh 
> --zookeeper 10.218.189.234:2181 --describe
> Topic:yoyoma  PartitionCount:36   ReplicationFactor:3 Configs:
>   Topic: yoyoma   Partition: 0Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 1Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 2Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 3Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 4Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 5Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 6Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 7Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 8Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 9Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 10   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 11   Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 12   Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 13   Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 14   Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 15   Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 16   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 17   Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 18   Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 19   Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 20   Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 21   Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 22   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 23   Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 24   Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 25   Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 26   Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 27   Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 28   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 29   Leader: 1   Replicas: 3,2,1 Isr: 1,3
>   Topic: yoyoma   Partition: 30   Leader: 1   Replicas: 1,2,3 Isr: 1,3
>   Topic: yoyoma   Partition: 31   Leader: 1   Replicas: 2,3,1 Isr: 1,3
>   Topic: yoyoma   Partition: 32   Leader: 1   Replicas: 3,1,2 Isr: 1,3
>   Topic: yoyoma   Partition: 33   Leader: 1   Replicas: 1,3,2 Isr: 1,3
>   Topic: yoyoma   Partition: 34   Leader: 1   Replicas: 2,1,3 Isr: 1,3
>   Topic: yoyoma   Partition: 35   Leader: 1   Replicas: 3,2,1 Isr: 1,3
> root@ip-10-233-52-139:/opt/kafka_2.10-0.8.1.1# 
> bin/kafka-preferred-replica-election.sh --zookeeper 10.218.189.234:2181
> Successfully started preferred replica election for partitions 
> Set([yoyoma,29], [yoyoma,14], [yoyoma,22], [yoyoma,15], 

Build failed in Jenkins: kafka-trunk-jdk10 #480

2018-09-13 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: KafkaAdminClient Java 8 code cleanup (#5594)

[ismael]  KAFKA-6926: Simplified some logic to eliminate some suppressions of

--
[...truncated 2.22 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName STARTED

org.apache.kafka.streams.MockProcessorContextTest > 

Re: [DISCUSS] KIP-372: Naming Joins and Grouping

2018-09-13 Thread Eno Thereska
Hi folks,

I know we don't normally have a "Related work" section in KIPs, but
sometimes I find it useful to see what others have done in similar cases.
Since this will be important for rolling re-deployments, I wonder what
other frameworks like Flink (or Samza) have done in these cases. Perhaps
they have done nothing, in which case it's fine to do this from first
principles, but IMO it would be good to know just to make sure we're
heading in the right direction.

Also I don't get a good feel for how much work this will be for an end user
who is doing the rolling deployment, perhaps an end-to-end example would
help.

Thanks
Eno

On Thu, Sep 13, 2018 at 6:22 AM, Matthias J. Sax 
wrote:

> Follow up comments:
>
> 1) We should either use `[app-id]-this|other-[join-name]-repartition` or
> `[app-id]-[join-name]-left|right-repartition`, but we should not change
> the pattern depending on whether the user specifies a name or not. I am fine
> with both patterns---just want to make sure we stick with one.
>
> 2) I didn't see why we would need to do this in this KIP. KIP-307 seems
> to be orthogonal, and thus KIP-372 should not change any processor
> names, but KIP-307 should define a holistic strategy for all processors.
> Otherwise, we might end up with different strategies or revert what we
> decide in this KIP if it's not compatible with KIP-307.
>
>
> -Matthias
>
>
> On 9/12/18 6:28 PM, Guozhang Wang wrote:
> > Hello Bill,
> >
> > I made a pass over your proposal and here are some questions:
> >
> > 1. For Joined names, the current proposal is to define the repartition
> > topic names as
> >
> > * [app-id]-this-[join-name]-repartition
> >
> > * [app-id]-other-[join-name]-repartition
> >
> >
> > And if [join-name] is not specified, the names stay the same, which is:
> >
> > * [previous-processor-name]-repartition for both Stream-Stream (S-S)
> join
> > and S-T join
> >
> > I think it is more natural to rename it to
> >
> > * [app-id]-[join-name]-left-repartition
> >
> > * [app-id]-[join-name]-right-repartition
> >
> >
> > 2. I'd suggest to use the name to also define the corresponding processor
> > names accordingly, in addition to the repartition topic names. Note that
> > for joins, this may be overlapping with KIP-307
> >  307%3A+Allow+to+define+custom+processor+names+with+KStreams+DSL>
> > as
> > it also has proposals for defining processor names for join operators as
> > well.
> >
> > 3. Could you also specify how this would affect the optimization for
> > merging multiple repartition topics?
> >
> > 4. In the "Compatibility, Deprecation, and Migration Plan" section, could
> > you also mention the following scenarios, if any of the upgrade path
> would
> > be changed:
> >
> >  a) changing user DSL code: under which scenarios users can now do a
> > rolling bounce instead of resetting applications.
> >
> >  b) upgrading from older version to new version, with all the names
> > specified, and with optimization turned on. E.g. say we have the code
> > written in 2.1 with all names specified, and now upgrading to 2.2 with
> new
> > optimizations that may potentially change the repartition topics. Is that
> > always safe to do?
> >
> >
> >
> > Guozhang
> >
> >
> > On Wed, Sep 12, 2018 at 4:52 PM, Bill Bejeck  wrote:
> >
> >> All, I'd like to start a discussion on KIP-372 for the naming of joins
> and
> >> grouping operations in Kafka Streams.
> >>
> >> The KIP page can be found here:
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> 372%3A+Naming+Joins+and+Grouping
> >>
> >> I look forward to feedback and comments.
> >>
> >> Thanks,
> >> Bill
> >>
> >
> >
> >
>
>


[jira] [Resolved] (KAFKA-2219) reassign partition fails with offset 0 being asked for but obviously not there

2018-09-13 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2219.
--
Resolution: Auto Closed

Closing inactive issue.  Please reopen if the issue still exists in newer 
versions.

> reassign partition fails with offset 0 being asked for but obviously not there
> --
>
> Key: KAFKA-2219
> URL: https://issues.apache.org/jira/browse/KAFKA-2219
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Joe Stein
>Priority: Critical
>
> may be related to there being no data anymore in the partition
> [2015-05-23 15:51:05,506]  122615762 [request-expiration-task] ERROR 
> kafka.server.KafkaApis  - [KafkaApi-10206101] Error when processing fetch 
> request for partition [cs.sensor.wrapped,44] offset 0 from fo
> llower with correlation id 3
> kafka.common.OffsetOutOfRangeException: Request for offset 0 but we only have 
> log segments in the range 1339916216 to 1339916216.
> at kafka.log.Log.read(Log.scala:380)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:530)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:476)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:471)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map2.foreach(Map.scala:130)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:471)
> at 
> kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:783)
> at 
> kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:765)
> at 
> kafka.server.RequestPurgatory$ExpiredRequestReaper.run(RequestPurgatory.scala:216)
> at java.lang.Thread.run(Thread.java:722)
> This topic was stopped from having data produced to it; we waited for the log 
> cleanup (so there was no data in the partitions) and then reassigned. It may also 
> be related to some brokers being in a bad state.
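The exception above comes from a simple bounds check against the log's segment range. Below is a minimal, self-contained sketch of that check; the `LogView` class and its fields are hypothetical stand-ins for illustration, not Kafka's actual `Log` implementation:

```java
// Minimal sketch of the offset bounds check behind the error above.
// LogView and its fields are hypothetical stand-ins, not Kafka's Log class.
public class LogView {
    private final long logStartOffset;
    private final long logEndOffset;

    public LogView(long logStartOffset, long logEndOffset) {
        this.logStartOffset = logStartOffset;
        this.logEndOffset = logEndOffset;
    }

    public void read(long offset) {
        if (offset < logStartOffset || offset > logEndOffset) {
            throw new IllegalArgumentException(
                "Request for offset " + offset + " but we only have log segments in the range "
                    + logStartOffset + " to " + logEndOffset + ".");
        }
        // a real read of the segment data would happen here
    }

    public static void main(String[] args) {
        // After log cleanup, only offset 1339916216 remains in the partition.
        LogView log = new LogView(1339916216L, 1339916216L);
        try {
            log.read(0); // a follower fetching offset 0, as in the report above
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```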



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-1509) Restart of destination broker after unreplicated partition move leaves partitions without leader

2018-09-13 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1509.
--
Resolution: Auto Closed

Not able to reproduce on newer versions; it might have been fixed there. Please 
reopen if the issue still exists in newer versions.

> Restart of destination broker after unreplicated partition move leaves 
> partitions without leader
> 
>
> Key: KAFKA-1509
> URL: https://issues.apache.org/jira/browse/KAFKA-1509
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Albert Strasheim
>Priority: Major
>  Labels: newbie++
> Attachments: controller2.log
>
>
> This should be reasonably easy to reproduce.
> Make a Kafka cluster with a few machines.
> Create a topic with partitions on these machines. No replication.
> Bring up one more Kafka node.
> Move some or all of the partitions onto this new broker:
> kafka-reassign-partitions.sh --generate --zookeeper zk:2181 
> --topics-to-move-json-file move.json --broker-list 
> kafka-reassign-partitions.sh --zookeeper 36cfqd1.in.cfops.it:2181 
> --reassignment-json-file reassign.json --execute
> Wait until broker is the leader for all the partitions you moved.
> Send some data to the partitions. It all works.
> Shut down the broker that just received the data. Start it back up.
>  
> {code}
> Topic:testPartitionCount:2ReplicationFactor:1 Configs:
>   Topic: test Partition: 0Leader: -1  Replicas: 7 Isr: 
>   Topic: test Partition: 1Leader: -1  Replicas: 7 Isr: 
> {code}
> Leader for topic test never gets elected even though this node is the only 
> node that knows about the topic.
> Some logs:
> {code}
> Jun 26 23:18:07 localhost kafka: INFO [Socket Server on Broker 7], Started 
> (kafka.network.SocketServer)
> Jun 26 23:18:07 localhost kafka: INFO [Socket Server on Broker 7], Started 
> (kafka.network.SocketServer)
> Jun 26 23:18:07 localhost kafka: INFO [ControllerEpochListener on 7]: 
> Initialized controller epoch to 53 and zk version 52 
> (kafka.controller.ControllerEpochListener)
> Jun 26 23:18:07 localhost kafka: INFO Will not load MX4J, mx4j-tools.jar is 
> not in the classpath (kafka.utils.Mx4jLoader$)
> Jun 26 23:18:07 localhost kafka: INFO Will not load MX4J, mx4j-tools.jar is 
> not in the classpath (kafka.utils.Mx4jLoader$)
> Jun 26 23:18:07 localhost kafka: INFO [Controller 7]: Controller starting up 
> (kafka.controller.KafkaController)
> Jun 26 23:18:07 localhost kafka: INFO conflict in /controller data: 
> {"version":1,"brokerid":7,"timestamp":"1403824687354"} stored data: 
> {"version":1,"brokerid":4,"timestamp":"1403297911725"} (kafka.utils.ZkUtils$)
> Jun 26 23:18:07 localhost kafka: INFO conflict in /controller data: 
> {"version":1,"brokerid":7,"timestamp":"1403824687354"} stored data: 
> {"version":1,"brokerid":4,"timestamp":"1403297911725"} (kafka.utils.ZkUtils$)
> Jun 26 23:18:07 localhost kafka: INFO [Controller 7]: Controller startup 
> complete (kafka.controller.KafkaController)
> Jun 26 23:18:07 localhost kafka: INFO Registered broker 7 at path 
> /brokers/ids/7 with address xxx:9092. (kafka.utils.ZkUtils$)
> Jun 26 23:18:07 localhost kafka: INFO Registered broker 7 at path 
> /brokers/ids/7 with address xxx:9092. (kafka.utils.ZkUtils$)
> Jun 26 23:18:07 localhost kafka: INFO [Kafka Server 7], started 
> (kafka.server.KafkaServer)
> Jun 26 23:18:07 localhost kafka: INFO [Kafka Server 7], started 
> (kafka.server.KafkaServer)
> Jun 26 23:18:07 localhost kafka: TRACE Broker 7 cached leader info 
> (LeaderAndIsrInfo:(Leader:3,ISR:3,LeaderEpoch:14,ControllerEpoch:53),ReplicationFactor:1),AllReplicas:3)
>  for partition [requests,0] in response to UpdateMetadata request sent by 
> controller 4 epoch 53 with correlation id 70 (state.change.logger)
> Jun 26 23:18:07 localhost kafka: TRACE Broker 7 cached leader info 
> (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:11,ControllerEpoch:53),ReplicationFactor:1),AllReplicas:1)
>  for partition [requests,13] in response to UpdateMetadata request sent by 
> controller 4 epoch 53 with correlation id 70 (state.change.logger)
> Jun 26 23:18:07 localhost kafka: TRACE Broker 7 cached leader info 
> (LeaderAndIsrInfo:(Leader:1,ISR:1,LeaderEpoch:4,ControllerEpoch:53),ReplicationFactor:2),AllReplicas:1,5)
>  for partition [requests_ipv6,5] in response to UpdateMetadata request sent 
> by controller 4 epoch 53 with correlation id 70 (state.change.logger)
> Jun 26 23:18:07 localhost kafka: TRACE Broker 7 cached leader info 
> (LeaderAndIsrInfo:(Leader:4,ISR:4,LeaderEpoch:13,ControllerEpoch:53),ReplicationFactor:2),AllReplicas:4,5)
>  for partition [requests_stored,7] in response to UpdateMetadata request sent 
> by controller 4 

Re: [VOTE] KIP-367 Introduce close(Duration) to Producer and AdminClient instead of close(long, TimeUnit)

2018-09-13 Thread Mickael Maison
+1 (non-binding)
Thanks!
On Thu, Sep 13, 2018 at 7:05 AM Manikumar  wrote:
>
> +1 (non-binding)
>
> Thanks for the KIP.
>
> On Thu, Sep 13, 2018 at 10:38 AM Harsha  wrote:
>
> > +1 (Binding).
> > Thanks,
> > Harsha
> >
> > On Wed, Sep 12, 2018, at 9:06 PM, vito jeng wrote:
> > > +1
> > >
> > >
> > >
> > > ---
> > > Vito
> > >
> > > On Mon, Sep 10, 2018 at 4:52 PM, Dongjin Lee  wrote:
> > >
> > > > +1. (Non-binding)
> > > >
> > > > On Mon, Sep 10, 2018 at 4:13 AM Matthias J. Sax  > >
> > > > wrote:
> > > >
> > > > > Thanks a lot for the KIP.
> > > > >
> > > > > +1 (binding)
> > > > >
> > > > >
> > > > > -Matthias
> > > > >
> > > > >
> > > > > On 9/8/18 11:27 AM, Chia-Ping Tsai wrote:
> > > > > > Hi All,
> > > > > >
> > > > > > I'd like to put KIP-367 to the vote.
> > > > > >
> > > > > >
> > > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > > action?pageId=89070496
> > > > > >
> > > > > > --
> > > > > > Chia-Ping
> > > > > >
> > > > >
> > > > >
> > > >
> > > > --
> > > > *Dongjin Lee*
> > > >
> > > > *A hitchhiker in the mathematical world.*
> > > >
> > > > *github:  github.com/dongjinleekr
> > > > linkedin:
> > kr.linkedin.com/in/dongjinleekr
> > > > slideshare:
> > > > www.slideshare.net/dongjinleekr
> > > > *
> > > >
> >
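For reference, the change being voted on swaps the two-argument timeout for a java.time based one. A hedged sketch of how caller code changes — the `Closable` interface below is a simplified stand-in for illustration, not the real Producer/AdminClient API:

```java
import java.time.Duration;
import java.util.concurrent.TimeUnit;

// Simplified stand-in for the close() surface affected by KIP-367;
// not the real Kafka interfaces.
public class CloseDurationExample {
    interface Closable {
        // Old style, slated for deprecation under the KIP:
        void close(long timeout, TimeUnit unit);

        // New style proposed by KIP-367:
        default void close(Duration timeout) {
            close(timeout.toMillis(), TimeUnit.MILLISECONDS);
        }
    }

    public static void main(String[] args) {
        Closable c = (timeout, unit) ->
            System.out.println("closing with timeout " + unit.toMillis(timeout) + " ms");
        c.close(Duration.ofSeconds(5)); // prints: closing with timeout 5000 ms
    }
}
```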


Re: [VOTE] KIP-110: Add Codec for ZStandard Compression

2018-09-13 Thread Mickael Maison
+1 (non binding)
Thanks for your perseverance, it's great to see this KIP finally
reaching completion!
On Thu, Sep 13, 2018 at 4:58 AM Harsha  wrote:
>
> +1 (binding).
>
> Thanks,
> Harsha
>
> On Wed, Sep 12, 2018, at 4:56 PM, Jason Gustafson wrote:
> > Great contribution! +1
> >
> > On Wed, Sep 12, 2018 at 10:20 AM, Manikumar 
> > wrote:
> >
> > > +1 (non-binding).
> > >
> > > Thanks for the KIP.
> > >
> > > On Wed, Sep 12, 2018 at 10:44 PM Ismael Juma  wrote:
> > >
> > > > Thanks for the KIP, +1 (binding).
> > > >
> > > > Ismael
> > > >
> > > > On Wed, Sep 12, 2018 at 10:02 AM Dongjin Lee  wrote:
> > > >
> > > > > Hello, I would like to start a VOTE on KIP-110: Add Codec for 
> > > > > ZStandard
> > > > > Compression.
> > > > >
> > > > > The KIP:
> > > > >
> > > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 110%3A+Add+Codec+for+ZStandard+Compression
> > > > > Discussion thread:
> > > > > https://www.mail-archive.com/dev@kafka.apache.org/msg88673.html
> > > > >
> > > > > Thanks,
> > > > > Dongjin
> > > > >
> > > > > --
> > > > > *Dongjin Lee*
> > > > >
> > > > > *A hitchhiker in the mathematical world.*
> > > > >
> > > > > *github:  github.com/dongjinleekr
> > > > > linkedin:
> > > > kr.linkedin.com/in/dongjinleekr
> > > > > slideshare:
> > > > > www.slideshare.net/dongjinleekr
> > > > > *
> > > > >
> > > >
> > >
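Once the KIP lands, enabling the new codec should amount to a one-line producer config change. A sketch, assuming the config value is `zstd` per the KIP (only the `Properties` handling runs here; no real `KafkaProducer` is constructed, and the broker address is a placeholder):

```java
import java.util.Properties;

// Sketch of enabling ZStandard compression on a producer, per KIP-110.
public class ZstdConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("compression.type", "zstd");            // value added by KIP-110
        // new KafkaProducer<>(props) would pick up the codec from this property
        System.out.println(props.getProperty("compression.type"));
    }
}
```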


[jira] [Created] (KAFKA-7407) Kafka Connect validation ignores errors for undefined configs

2018-09-13 Thread Andrey Pustovetov (JIRA)
Andrey Pustovetov created KAFKA-7407:


 Summary: Kafka Connect validation ignores errors for undefined 
configs
 Key: KAFKA-7407
 URL: https://issues.apache.org/jira/browse/KAFKA-7407
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 1.1.1
Reporter: Andrey Pustovetov


*Test case*

Perform validation of a custom connector that defines a {{ConfigDef}} with 
undefined dependent configs, i.e. it uses the {{ConfigDef.define()}} method where 
a config specified in the {{dependents}} argument is missing from this 
{{ConfigDef}}.

AR (actual result): validation successful
ER (expected result): validation failed with the error: {{Configuration is not defined}}

*Description*

There are two places where undefined configs can be obtained:
# The undefined dependent configs can be returned here: 
{{AbstractHerder.validateBasicConnectorConfig()}} -> {{ConfigDef.validateAll()}}
{code:java}
List<String> undefinedConfigKeys = undefinedDependentConfigs();
for (String undefinedConfigKey : undefinedConfigKeys) {
    ConfigValue undefinedConfigValue = new ConfigValue(undefinedConfigKey);
    undefinedConfigValue.addErrorMessage(undefinedConfigKey + " is referred in the dependents, but not defined.");
    undefinedConfigValue.visible(false);
    configValues.put(undefinedConfigKey, undefinedConfigValue);
}
{code}
# The undefined configs can be returned by custom code here: 
{{Connector.validate()}}

The {{AbstractHerder.generateResult()}} method already contains the check for 
undefined configs, but it doesn't increment the {{errorCount}} variable
{code:java}
int errorCount = 0;
List<ConfigInfo> configInfoList = new LinkedList<>();

Map<String, ConfigValue> configValueMap = new HashMap<>();
for (ConfigValue configValue : configValues) {
    String configName = configValue.name();
    configValueMap.put(configName, configValue);
    if (!configKeys.containsKey(configName)) {
        configValue.addErrorMessage("Configuration is not defined: " + configName);
        configInfoList.add(new ConfigInfo(null, convertConfigValue(configValue, null)));
    }
}
{code}
so Kafka Connect validation ignores the errors.

I suggest adding the following code after the error message is added:
{code:java}
errorCount += configValue.errorMessages().size();
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #2963

2018-09-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-7394; OffsetsForLeaderEpoch supports topic describe access 
(#5634)

--
[...truncated 2.69 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [DISCUSS] KIP-372: Naming Joins and Grouping

2018-09-13 Thread Matthias J. Sax
I don't know what Samza does; however, Flink requires users to specify
names, similar to this proposal, to be able to re-identify state in case
the topology gets altered between deployments.

Flink only has state to worry about. For Kafka Streams, it's
state plus repartition topics.


-Matthias

On 9/13/18 1:48 AM, Eno Thereska wrote:
> Hi folks,
> 
> I know we don't normally have a "Related work" section in KIPs, but
> sometimes I find it useful to see what others have done in similar cases.
> Since this will be important for rolling re-deployments, I wonder what
> other frameworks like Flink (or Samza) have done in these cases. Perhaps
> they have done nothing, in which case it's fine to do this from first
> principles, but IMO it would be good to know just to make sure we're
> heading in the right direction.
> 
> Also I don't get a good feel for how much work this will be for an end user
> who is doing the rolling deployment, perhaps an end-to-end example would
> help.
> 
> Thanks
> Eno
> 
> On Thu, Sep 13, 2018 at 6:22 AM, Matthias J. Sax 
> wrote:
> 
>> Follow up comments:
>>
>> 1) We should either use `[app-id]-this|other-[join-name]-repartition` or
>> `[app-id]-[join-name]-left|right-repartition`, but we should not change
>> the pattern depending on whether the user specifies a name or not. I am fine
>> with both patterns---just want to make sure we stick with one.
>>
>> 2) I didn't see why we would need to do this in this KIP. KIP-307 seems
>> to be orthogonal, and thus KIP-372 should not change any processor
>> names, but KIP-307 should define a holistic strategy for all processors.
>> Otherwise, we might end up with different strategies or revert what we
>> decide in this KIP if it's not compatible with KIP-307.
>>
>>
>> -Matthias
>>
>>
>> On 9/12/18 6:28 PM, Guozhang Wang wrote:
>>> Hello Bill,
>>>
>>> I made a pass over your proposal and here are some questions:
>>>
>>> 1. For Joined names, the current proposal is to define the repartition
>>> topic names as
>>>
>>> * [app-id]-this-[join-name]-repartition
>>>
>>> * [app-id]-other-[join-name]-repartition
>>>
>>>
>>> And if [join-name] is not specified, the names stay the same, which is:
>>>
>>> * [previous-processor-name]-repartition for both Stream-Stream (S-S)
>> join
>>> and S-T join
>>>
>>> I think it is more natural to rename it to
>>>
>>> * [app-id]-[join-name]-left-repartition
>>>
>>> * [app-id]-[join-name]-right-repartition
>>>
>>>
>>> 2. I'd suggest to use the name to also define the corresponding processor
>>> names accordingly, in addition to the repartition topic names. Note that
>>> for joins, this may be overlapping with KIP-307
>>> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-307%3A+Allow+to+define+custom+processor+names+with+KStreams+DSL>
>>> as
>>> it also has proposals for defining processor names for join operators as
>>> well.
>>>
>>> 3. Could you also specify how this would affect the optimization for
>>> merging multiple repartition topics?
>>>
>>> 4. In the "Compatibility, Deprecation, and Migration Plan" section, could
>>> you also mention the following scenarios, if any of the upgrade path
>> would
>>> be changed:
>>>
>>>  a) changing user DSL code: under which scenarios users can now do a
>>> rolling bounce instead of resetting applications.
>>>
>>>  b) upgrading from older version to new version, with all the names
>>> specified, and with optimization turned on. E.g. say we have the code
>>> written in 2.1 with all names specified, and now upgrading to 2.2 with
>> new
>>> optimizations that may potentially change the repartition topics. Is that
>>> always safe to do?
>>>
>>>
>>>
>>> Guozhang
>>>
>>>
>>> On Wed, Sep 12, 2018 at 4:52 PM, Bill Bejeck  wrote:
>>>
 All I'd like to start a discussion on KIP-372 for the naming of joins
>> and
 grouping operations in Kafka Streams.

 The KIP page can be found here:
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-
 372%3A+Naming+Joins+and+Grouping

 I look forward to feedback and comments.

 Thanks,
 Bill

>>>
>>>
>>>
>>
>>
> 



signature.asc
Description: OpenPGP digital signature


Re: [VOTE] KIP-367 Introduce close(Duration) to Producer and AdminClient instead of close(long, TimeUnit)

2018-09-13 Thread Bill Bejeck
+1

-Bill

On Thu, Sep 13, 2018 at 4:44 AM Mickael Maison 
wrote:

> +1 (non-binding)
> Thanks!
> On Thu, Sep 13, 2018 at 7:05 AM Manikumar 
> wrote:
> >
> > +1 (non-binding)
> >
> > Thanks for the KIP.
> >
> > On Thu, Sep 13, 2018 at 10:38 AM Harsha  wrote:
> >
> > > +1 (Binding).
> > > Thanks,
> > > Harsha
> > >
> > > On Wed, Sep 12, 2018, at 9:06 PM, vito jeng wrote:
> > > > +1
> > > >
> > > >
> > > >
> > > > ---
> > > > Vito
> > > >
> > > > On Mon, Sep 10, 2018 at 4:52 PM, Dongjin Lee 
> wrote:
> > > >
> > > > > +1. (Non-binding)
> > > > >
> > > > > On Mon, Sep 10, 2018 at 4:13 AM Matthias J. Sax <
> matth...@confluent.io
> > > >
> > > > > wrote:
> > > > >
> > > > > > Thanks a lot for the KIP.
> > > > > >
> > > > > > +1 (binding)
> > > > > >
> > > > > >
> > > > > > -Matthias
> > > > > >
> > > > > >
> > > > > > On 9/8/18 11:27 AM, Chia-Ping Tsai wrote:
> > > > > > > Hi All,
> > > > > > >
> > > > > > > I'd like to put KIP-367 to the vote.
> > > > > > >
> > > > > > >
> > > > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > > > action?pageId=89070496
> > > > > > >
> > > > > > > --
> > > > > > > Chia-Ping
> > > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > > --
> > > > > *Dongjin Lee*
> > > > >
> > > > > *A hitchhiker in the mathematical world.*
> > > > >
> > > > > *github:  github.com/dongjinleekr
> > > > > linkedin:
> > > kr.linkedin.com/in/dongjinleekr
> > > > > slideshare:
> > > > > www.slideshare.net/dongjinleekr
> > > > > *
> > > > >
> > >
>


[jira] [Created] (KAFKA-7408) Truncate to LSO on unclean leader election

2018-09-13 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-7408:
--

 Summary: Truncate to LSO on unclean leader election
 Key: KAFKA-7408
 URL: https://issues.apache.org/jira/browse/KAFKA-7408
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson


If an unclean leader is elected, we may lose committed transaction data. That 
alone is expected, but what is worse is that a transaction which was previously 
completed (either committed or aborted) may lose its marker and become 
dangling. The transaction coordinator will not know about the unclean leader 
election, so will not know to resend the transaction markers. Consumers with 
read_committed isolation will be stuck because the LSO cannot advance.

To keep this scenario from occurring, it would be better to have the unclean 
leader truncate to the LSO so that there are no dangling transactions. 
Truncating to the LSO is not alone sufficient because the markers which allowed 
the LSO advancement may be at higher offsets. What we can do is let the newly 
elected leader truncate to the LSO and then rewrite all the markers that 
followed it using its own leader epoch (to avoid divergence from followers).

The interesting cases are when an unclean leader election occurs while a 
transaction is ongoing.

1. If a producer is in the middle of a transaction commit, then the coordinator 
may still attempt to write transaction markers. This will either succeed or 
fail depending on the producer epoch in the unclean leader. If the epoch 
matches, then the WriteTxnMarker call will succeed, which will simply be 
ignored by the consumer. If the epoch doesn't match, the WriteTxnMarker call 
will fail and the transaction coordinator can potentially remove the partition 
from the transaction.

2. If a producer is still writing the transaction, then what happens depends on 
the producer state in the unclean leader. If no producer state has been lost, 
then the transaction can continue without impact. Otherwise, the producer will 
likely fail with an OUT_OF_ORDER_SEQUENCE error, which will cause the 
transaction to be aborted by the coordinator. That takes us back to the first 
case.

By truncating to the LSO, we ensure that transactions are either preserved in 
whole or removed from the log in whole. For an unclean leader 
election, that's probably as good as we can do, but it ensures that 
consumers will not be blocked by dangling transactions. The only remaining 
situation where a dangling transaction might be left is if one of the 
transaction state partitions has an unclean leader election.
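The LSO mechanics the ticket relies on can be illustrated with a toy model (a simplified stand-in for illustration only, not broker code): the LSO sits at the first offset of the earliest transaction without a commit/abort marker, so a lost marker pins it in place forever.

```java
import java.util.Map;
import java.util.TreeMap;

// Toy model of last-stable-offset (LSO) advancement: the LSO is bounded by the
// first offset of the earliest transaction whose marker has not been written.
// Simplified stand-in, not the broker's actual implementation.
public class LsoExample {
    public static void main(String[] args) {
        // transaction first offset -> true if its commit/abort marker was written
        Map<Long, Boolean> txnFirstOffsets = new TreeMap<>();
        txnFirstOffsets.put(10L, true);   // completed transaction
        txnFirstOffsets.put(20L, false);  // marker lost after unclean election
        txnFirstOffsets.put(30L, true);   // completed, but at a higher offset

        long logEndOffset = 40L;
        long lso = logEndOffset;
        for (Map.Entry<Long, Boolean> e : txnFirstOffsets.entrySet()) {
            if (!e.getValue()) {          // first open transaction bounds the LSO
                lso = e.getKey();
                break;
            }
        }
        System.out.println("LSO = " + lso); // stuck at 20 until a marker arrives
    }
}
```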




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #2964

2018-09-13 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Update minimum required Gradle version to 4.7 (#5642)

--
[...truncated 2.70 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED


Re: [DISCUSS] KIP-372: Naming Joins and Grouping

2018-09-13 Thread Guozhang Wang
Just to clarify on 2): currently KIP-307 does not have proposed APIs for
`groupBy/groupByKey` naming schemes; for joins, its current proposal is
to extend ValueJoiner with Named, and that part is what I meant by
"overlaps".

Thinking about it a bit more, since Joined is only used for S-S and S-T
joins but not T-T joins, having the naming schemes on Joined would not be
sufficient, and extending ValueJoiner would indeed be a good choice.

As for groupBy, since it is using KeyValueMapper which is supposed to be
extended with Named in KIP-307, it does not require extending to processor
nodes as well.


Given this, I'm fine with limiting the scope to only repartition topics.


Guozhang

On Wed, Sep 12, 2018 at 10:22 PM, Matthias J. Sax 
wrote:

> Follow up comments:
>
> 1) We should either use `[app-id]-this|other-[join-name]-repartition` or
> `[app-id]-[join-name]-left|right-repartition`, but we should not change
> the pattern depending on whether the user specifies a name or not. I am fine
> with both patterns---just want to make sure we stick with one.
>
> 2) I didn't see why we would need to do this in this KIP. KIP-307 seems
> to be orthogonal, and thus KIP-372 should not change any processor
> names, but KIP-307 should define a holistic strategy for all processors.
> Otherwise, we might end up with different strategies or revert what we
> decide in this KIP if it's not compatible with KIP-307.
>
>
> -Matthias
>
>
> On 9/12/18 6:28 PM, Guozhang Wang wrote:
> > Hello Bill,
> >
> > I made a pass over your proposal and here are some questions:
> >
> > 1. For Joined names, the current proposal is to define the repartition
> > topic names as
> >
> > * [app-id]-this-[join-name]-repartition
> >
> > * [app-id]-other-[join-name]-repartition
> >
> >
> > And if [join-name] is not specified, the names stay the same, i.e.:
> >
> > * [previous-processor-name]-repartition for both Stream-Stream (S-S)
> join
> > and S-T join
> >
> > I think it is more natural to rename it to
> >
> > * [app-id]-[join-name]-left-repartition
> >
> > * [app-id]-[join-name]-right-repartition
> >
> >
> > 2. I'd suggest to use the name to also define the corresponding processor
> > names accordingly, in addition to the repartition topic names. Note that
> > for joins, this may be overlapping with KIP-307
> > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-307%3A+Allow+to+define+custom+processor+names+with+KStreams+DSL>
> > as it also has proposals for defining processor names for join operators as
> > well.
> >
> > 3. Could you also specify how this would affect the optimization for
> > merging multiple repartition topics?
> >
> > 4. In the "Compatibility, Deprecation, and Migration Plan" section, could
> > you also mention the following scenarios, if any of the upgrade paths
> > would be changed:
> >
> >  a) changing user DSL code: under which scenarios users can now do a
> > rolling bounce instead of resetting applications.
> >
> >  b) upgrading from older version to new version, with all the names
> > specified, and with optimization turned on. E.g. say we have the code
> > written in 2.1 with all names specified, and now upgrading to 2.2 with
> new
> > optimizations that may potentially change the repartition topics. Is that
> > always safe to do?
> >
> >
> >
> > Guozhang
> >
> >
> > On Wed, Sep 12, 2018 at 4:52 PM, Bill Bejeck  wrote:
> >
> >> All I'd like to start a discussion on KIP-372 for the naming of joins
> and
> >> grouping operations in Kafka Streams.
> >>
> >> The KIP page can be found here:
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-372%3A+Naming+Joins+and+Grouping
> >>
> >> I look forward to feedback and comments.
> >>
> >> Thanks,
> >> Bill
> >>
> >
> >
> >
>
>


-- 
-- Guozhang


Build failed in Jenkins: kafka-trunk-jdk10 #484

2018-09-13 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Insure that KafkaStreams client is closed if test fails (#5618)

--
[...truncated 2.47 MB...]
at 
org.gradle.api.internal.artifacts.ivyservice.resolveengine.artifact.ResolvedArtifactsGraphVisitor.finish(ResolvedArtifactsGraphVisitor.java:81)
at 
org.gradle.api.internal.artifacts.ivyservice.resolveengine.graph.CompositeDependencyGraphVisitor.finish(CompositeDependencyGraphVisitor.java:60)
at 
org.gradle.api.internal.artifacts.ivyservice.resolveengine.graph.builder.DependencyGraphBuilder.assembleResult(DependencyGraphBuilder.java:412)
at 
org.gradle.api.internal.artifacts.ivyservice.resolveengine.graph.builder.DependencyGraphBuilder.resolve(DependencyGraphBuilder.java:130)
at 
org.gradle.api.internal.artifacts.ivyservice.resolveengine.DefaultArtifactDependencyResolver.resolve(DefaultArtifactDependencyResolver.java:123)
at 
org.gradle.api.internal.artifacts.ivyservice.DefaultConfigurationResolver.resolveGraph(DefaultConfigurationResolver.java:167)
at 
org.gradle.api.internal.artifacts.ivyservice.ShortCircuitEmptyConfigurationResolver.resolveGraph(ShortCircuitEmptyConfigurationResolver.java:89)
at 
org.gradle.api.internal.artifacts.ivyservice.ErrorHandlingConfigurationResolver.resolveGraph(ErrorHandlingConfigurationResolver.java:73)
... 57 more
Caused by: java.io.IOException: No space left on device
at java.base/java.io.FileOutputStream.writeBytes(Native Method)
at java.base/java.io.FileOutputStream.write(FileOutputStream.java:355)
at com.esotericsoftware.kryo.io.Output.flush(Output.java:154)
... 71 more

==

30: Task failed with an exception.
---
* What went wrong:
Could not resolve all dependencies for configuration 
':streams:upgrade-system-tests-11:testCompileClasspath'.
> java.io.IOException: No space left on device

* Try:
Run with --info or --debug option to get more log output. Run with --scan to 
get full insights.

* Exception is:
org.gradle.api.artifacts.ResolveException: Could not resolve all dependencies 
for configuration ':streams:upgrade-system-tests-11:testCompileClasspath'.
at 
org.gradle.api.internal.artifacts.ivyservice.ErrorHandlingConfigurationResolver.wrapException(ErrorHandlingConfigurationResolver.java:103)
at 
org.gradle.api.internal.artifacts.ivyservice.ErrorHandlingConfigurationResolver.resolveGraph(ErrorHandlingConfigurationResolver.java:75)
at 
org.gradle.api.internal.artifacts.configurations.DefaultConfiguration$5.run(DefaultConfiguration.java:533)
at 
org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:300)
at 
org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:292)
at 
org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:174)
at 
org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:90)
at 
org.gradle.internal.operations.DelegatingBuildOperationExecutor.run(DelegatingBuildOperationExecutor.java:31)
at 
org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.resolveGraphIfRequired(DefaultConfiguration.java:524)
at 
org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.resolveToStateOrLater(DefaultConfiguration.java:509)
at 
org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.access$1800(DefaultConfiguration.java:123)
at 
org.gradle.api.internal.artifacts.configurations.DefaultConfiguration$ConfigurationFileCollection.getSelectedArtifacts(DefaultConfiguration.java:1037)
at 
org.gradle.api.internal.artifacts.configurations.DefaultConfiguration$ConfigurationFileCollection.getFiles(DefaultConfiguration.java:1025)
at 
org.gradle.api.internal.file.AbstractFileCollection.iterator(AbstractFileCollection.java:76)
at 
org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.iterator(DefaultConfiguration.java:439)
at 
org.gradle.api.internal.changedetection.state.DefaultFileSystemSnapshotter$FileCollectionVisitorImpl.visitCollection(DefaultFileSystemSnapshotter.java:254)
at 
org.gradle.api.internal.file.AbstractFileCollection.visitRootElements(AbstractFileCollection.java:282)
at 
org.gradle.api.internal.file.CompositeFileCollection.visitRootElements(CompositeFileCollection.java:206)
at 
org.gradle.api.internal.changedetection.state.DefaultFileSystemSnapshotter.snapshot(DefaultFileSystemSnapshotter.java:139)
at 

Build failed in Jenkins: kafka-trunk-jdk10 #485

2018-09-13 Thread Apache Jenkins Server
See 

--
Started by an SCM change
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: remote: Counting objects: 10522, done.
remote: Compressing objects: 100% (46/46), done.
Receiving objects:   0% (1/10522) ... 14% (1474/10522)

Re: [DISCUSS] KIP-372: Naming Joins and Grouping

2018-09-13 Thread John Roesler
Hey all,

I think it's slightly out of scope for this KIP, but I'm not sure it's
right to add a name to ValueJoiner or KeyValueMapper.
Both of those are "functional interfaces", that is, they are basically
named functions.

It seems like we should preserve this property, to keep a clean
separation between the operation *logic* and the operator *config*.

It's a good point that `Joined` doesn't appear in the KTable join. However,
KTable join does have the variant that accepts "Materialized".
This makes it similar to the grouping operators that currently accept
"Serialized", namely the operator is already configurable, but only in
a narrow way (we can only configure the materialization of the table or the
serialization of the grouping).

Consider as an alternative the KStream.join operation that takes a config
object called "Joined". This implies no more than that we are
configuring the join operation. If we are deciding to allow adding a name
to the operation, it's natural to add it to the operation's config.

Conversely, we also want to name the KTable.join operation, not the
materialization itself (which already has a name with separate semantics).

To me, this suggests that, just as we propose to subsume the Serialized
config for grouping into a more general config called Grouped, we should
also do something similar and replace `KTable#join(KTable<>,
ValueJoiner<>, Materialized<>)` with one taking a more general
configuration, maybe:
`KTable#join(KTable<>, ValueJoiner<>, TableJoined<>)`?

Does this make sense?

Thanks,
-John
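To make the shape of this proposal concrete, here is a minimal, self-contained Java sketch of the design separation described above: the joiner stays a pure function, while a separate config object carries the operator name. None of this is the actual Kafka Streams API — `TableJoined`, its `as(...)` factory, and the `join` helper are purely hypothetical, merely echoing the builder style of existing config classes such as `Joined` and `Materialized`.

```java
import java.util.function.BiFunction;

class NamedConfigSketch {

    // Hypothetical config object: carries operator *config* (here just a name),
    // in the builder style of Kafka Streams classes like Joined/Materialized.
    static final class TableJoined {
        final String name;
        private TableJoined(String name) { this.name = name; }
        static TableJoined as(String name) { return new TableJoined(name); }
    }

    // Hypothetical join operator: the *logic* comes only from the functional
    // interface (a ValueJoiner-like BiFunction); the *config* from TableJoined.
    static <A, B, R> R join(A left, B right, BiFunction<A, B, R> joiner, TableJoined cfg) {
        System.out.println("processor name: " + cfg.name);
        return joiner.apply(left, right);
    }

    public static void main(String[] args) {
        String result = join("a", "b", (l, r) -> l + r, TableJoined.as("my-join"));
        System.out.println(result); // prints "ab"
    }
}
```

The point of the sketch is only that adding a name never touches the functional interface itself, which is the separation argued for above.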

On Thu, Sep 13, 2018 at 3:45 PM Guozhang Wang  wrote:

> Just to clarify on 2): KIP-307 currently does not have proposed APIs for
> `groupBy/groupByKey` naming schemes, and for joins its current proposal is
> to extend ValueJoiner with Named, which is the part I meant has
> "overlaps".
>
> Thinking about it a bit more, since Joined is only used for S-S and S-T
> joins but not T-T joins, having the naming schemes on Joined would not be
> sufficient, and extending ValueJoiner would indeed be a good choice.
>
> As for groupBy, since it uses KeyValueMapper, which is supposed to be
> extended with Named in KIP-307, it does not require extending the naming to
> processor nodes as well.
>
>
> Given this, I'm fine with limiting the scope to only repartition topics.
>
>
> Guozhang
>
> On Wed, Sep 12, 2018 at 10:22 PM, Matthias J. Sax 
> wrote:
>
> > Follow up comments:
> >
> > 1) We should either use `[app-id]-this|other-[join-name]-repartition` or
> > `[app-id]-[join-name]-left|right-repartition`, but we should not change
> > the pattern depending on whether the user specifies a name or not. I am fine
> > with both patterns---just want to make sure we stick with one.
> >
> > 2) I didn't see why we would need to do this in this KIP. KIP-307 seems
> > to be orthogonal, and thus KIP-372 should not change any processor
> > names, but KIP-307 should define a holistic strategy for all processors.
> > Otherwise, we might end up with different strategies or revert what we
> > decide in this KIP if it's not compatible with KIP-307.
> >
> >
> > -Matthias
> >
> >
> > On 9/12/18 6:28 PM, Guozhang Wang wrote:
> > > Hello Bill,
> > >
> > > I made a pass over your proposal and here are some questions:
> > >
> > > 1. For Joined names, the current proposal is to define the repartition
> > > topic names as
> > >
> > > * [app-id]-this-[join-name]-repartition
> > >
> > > * [app-id]-other-[join-name]-repartition
> > >
> > >
> > > And if [join-name] is not specified, the names stay the same, i.e.:
> > >
> > > * [previous-processor-name]-repartition for both Stream-Stream (S-S)
> > join
> > > and S-T join
> > >
> > > I think it is more natural to rename it to
> > >
> > > * [app-id]-[join-name]-left-repartition
> > >
> > > * [app-id]-[join-name]-right-repartition
> > >
> > >
> > > 2. I'd suggest to use the name to also define the corresponding
> processor
> > > names accordingly, in addition to the repartition topic names. Note
> that
> > > for joins, this may be overlapping with KIP-307
> > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-307%3A+Allow+to+define+custom+processor+names+with+KStreams+DSL>
> > > as it also has proposals for defining processor names for join operators as
> > > well.
> > >
> > > 3. Could you also specify how this would affect the optimization for
> > > merging multiple repartition topics?
> > >
> > > 4. In the "Compatibility, Deprecation, and Migration Plan" section,
> could
> > you also mention the following scenarios, if any of the upgrade paths
> > would be changed:
> > >
> > >  a) changing user DSL code: under which scenarios users can now do a
> > > rolling bounce instead of resetting applications.
> > >
> > >  b) upgrading from older version to new version, with all the names
> > > specified, and with optimization turned on. E.g. say we have the code
> > > written in 2.1 with all names specified, and now upgrading to 2.2 with
> > new
> > > 

Re: [VOTE] KIP-365: Materialized, Serialized, Joined, Consumed and Produced with implicit Serde

2018-09-13 Thread Joan Goyeau
OK, so a +1 is non-binding by default.

On Tue, 11 Sep 2018 at 17:25 Matthias J. Sax  wrote:

> I only count 3 binding votes (Guozhang, Matthias, Damian). Plus 4
> non-binding (John, Ted, Bill, Dongjin) -- or 5 if you vote your own KIP :)
>
> -Matthias
>
> On 9/11/18 12:53 AM, Joan Goyeau wrote:
> > Ok, so this is now accepted.
> >
> > Binding votes: +5
> > Non-binding votes: +2
> >
> > Thanks
> >
> > On Wed, 5 Sep 2018 at 21:38 John Roesler  wrote:
> >
> >> Hi Joan,
> >>
> >> Damian makes 3 binding votes, and the vote has been open longer than 72
> >> hours, so your KIP vote has passed!
> >>
> >> It's customary for you to send a final reply to this thread stating that
> >> the vote has passed, and stating the number of binding and non-binding
> +1s.
> >>
> >> Also please update the current state to Accepted here:
> >>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-365%3A+Materialized%2C+Serialized%2C+Joined%2C+Consumed+and+Produced+with+implicit+Serde
> >>
> >> And move your kip into the Adopted table here:
> >>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-365%3A+Materialized%2C+Serialized%2C+Joined%2C+Consumed+and+Produced+with+implicit+Serde
> >> (release will be 2.1)
> >>
> >> And finally, move it to Accepted here:
> >> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams
> >>
> >> Lastly, you might want to make another pass over the PR and call for
> final
> >> reviews.
> >>
> >> Thanks so much for managing this process,
> >> -John
> >>
> >> On Mon, Sep 3, 2018 at 11:00 AM Damian Guy 
> wrote:
> >>
> >>> +1
> >>>
> >>> On Sun, 2 Sep 2018 at 15:20 Matthias J. Sax 
> >> wrote:
> >>>
>  +1 (binding)
> 
>  On 9/1/18 2:40 PM, Guozhang Wang wrote:
> > +1 (binding).
> >
> > On Mon, Aug 27, 2018 at 5:20 PM, Dongjin Lee 
> >>> wrote:
> >
> >> +1 (non-binding)
> >>
> >> On Tue, Aug 28, 2018 at 8:53 AM Bill Bejeck 
> >>> wrote:
> >>
> >>> +1
> >>>
> >>> -Bill
> >>>
> >>> On Mon, Aug 27, 2018 at 3:24 PM Ted Yu 
> >> wrote:
> >>>
>  +1
> 
>  On Mon, Aug 27, 2018 at 12:18 PM John Roesler 
> >> wrote:
> 
> > +1 (non-binding)
> >
> > On Sat, Aug 25, 2018 at 1:16 PM Joan Goyeau 
> >>> wrote:
> >
> >> Hi,
> >>
> >> We want to make sure that we always have a serde for all
> >>> Materialized,
> >> Serialized, Joined, Consumed and Produced.
> >> For that we can make use of the implicit parameters in Scala.
> >>
> >> KIP:
> >>
> >>
> >
> 
> >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-365%3A+Materialized%2C+Serialized%2C+Joined%2C+Consumed+and+Produced+with+implicit+Serde
> >>
> >> Github PR: https://github.com/apache/kafka/pull/5551
> >>
> >> Please make your votes.
> >> Thanks
> >>
> >
> 
> >>>
> >>> --
> >>> *Dongjin Lee*
> >>>
> >>> *A hitchhiker in the mathematical world.*
> >>>
> >>> *github: github.com/dongjinleekr
> >>> linkedin: kr.linkedin.com/in/dongjinleekr
> >>> slideshare: www.slideshare.net/dongjinleekr*
> >>>
> >>
> >
> >
> >
> 
> 
> >>>
> >>
> >
>
>


Build failed in Jenkins: kafka-trunk-jdk10 #486

2018-09-13 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H27 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://github.com/apache/kafka.git" returned status code 4:
stdout: 
stderr: error: failed to write new configuration file 


at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2002)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1970)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1966)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1597)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1609)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:1243)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor61.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:929)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:903)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:855)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to 
H27
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
at 
hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:955)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:283)
at com.sun.proxy.$Proxy118.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:295)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:876)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at 
hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at 
jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at 

[jira] [Created] (KAFKA-7410) Rack aware partitions assignment create unbalanced broker assignments on unbalanced racks

2018-09-13 Thread Lucas Bradstreet (JIRA)
Lucas Bradstreet created KAFKA-7410:
---

 Summary: Rack aware partitions assignment create unbalanced broker 
assignments on unbalanced racks
 Key: KAFKA-7410
 URL: https://issues.apache.org/jira/browse/KAFKA-7410
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 1.1.1
Reporter: Lucas Bradstreet
 Attachments: AdminUtilsTest.scala

AdminUtils creates a bad partition assignment when the number of brokers on 
each rack is unbalanced, e.g. 80 brokers rack A, 20 brokers rack B, 15 brokers 
rack C. Under such a scenario, a single broker from rack C may be assigned over 
and over again, when more balanced allocations exist.

kafka.admin.AdminUtils.getRackAlternatedBrokerList is supposed to create a list 
of brokers alternating by rack; however, once it runs out of brokers on the 
racks with fewer brokers, it ends up placing a run of brokers from the same 
rack together, as rackIterator.hasNext will return false for the other racks.
{code:java}
while (result.size < brokerRackMap.size) {
  val rackIterator = brokersIteratorByRack(racks(rackIndex))
  if (rackIterator.hasNext)
result += rackIterator.next()
  rackIndex = (rackIndex + 1) % racks.length
}{code}
Once assignReplicasToBrokersRackAware hits the run of brokers from the same 
rack, when choosing the replicas to go along with a leader on the rack with 
the most brokers (rack A in the example above), it will skip all of the other 
brokers on that rack until it wraps around to the first broker in the 
alternated list, and choose that broker.

 
{code:java}
if ((!racksWithReplicas.contains(rack) || racksWithReplicas.size == numRacks)
&& (!brokersWithReplicas.contains(broker) || brokersWithReplicas.size == 
numBrokers)) {
replicaBuffer += broker
racksWithReplicas += rack
brokersWithReplicas += broker
done = true
}
k += 1
{code}
It does so for each of the remaining brokers in the run, choosing the first 
broker in the alternated list until it has allocated all of the partitions.

See the attached sample code for more details.
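The alternation problem described above can be reproduced outside of Kafka. The following is a minimal, self-contained Java sketch (not the actual Scala AdminUtils code) of the quoted round-robin loop, using hypothetical broker IDs and rack sizes 5/2/2 in place of the 80/20/15 example:

```java
import java.util.*;

class RackAlternationSketch {

    // Mirrors the quoted loop: cycle through the racks, taking one broker from
    // each rack's iterator until every broker has been placed. Once the small
    // racks are exhausted, only the largest rack's iterator still has elements,
    // so the tail of the list is a run of brokers from a single rack.
    static List<Integer> alternatedBrokerList(SortedMap<String, List<Integer>> brokersByRack) {
        Map<String, Iterator<Integer>> iterators = new HashMap<>();
        brokersByRack.forEach((rack, ids) -> iterators.put(rack, ids.iterator()));
        List<String> racks = new ArrayList<>(brokersByRack.keySet());
        int totalBrokers = brokersByRack.values().stream().mapToInt(List::size).sum();
        List<Integer> result = new ArrayList<>();
        int rackIndex = 0;
        while (result.size() < totalBrokers) {
            Iterator<Integer> it = iterators.get(racks.get(rackIndex));
            if (it.hasNext()) result.add(it.next());
            rackIndex = (rackIndex + 1) % racks.size();
        }
        return result;
    }

    public static void main(String[] args) {
        SortedMap<String, List<Integer>> brokersByRack = new TreeMap<>();
        brokersByRack.put("A", Arrays.asList(0, 1, 2, 3, 4)); // largest rack
        brokersByRack.put("B", Arrays.asList(5, 6));
        brokersByRack.put("C", Arrays.asList(7, 8));
        // Once racks B and C run out, the tail is a run of rack-A brokers:
        System.out.println(alternatedBrokerList(brokersByRack));
        // prints [0, 5, 7, 1, 6, 8, 2, 3, 4]
    }
}
```

The tail run (2, 3, 4 — all rack A) is what the replica-selection loop then trips over: it skips the same-rack brokers and wraps around to the first broker in the list over and over.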



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #2966

2018-09-13 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Insure that KafkaStreams client is closed if test fails (#5618)

[wangguoz] MINOR: Enable ignored upgrade system tests - trunk (#5605)

--
[...truncated 2.68 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED


Jenkins build is back to normal : kafka-trunk-jdk8 #2965

2018-09-13 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk10 #488

2018-09-13 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-361: Add Consumer Configuration to Disable Auto Topic Creation

2018-09-13 Thread Dhruvil Shah
Hi all,

I updated the KIP with the discussion from this thread. I left the warning
message and deprecation out for now because we require the configuration be
set to true (i.e. auto topic creation allowed) when using brokers older
than 0.11.0.

If there is no more feedback, I will start a VOTE thread tomorrow.

Thanks,
Dhruvil

On Thu, Sep 6, 2018 at 9:52 PM Matthias J. Sax 
wrote:

> What is the status of this KIP?
>
> I think you can start a VOTE Dhruvil.
>
>
> -Matthias
>
> On 8/23/18 9:52 AM, Ismael Juma wrote:
> > Yeah, the reason why we want to deprecate the auto create functionality
> is
> > that it happens when a metadata request is done instead of when a write
> > operation happens. So, there's no reason to differentiate between the
> two.
> >
> > Ismael
> >
> > On Thu, Aug 23, 2018 at 8:16 AM Andrew Otto  wrote:
> >
> >> Ah, I just realized that as proposed this is only for the Java consumer
> >> client, correct?  Would it be possible to make this a broker config,
> like
> >> the current one?  Something like:
> >>
> >> auto.create.topics.enable=true # allow both producer and consumer to
> create
> >> auto.create.topics.enable=consumer # allow only consumer to create
> >> auto.create.topics.enable=producer # allow only producer to create
> >> auto.create.topics.enable=false # deny any auto topic creation
> >>
> >> Perhaps the broker doesn’t differentiate between the type of client
> >> connection. If not, I guess this wouldn’t be possible.
> >>
> >>
> >>
> >> On Thu, Aug 23, 2018 at 11:08 AM Andrew Otto 
> wrote:
> >>
> >>> Yup :)
> >>>
> >>> On Thu, Aug 23, 2018 at 11:04 AM Ismael Juma 
> wrote:
> >>>
>  Andrew, one question: you are relying on auto topic creation for the
>  producer and that's why you can't just disable it?
> 
>  On Thu, Aug 23, 2018 at 8:01 AM Ismael Juma 
> wrote:
> 
> > Thanks for sharing Andrew!
> >
> > Ismael
> >
> > On Thu, Aug 23, 2018 at 7:57 AM Andrew Otto 
> >> wrote:
> >
> >> We recently had a pretty serious Kafka outage
> >> <
> >>
> 
> >>
> https://wikitech.wikimedia.org/wiki/Incident_documentation/20180711-kafka-eqiad#Summary
> >>>
> >> caused by a bug in one of our consumers that caused it to create new
> >> topics
> >> in an infinite loop AKA a topic bomb!  Having consumers restricted
> >> from
> >> creating topics would have prevented this for us.
> >>
> >> On Thu, Aug 23, 2018 at 4:27 AM Ismael Juma 
> >> wrote:
> >>
> >>> Generally, I think positive configs (`allow` instead of `suppress`)
>  are
> >>> easier to understand.
> >>>
> >>> Ismael
> >>>
> >>> On Wed, Aug 22, 2018 at 11:05 PM Matthias J. Sax <
>  matth...@confluent.io
> >>>
> >>> wrote:
> >>>
>  Thanks for the summary!
> 
>  We might want to add a diagram/table to the docs when we add this
>  feature (with whatever config name we choose) to explain how
> >> broker
>  config `auto.create.topics.enable` and the consumer config work
> >> together.
> 
>  I think both options are equally easy to understand. "allow"
> >> means
>  follow the broker config, while "suppress" implies ignore the
>  broker
>  config and don't auto-create.
> 
> 
>  -Matthias
> 
> 
>  On 8/22/18 10:36 PM, Dhruvil Shah wrote:
> > *"suppress" is the opposite of "allow", so
> > setting suppress.auto.create.topics=false would mean that we do
> >> _not_
>  allow
> > auto topic creation; when set to true, the server configuration
>  will
> > determine whether we allow automatic creation or not.*
> >
> > Sorry, I meant suppress.auto.create.topics=true above to
> >> disallow
> >> auto
> > topic creation.
> >
> >
> > On Wed, Aug 22, 2018 at 10:34 PM Dhruvil Shah <
>  dhru...@confluent.io
> >>>
>  wrote:
> >
> >> To be clear, we will allow auto topic creation only when
> >> server
> >> config
> >> auto.create.topics.enable=true and consumer config
> >> allow.auto.create.topics=true; when either is false, we would
>  not
> >>> create
> >> the topic if it does not exist.
> >>
> >> "suppress" is the opposite of "allow", so
> >> setting suppress.auto.create.topics=false would mean that we
> >> do
> >> _not_
>  allow
> >> auto topic creation; when set to true, the server
> >> configuration
> >> will
> >> determine whether we allow automatic creation or not.
> >>
> >> I think "allow" is easier to understand but I am open to
> >> suggestions.
> >>
> >> - Dhruvil
> >>
> >> On Wed, Aug 22, 2018 at 6:53 PM Brandon Kirchner <
> >> brandon.kirch...@gmail.com> wrote:
> >>
> >>> “allow=false” seems 
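
For illustration, the combined semantics discussed in this thread (topic is auto-created only when BOTH the broker and the consumer allow it) can be sketched as follows. This is a sketch only; `auto.create.topics.enable` is the existing broker config and `allow.auto.create.topics` is the consumer config name proposed in the KIP draft, not a final API.

```java
public class AutoCreatePolicy {
    // Sketch of the combined auto-create decision discussed above:
    // auto-creation happens only when both broker and consumer permit it.
    static boolean mayAutoCreateOnFetch(boolean brokerAutoCreateEnable,
                                        boolean consumerAllowAutoCreate) {
        return brokerAutoCreateEnable && consumerAllowAutoCreate;
    }

    public static void main(String[] args) {
        System.out.println(mayAutoCreateOnFetch(true, true));   // true
        System.out.println(mayAutoCreateOnFetch(true, false));  // false
        System.out.println(mayAutoCreateOnFetch(false, true));  // false
    }
}
```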

Build failed in Jenkins: kafka-trunk-jdk10 #487

2018-09-13 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: remote: Counting objects: 10522, done.
remote: Compressing objects: 100% (42/42), done.
Receiving objects:  20% (2105/10522)   

[jira] [Created] (KAFKA-7409) Validate topic configs prior to topic creation

2018-09-13 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-7409:
--

 Summary: Validate topic configs prior to topic creation
 Key: KAFKA-7409
 URL: https://issues.apache.org/jira/browse/KAFKA-7409
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


There are some validation checks that we do on startup when constructing the 
{{LogConfig}} object using {{fromProps}} which we do not verify on topic 
creation. The message format version check is one of them. An invalid value, 
once set, can prevent broker startup. We should make the validation consistent.
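
A minimal sketch of the idea: run the same check at topic-creation time that `LogConfig.fromProps` runs at broker startup, so an invalid value is rejected up front. The allowed-version set below is a hypothetical subset for illustration; the real broker derives the valid values internally. `message.format.version` is the real topic-level config key.

```java
import java.util.Map;
import java.util.Set;

public class TopicConfigValidator {
    // Hypothetical allowed values for illustration only; the real broker
    // derives the valid message format versions internally.
    static final Set<String> KNOWN_FORMAT_VERSIONS =
            Set.of("0.10.2", "0.11.0", "1.0", "1.1", "2.0");

    // Validate eagerly at topic-creation time instead of accepting a bad
    // value that would later prevent broker startup.
    static void validate(Map<String, String> topicConfigs) {
        String v = topicConfigs.get("message.format.version");
        if (v != null && !KNOWN_FORMAT_VERSIONS.contains(v)) {
            throw new IllegalArgumentException(
                    "Invalid message.format.version: " + v);
        }
    }

    public static void main(String[] args) {
        validate(Map.of("message.format.version", "1.1")); // accepted
        try {
            validate(Map.of("message.format.version", "bogus"));
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```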



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-358: Migrate Streams API to Duration instead of long ms times

2018-09-13 Thread Guozhang Wang
+1 (binding), thank you Nikolay!

Guozhang

On Thu, Sep 13, 2018 at 9:39 AM, Matthias J. Sax 
wrote:

> Thanks for the KIP.
>
> +1 (binding)
>
>
> -Matthias
>
> On 9/5/18 8:52 AM, John Roesler wrote:
> > I'm a +1 (non-binding)
> >
> > On Mon, Sep 3, 2018 at 8:33 AM Nikolay Izhikov 
> wrote:
> >
>> Dear committers.
> >>
> >> Please, vote on a KIP.
> >>
> >> В Пт, 31/08/2018 в 12:05 -0500, John Roesler пишет:
> >>> Hi Nikolay,
> >>>
> >>> You can start a PR any time, but we cannot merge it (and probably won't
> do
> >>> serious reviews) until after the KIP is voted and approved.
> >>>
> >>> Sometimes people start a PR during discussion just to help provide more
> >>> context, but it's not required (and can also be distracting because the
> >> KIP
> >>> discussion should avoid implementation details).
> >>>
> >>> Let's wait one more day for any other comments and plan to start the
> vote
> >>> on Monday if there are no other debates.
> >>>
> >>> Once you start the vote, you have to leave it up for at least 72 hours,
> >> and
> >>> it requires 3 binding votes to pass. Only Kafka Committers have binding
> >>> votes (https://kafka.apache.org/committers).
> >>>
> >>> Thanks,
> >>> -John
> >>>
> >>> On Fri, Aug 31, 2018 at 11:09 AM Bill Bejeck 
> wrote:
> >>>
 Hi Nikolay,
> 
>  Thanks for the clarification.
> 
>  -Bill
> 
>  On Fri, Aug 31, 2018 at 11:59 AM Nikolay Izhikov  >
>  wrote:
> 
> > Hello, John.
> >
> > This is my first KIP, so, please, help me with kafka development
> >> process.
> >
> > Should I start to work on PR now? Or should I wait for a "+1" from
> committers?
> >
> > В Пт, 31/08/2018 в 10:33 -0500, John Roesler пишет:
> >> I see. I guess that once we are in the PR-reviewing phase, we'll
> >> be in
> 
>  a
> >> better position to see what else can/should be done, and we can
> >> talk
> >
> > about
> >> follow-on work at that time.
> >>
> >> Thanks for the clarification,
> >> -John
> >>
> >> On Fri, Aug 31, 2018 at 1:19 AM Nikolay Izhikov <
> >> nizhi...@apache.org>
> >
> > wrote:
> >>
> >>> Hello, Bill
> >>>
>  In the "Proposed Changes" section, there is "Try to reduce the
> >>>
> >>> visibility of methods in next tickets" does that mean eventual
> >
> > deprecation
> >>> and removal?
> >>>
> >>> 1. Some methods will become deprecated. I think they will be
> >> removed
> 
>  in
> >>> the future.
> >>> You can find list of deprecated methods in KIP.
> >>>
> >>> 2. Some internal methods can't be deprecated or hid from the
> >> user for
> >
> > now.
> >>> I was trying to say that we should research possibility to reduce
> >>> visibility of *internal* methods that are *public* now.
> >>> That kind of changes is out of the scope of current KIP, so we
> >> have
> 
>  to
> > do
> >>> it in the next tickets.
> >>>
> >>> I don't expect that internal methods will be removed.
> >>>
> >>> В Чт, 30/08/2018 в 18:59 -0400, Bill Bejeck пишет:
>  Sorry for chiming in late, there was a lot of detail to catch
> >> up
> 
>  on.
> 
>  Overall I'm +1 in the KIP.  But I do have one question about
> >> the
> 
>  KIP
> > in
>  regards to Matthias's comments about defining dual use.
> 
>  In the "Proposed Changes" section, there is "Try to reduce the
> >
> > visibility
>  of methods in next tickets" does that mean eventual
> >> deprecation and
> >>>
> >>> removal?
>  I thought we were aiming to keep the dual use methods? Or does
> >> that
> >
> > imply
>  we'll strive for more clear delineation between DSL and
> >> internal
> 
>  use?
> 
>  Thanks,
>  Bill
> 
> 
> 
>  On Thu, Aug 30, 2018 at 5:59 PM Nikolay Izhikov <
> 
>  nizhi...@apache.org
> >>
> >>>
> >>> wrote:
> 
> > John, thank you.
> >
> > I've updated KIP.
> >
> >
> Dear committers, please take a look and share your opinion.
> >
> > В Чт, 30/08/2018 в 14:58 -0500, John Roesler пишет:
> >> Oh! I missed one minor thing: UnlimitedWindows doesn't
> >> need to
> >
> > set
> >>>
> >>> grace
> >> (it currently does not either).
> >>
> >> Otherwise, it looks good to me!
> >>
> >> Thanks so much,
> >> -John
> >>
> >> On Thu, Aug 30, 2018 at 5:30 AM Nikolay Izhikov <
> >
> > nizhi...@apache.org
> >
> > wrote:
> >>
> >>> Hello, John.
> >>>
> >>> I've updated the KIP according to your comments.
> >>> Please, take a look.
> >>>
> >>> Are we ready to vote now?
> >>>
> >>> В Ср, 

Re: [DISCUSS] Apache Kafka 2.1.0 Release Plan

2018-09-13 Thread Dongjin Lee
Great. Thanks!

- Dongjin

On Mon, Sep 10, 2018 at 11:48 AM Matthias J. Sax 
wrote:

> Thanks a lot! You are on a run after 1.1.1 release.
>
> I see something coming up for myself in 4 month. :)
>
> On 9/9/18 6:15 PM, Guozhang Wang wrote:
> > Dong, thanks for driving the release!
> >
> > On Sun, Sep 9, 2018 at 5:57 PM, Ismael Juma  wrote:
> >
> >> Thanks for volunteering Dong!
> >>
> >> Ismael
> >>
> >> On Sun, 9 Sep 2018, 17:32 Dong Lin,  wrote:
> >>
> >>> Hi all,
> >>>
> >>> I would like to be the release manager for our next time-based feature
> >>> release 2.1.0.
> >>>
> >>> The recent Kafka release history can be found at
> >>> https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan.
> >> The
> >>> release plan (with open issues and planned KIPs) for 2.1.0 can be found
> >> at
> >>> https://cwiki.apache.org/confluence/pages/viewpage.
> >> action?pageId=91554044.
> >>>
> >>> Here are the dates we have planned for Apache Kafka 2.1.0 release:
> >>>
> >>> 1) KIP Freeze: Sep 24, 2018.
> >>> A KIP must be accepted by this date in order to be considered for this
> >>> release)
> >>>
> >>> 2) Feature Freeze: Oct 1, 2018
> >>> Major features merged & working on stabilization, minor features have
> PR,
> >>> release branch cut; anything not in this state will be automatically
> >> moved
> >>> to the next release in JIRA.
> >>>
> >>> 3) Code Freeze: Oct 15, 2018 (Tentatively)
> >>>
> >>> The KIP and feature freeze date is about 3-4 weeks from now. Please
> plan
> >>> accordingly for the features you want push into Apache Kafka 2.1.0
> >> release.
> >>>
> >>>
> >>> Cheers,
> >>> Dong
> >>>
> >>
> >
> >
> >
>
>

-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*

*github:  github.com/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
slideshare:
www.slideshare.net/dongjinleekr
*


Re: [DISCUSS] KIP-358: Migrate Streams API to Duration instead of long ms times

2018-09-13 Thread Nikolay Izhikov
Hello, Matthias.

> I like the KIP as-is. Feel free to start a VOTE thread.


I've already started one [1].
Can you vote there, or should I create a new one?


I've updated KIP.
This has been changed:

ReadOnlyWindowStore<K, V> {
    // Deprecated methods.
    WindowStoreIterator<V> fetch(K key, long timeFrom, long timeTo);
    KeyValueIterator<Windowed<K>, V> fetch(K from, K to, long timeFrom, long timeTo);
    KeyValueIterator<Windowed<K>, V> fetchAll(long timeFrom, long timeTo);
}

WindowStore<K, V> {
    // New methods.
    WindowStoreIterator<V> fetch(K key, Instant from, Duration duration) throws IllegalArgumentException;
    KeyValueIterator<Windowed<K>, V> fetch(K from, K to, Instant fromTime, Duration duration) throws IllegalArgumentException;
    KeyValueIterator<Windowed<K>, V> fetchAll(Instant from, Duration duration) throws IllegalArgumentException;
}

[1] 
https://lists.apache.org/thread.html/e976352e7e42d459091ee66ac790b6a0de7064eac0c57760d50c983b@%3Cdev.kafka.apache.org%3E
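
For reference, the mapping between the deprecated long-millisecond range and the new `(Instant, Duration)` form is mechanical. A sketch using plain `java.time` (not the Streams API itself; the class and method names here are illustrative):

```java
import java.time.Duration;
import java.time.Instant;

public class RangeConversion {
    // Sketch: express the (timeFrom, timeTo) long pair of the deprecated
    // overloads as the (Instant from, Duration duration) pair of the new ones.
    static long[] toMillisRange(Instant from, Duration duration) {
        if (duration.isNegative()) {
            // Mirrors the IllegalArgumentException declared by the new methods.
            throw new IllegalArgumentException("duration must be non-negative");
        }
        long timeFrom = from.toEpochMilli();
        long timeTo = from.plus(duration).toEpochMilli();
        return new long[] {timeFrom, timeTo};
    }

    public static void main(String[] args) {
        long[] r = toMillisRange(Instant.ofEpochMilli(1_000L), Duration.ofMillis(500));
        System.out.println(r[0] + ".." + r[1]); // 1000..1500
    }
}
```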

В Ср, 12/09/2018 в 15:46 -0700, Matthias J. Sax пишет:
> Great!
> 
> I did not double check the ReadOnlySessionStore interface before, and
> just assumed it would take a timestamp, too. My bad.
> 
> Please update the KIP for ReadOnlyWindowStore and WindowStore.
> 
> I like the KIP as-is. Feel free to start a VOTE thread. Even if there
> might be minor follow up comments, we can vote in parallel.
> 
> 
> -Matthias
> 
> On 9/12/18 1:06 PM, John Roesler wrote:
> > Hi Nikolay,
> > 
> > Yes, the changes we discussed for ReadOnlyXxxStore and XxxStore should be
> > in this KIP.
> > 
> > And you're right, it seems like ReadOnlySessionStore is not necessary to
> > touch, since it doesn't reference any `long` timestamps.
> > 
> > Thanks,
> > -John
> > 
> > On Wed, Sep 12, 2018 at 4:36 AM Nikolay Izhikov  wrote:
> > 
> > > Hello, Matthias.
> > > 
> > > > His proposal is, to deprecate existing methods on `ReadOnlyWindowStore`>
> > > 
> > > and `ReadOnlySessionStore` and add them to `WindowStore` and> 
> > > `SessionStore`
> > > > Does this make sense?
> > > 
> > > You both are experienced Kafka developers, so yes, it does make sense to
> > > me :).
> > > Do we want to make this change in KIP-358 or it required another KIP?
> > > 
> > > > Btw: the KIP misses `ReadOnlySessionStore` atm.
> > > 
> > > Sorry, but I don't understand you.
> > > As far as I can see, there is only 2 methods in `ReadOnlySessionStore`.
> > > Which method should be migrated to Duration?
> > > 
> > > 
> > > https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlySessionStore.java
> > > 
> > > В Вт, 11/09/2018 в 09:21 -0700, Matthias J. Sax пишет:
> > > > I talked to John offline about his last suggestions, that I originally
> > > > did not fully understand.
> > > > 
> > > > His proposal is, to deprecate existing methods on `ReadOnlyWindowStore`
> > > > and `ReadOnlySessionStore` and add them to `WindowStore` and
> > > > `SessionStore` (note, all singular -- not to be confused with classes
> > > > names plural).
> > > > 
> > > > Btw: the KIP misses `ReadOnlySessionStore` atm.
> > > > 
> > > > The argument is, that the `ReadOnlyXxxStore` interfaces are only exposed
> > > > via Interactive Queries feature and for this part, using `long` is
> > > > undesired. However, for a `Processor` that reads/writes stores on the
> > > > hot code path, we would like to avoid the object creation overhead and
> > > > stay with `long`. Note, that a `Processor` would use the "read-write"
> > > > interfaces and thus, we can add the more efficient read methods using
> > > > `long` there.
> > > > 
> > > > Does this make sense?
> > > > 
> > > > 
> > > > -Matthias
> > > > 
> > > > On 9/11/18 12:20 AM, Nikolay Izhikov wrote:
> > > > > Hello, Guozhang, Bill.
> > > > > 
> > > > > > 1) I'd suggest keeping `Punctuator#punctuate(long timestamp)` as is
> > > > > 
> > > > > I am agree with you.
> > > > > Currently, `Punctuator` edits are not included in KIP.
> > > > > 
> > > > > > 2) I'm fine with keeping KeyValueStore extending
> > > 
> > > ReadOnlyKeyValueStore
> > > > > 
> > > > > Great, currently, there is no suggested API change in `KeyValueStore`
> > > 
> > > or `ReadOnlyKeyValueStore`.
> > > > > 
> > > > > Seems, you agree with all KIP details.
> > > > > Can you vote, please? [1]
> > > > > 
> > > > > [1]
> > > 
> > > https://lists.apache.org/thread.html/e976352e7e42d459091ee66ac790b6a0de7064eac0c57760d50c983b@%3Cdev.kafka.apache.org%3E
> > > > > 
> > > > > 
> > > > > В Пн, 10/09/2018 в 19:49 -0400, Bill Bejeck пишет:
> > > > > > Hi Nikolay,
> > > > > > 
> > > > > > I'm a +1 to points 1 and 2 above from Guozhang.
> > > > > > 
> > > > > > Thanks,
> > > > > > Bill
> > > > > > 
> > > > > > On Mon, Sep 10, 2018 at 6:58 PM Guozhang Wang 
> > > 
> > > wrote:
> > > > > > 
> > > > > > > Hello Nikolay,
> > > > > > > 
> > > > > > > Thanks for picking this up! Just sharing my two cents:
> > > > > > > 
> > > > > > > 1) I'd suggest keeping `Punctuator#punctuate(long timestamp)` as
> > > 
> > > is since
> > > > > > > 

Re: [DISCUSS] KIP-372: Naming Joins and Grouping

2018-09-13 Thread Matthias J. Sax
Three more comments:

(1) For `Grouped` should we add `with(String name, Serde<K> keySerde,
Serde<V> valueSerde)` to allow specifying all parameters at once?

Produced/Consumed/Serialized etc follow a similar pattern. There is one
static method for each config parameter, plus one static method that
accepts all parameters. Might be more consistent if we follow this pattern.
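
The static-factory pattern described above can be sketched like this (a self-contained sketch, not the real Streams DSL class; the `Serde` interface here is a stand-in):

```java
// Stand-in for org.apache.kafka.common.serialization.Serde.
interface Serde<T> { }

final class Grouped<K, V> {
    final String name;
    final Serde<K> keySerde;
    final Serde<V> valueSerde;

    private Grouped(String name, Serde<K> keySerde, Serde<V> valueSerde) {
        this.name = name;
        this.keySerde = keySerde;
        this.valueSerde = valueSerde;
    }

    // One static factory per config parameter...
    static <K, V> Grouped<K, V> as(String name) {
        return new Grouped<>(name, null, null);
    }

    static <K, V> Grouped<K, V> keySerde(Serde<K> keySerde) {
        return new Grouped<>(null, keySerde, null);
    }

    // ...plus one that accepts all parameters at once, mirroring the
    // Produced/Consumed pattern mentioned above.
    static <K, V> Grouped<K, V> with(String name, Serde<K> keySerde, Serde<V> valueSerde) {
        return new Grouped<>(name, keySerde, valueSerde);
    }
}
```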

(2) It seems that `Serialized` is only used in `groupBy` and
`groupByKey` -- because both methods accepting `Serialized` parameter
are deprecated and replaced with methods accepting `Grouped`, it seems
that we also want to deprecate `Serialized`?

(3) About naming repartition topics: thinking about this once more, I
actually prefer to use `left|right` instead of `this|other` :)


-Matthias


On 9/13/18 6:45 AM, Matthias J. Sax wrote:
> I don't know what Samza does, however, Flink requires users to specify
> names similar to this proposal to be able to re-identify state in case
> the topology gets altered between deployments.
> 
> Flink only has state to worry about. For Kafka Streams, it's
> state plus repartition topics.
> 
> 
> -Matthias
> 
> On 9/13/18 1:48 AM, Eno Thereska wrote:
>> Hi folks,
>>
>> I know we don't normally have a "Related work" section in KIPs, but
>> sometimes I find it useful to see what others have done in similar cases.
>> Since this will be important for rolling re-deployments, I wonder what
>> other frameworks like Flink (or Samza) have done in these cases. Perhaps
>> they have done nothing, in which case it's fine to do this from first
>> principles, but IMO it would be good to know just to make sure we're
>> heading in the right direction.
>>
>> Also I don't get a good feel for how much work this will be for an end user
>> who is doing the rolling deployment, perhaps an end-to-end example would
>> help.
>>
>> Thanks
>> Eno
>>
>> On Thu, Sep 13, 2018 at 6:22 AM, Matthias J. Sax 
>> wrote:
>>
>>> Follow up comments:
>>>
>>> 1) We should either use `[app-id]-this|other-[join-name]-repartition` or
>>> `app-id]-[join-name]-left|right-repartition` but we should not change
the pattern depending on whether the user specifies a name or not. I am fine
with both patterns---just want to make sure we stick with one.
>>>
>>> 2) I didn't see why we would need to do this in this KIP. KIP-307 seems
>>> to be orthogonal, and thus KIP-372 should not change any processor
>>> names, but KIP-307 should define a holistic strategy for all processor.
>>> Otherwise, we might up with different strategies or revert what we
>>> decide in this KIP if it's not compatible with KIP-307.
>>>
>>>
>>> -Matthias
>>>
>>>
>>> On 9/12/18 6:28 PM, Guozhang Wang wrote:
 Hello Bill,

 I made a pass over your proposal and here are some questions:

 1. For Joined names, the current proposal is to define the repartition
 topic names as

 * [app-id]-this-[join-name]-repartition

 * [app-id]-other-[join-name]-repartition


 And if [join-name] not specified, stay the same, which is:

 * [previous-processor-name]-repartition for both Stream-Stream (S-S)
>>> join
 and S-T join

 I think it is more natural to rename it to

 * [app-id]-[join-name]-left-repartition

 * [app-id]-[join-name]-right-repartition


 2. I'd suggest to use the name to also define the corresponding processor
 names accordingly, in addition to the repartition topic names. Note that
 for joins, this may be overlapping with KIP-307
 <https://cwiki.apache.org/confluence/display/KAFKA/KIP-307%3A+Allow+to+define+custom+processor+names+with+KStreams+DSL>
 as
 it also has proposals for defining processor names for join operators as
 well.

 3. Could you also specify how this would affect the optimization for
 merging multiple repartition topics?

 4. In the "Compatibility, Deprecation, and Migration Plan" section, could
 you also mention the following scenarios, if any of the upgrade path
>>> would
 be changed:

  a) changing user DSL code: under which scenarios users can now do a
 rolling bounce instead of resetting applications.

  b) upgrading from older version to new version, with all the names
 specified, and with optimization turned on. E.g. say we have the code
 written in 2.1 with all names specified, and now upgrading to 2.2 with
>>> new
 optimizations that may potentially change the repartition topics. Is that
 always safe to do?



 Guozhang


 On Wed, Sep 12, 2018 at 4:52 PM, Bill Bejeck  wrote:

> All, I'd like to start a discussion on KIP-372 for the naming of joins
>>> and
> grouping operations in Kafka Streams.
>
> The KIP page can be found here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 372%3A+Naming+Joins+and+Grouping
>
> I look forward to feedback and comments.
>
> Thanks,
> Bill
>



Re: [VOTE] KIP-358: Migrate Streams API to Duration instead of long ms times

2018-09-13 Thread Matthias J. Sax
Thanks for the KIP.

+1 (binding)


-Matthias

On 9/5/18 8:52 AM, John Roesler wrote:
> I'm a +1 (non-binding)
> 
> On Mon, Sep 3, 2018 at 8:33 AM Nikolay Izhikov  wrote:
> 
>> Dear committers.
>>
>> Please, vote on a KIP.
>>
>> В Пт, 31/08/2018 в 12:05 -0500, John Roesler пишет:
>>> Hi Nikolay,
>>>
>>> You can start a PR any time, but we cannot merge it (and probably won't do
>>> serious reviews) until after the KIP is voted and approved.
>>>
>>> Sometimes people start a PR during discussion just to help provide more
>>> context, but it's not required (and can also be distracting because the
>> KIP
>>> discussion should avoid implementation details).
>>>
>>> Let's wait one more day for any other comments and plan to start the vote
>>> on Monday if there are no other debates.
>>>
>>> Once you start the vote, you have to leave it up for at least 72 hours,
>> and
>>> it requires 3 binding votes to pass. Only Kafka Committers have binding
>>> votes (https://kafka.apache.org/committers).
>>>
>>> Thanks,
>>> -John
>>>
>>> On Fri, Aug 31, 2018 at 11:09 AM Bill Bejeck  wrote:
>>>
 Hi Nikolay,

 Thanks for the clarification.

 -Bill

 On Fri, Aug 31, 2018 at 11:59 AM Nikolay Izhikov 
 wrote:

> Hello, John.
>
> This is my first KIP, so, please, help me with kafka development
>> process.
>
> Should I start to work on PR now? Or should I wait for a "+1" from
> committers?
>
> В Пт, 31/08/2018 в 10:33 -0500, John Roesler пишет:
>> I see. I guess that once we are in the PR-reviewing phase, we'll
>> be in

 a
>> better position to see what else can/should be done, and we can
>> talk
>
> about
>> follow-on work at that time.
>>
>> Thanks for the clarification,
>> -John
>>
>> On Fri, Aug 31, 2018 at 1:19 AM Nikolay Izhikov <
>> nizhi...@apache.org>
>
> wrote:
>>
>>> Hello, Bill
>>>
 In the "Proposed Changes" section, there is "Try to reduce the
>>>
>>> visibility of methods in next tickets" does that mean eventual
>
> deprecation
>>> and removal?
>>>
>>> 1. Some methods will become deprecated. I think they will be
>> removed

 in
>>> the future.
>>> You can find list of deprecated methods in KIP.
>>>
>>> 2. Some internal methods can't be deprecated or hid from the
>> user for
>
> now.
>>> I was trying to say that we should research possibility to reduce
>>> visibility of *internal* methods that are *public* now.
>>> That kind of changes is out of the scope of current KIP, so we
>> have

 to
> do
>>> it in the next tickets.
>>>
>>> I don't expect that internal methods will be removed.
>>>
>>> В Чт, 30/08/2018 в 18:59 -0400, Bill Bejeck пишет:
 Sorry for chiming in late, there was a lot of detail to catch
>> up

 on.

 Overall I'm +1 in the KIP.  But I do have one question about
>> the

 KIP
> in
 regards to Matthias's comments about defining dual use.

 In the "Proposed Changes" section, there is "Try to reduce the
>
> visibility
 of methods in next tickets" does that mean eventual
>> deprecation and
>>>
>>> removal?
 I thought we were aiming to keep the dual use methods? Or does
>> that
>
> imply
 we'll strive for more clear delineation between DSL and
>> internal

 use?

 Thanks,
 Bill



 On Thu, Aug 30, 2018 at 5:59 PM Nikolay Izhikov <

 nizhi...@apache.org
>>
>>>
>>> wrote:

> John, thank you.
>
> I've updated KIP.
>
>
> Dear committers, please take a look and share your opinion.
>
> В Чт, 30/08/2018 в 14:58 -0500, John Roesler пишет:
>> Oh! I missed one minor thing: UnlimitedWindows doesn't
>> need to
>
> set
>>>
>>> grace
>> (it currently does not either).
>>
>> Otherwise, it looks good to me!
>>
>> Thanks so much,
>> -John
>>
>> On Thu, Aug 30, 2018 at 5:30 AM Nikolay Izhikov <
>
> nizhi...@apache.org
>
> wrote:
>>
>>> Hello, John.
>>>
>>> I've updated the KIP according to your comments.
>>> Please, take a look.
>>>
>>> Are we ready to vote now?
>>>
>>> В Ср, 29/08/2018 в 14:51 -0500, John Roesler пишет:
 Hey Nikolay, sorry for the silence. I'm taking another
>> look
>
> at
>>>
>>> the
>
> KIP
 before voting...


1. I think the Window constructor should actually be
>>>
>>> protected. I
>>>
>>> don't
know if we need a constructor that takes Instant,
>> but if
>
> we
>>>
>>> do add

Build failed in Jenkins: kafka-trunk-jdk10 #482

2018-09-13 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Update minimum required Gradle version to 4.7 (#5642)

--
[...truncated 2.22 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 

Re: [DISCUSS] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-13 Thread Ron Dagostino
Hi Rajini.  I'm thinking about how we deal with migration.   For example,
let's say we have an existing 2.0.0 cluster using SASL/OAUTHBEARER and we
want to use this feature.  The desired end state is to have all clients and
brokers migrated to a version that supports the feature (call it 2.x) with
the feature turned on.  We need to document the supported path(s) to this
end state.

Here are a couple of scenarios with implications:

1) *Migrate client to 2.x and turn the feature on but broker is still at
2.0.*  In this case the client is going to get an error when it sends the
SaslHandshakeRequest.  We want to avoid this.  It seems to me the client
needs to know if the broker has been upgraded to the necessary version
before trying to re-authenticate, which makes me believe we need to bump
the API version number in 2.x and the client has to check to see if the max
version the broker supports meets a minimum standard before trying to
re-authenticate.  Do you agree?
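To make this concrete, a minimal sketch of such a client-side gate (the class, the map of advertised versions, and the minimum-version constant are all illustrative assumptions on my part, not actual Kafka internals):

```java
import java.util.Map;

public class ReauthGate {
    // Hypothetical minimum SaslHandshake API version at which a broker
    // understands re-authentication (illustrative value, not from the KIP).
    static final short MIN_HANDSHAKE_VERSION_FOR_REAUTH = 2;

    /**
     * Decide whether this client may attempt re-authentication against a
     * broker, given the max API versions the broker advertised in its
     * ApiVersionsResponse. On older brokers the client silently skips
     * re-authentication instead of sending a request that would fail.
     */
    public static boolean canReauthenticate(Map<String, Short> brokerMaxVersions,
                                            boolean featureEnabled) {
        if (!featureEnabled) {
            return false;
        }
        Short maxHandshake = brokerMaxVersions.get("SaslHandshake");
        return maxHandshake != null && maxHandshake >= MIN_HANDSHAKE_VERSION_FOR_REAUTH;
    }
}
```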

2) *Migrate broker to 2.x and turn the feature on but client is still at
2.0*.  In this case the broker is going to end up killing connections
periodically.   Again, we want to avoid this.  It's tempting to say "don't
do this" but I wonder if we can require installations to upgrade all
clients before turning the feature on at the brokers.  Certainly we can say
"don't do this" for inter-broker clients -- in other words, we can say that
all brokers in a cluster should be upgraded before the feature is turned on
for any one of them -- but I don't know about our ability to say it for
non-broker clients.

Now we consider the cases where both the brokers and the clients have been
upgraded to 2.x.  When and where should the feature be turned on?  The
inter-broker case is interesting because I don't think we can require
installations to restart every broker with a new config where the feature
is turned on at the same time.  Do you agree that we cannot require this
and therefore must support installations restarting brokers with a new
config at whatever pace they need -- which may be quite slow relative to
token lifetimes?  Assuming you do agree, then there is going to be the case
where some brokers are going to have the feature turned on and some clients
(definitely inter-broker clients at a minimum) are going to have the
feature turned off.

The above discussion assumes a single config on the broker side that turns
on both the inter-broker client re-authentication feature as well as the
server-side expired-connection-kill feature, but now I'm thinking we need
the ability to turn these features on independently, plus maybe we need a
way to monitor the number of "active, expired" connections to the server so
that operators can be sure that all clients have been upgraded/enabled
before turning on the server-side expired-connection-kill feature.

So the migration plan would be as follows:

1) Upgrade all brokers to 2.x.
2) After all brokers are upgraded, turn on re-authentication for all
brokers at whatever rate is desired -- just eventually, at some point, get
the client-side feature turned on for all brokers so that inter-broker
connections are re-authenticating.
3) In parallel with (1) and (2) above, upgrade clients to 2.x and turn
their re-authentication feature on.  Clients will check the API version and
only re-authenticate to a broker that has also been upgraded (note that the
ability of a broker to respond to a re-authentication cannot be turned off
-- it is always on beginning with version 2.x, and it just sits there doing
nothing if it isn't exercised by an enabled client)
4) After (1), (2), and (3) are complete, check the 2.x broker metrics to
confirm that there are no "active, expired" connections.  Once you are
satisfied that (1), (2), and (3) are indeed complete you can enable the
server-side expired-connection-kill feature on each broker via a restart at
whatever pace is desired.

Comments?

Ron


On Wed, Sep 12, 2018 at 4:48 PM Ron Dagostino  wrote:

> Ok, I am tempted to just say we go with the low-level approach since it is
> the quickest and seems to meet the clear requirements. We can always adjust
> later if we get to clarity on other requirements or we decide we need to
> approach it differently for whatever reason.  But in the meantime, before
> fully committing to this decision, I would appreciate another perspective
> if someone has one.
>
> Ron
>
> On Wed, Sep 12, 2018 at 3:15 PM Rajini Sivaram 
> wrote:
>
>> Hi Ron,
>>
>> Yes, I would leave out retries from this KIP for now. In the future, if
>> there is a requirement for supporting retries, we can consider it. I think
>> we can support retries with either approach if we needed to, but it would
>> be better to do it along with other changes required to support
>> authentication servers that are not highly available.
>>
>> For maintainability, I am biased, so it will be good to get the
>> perspective
>> of others in the community :-)
>>
>> On Wed, Sep 12, 2018 at 7:47 PM, 

Re: [DISCUSS] KIP-358: Migrate Streams API to Duration instead of long ms times

2018-09-13 Thread Matthias J. Sax
No need to start a new voting thread :)

For the KIP update, I think it should be:

> ReadOnlyWindowStore<K, V> {
> //Deprecated methods.
> WindowStoreIterator<V> fetch(K key, long timeFrom, long timeTo);
> KeyValueIterator<Windowed<K>, V> fetch(K from, K to, long timeFrom, long timeTo);
> KeyValueIterator<Windowed<K>, V> fetchAll(long timeFrom, long timeTo);
>
> //New methods.
> WindowStoreIterator<V> fetch(K key, Instant from, Duration duration) throws IllegalArgumentException;
> KeyValueIterator<Windowed<K>, V> fetch(K from, K to, Instant from, Duration duration) throws IllegalArgumentException;
> KeyValueIterator<Windowed<K>, V> fetchAll(Instant from, Duration duration) throws IllegalArgumentException;
> }
>
>
> WindowStore<K, V> {
> //New methods.
> WindowStoreIterator<V> fetch(K key, long timeFrom, long timeTo);
> KeyValueIterator<Windowed<K>, V> fetch(K from, K to, long timeFrom, long timeTo);
> KeyValueIterator<Windowed<K>, V> fetchAll(long timeFrom, long timeTo);
> }

Ie, long-versions are replaced with Instant/Duration in
`ReadOnlyWindowStore`, and `long` method are added in `WindowStore` --
this way, we effectively "move" the long-versions from
`ReadOnlyWindowStore` to `WindowStore`.
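For illustration, the new Instant/Duration overloads would ultimately reduce their arguments to the same epoch-millis longs the existing methods use; a minimal sketch of the validation they might perform, assuming helper names that are mine rather than the KIP's:

```java
import java.time.Duration;
import java.time.Instant;

public class ApiUtils {
    /**
     * Convert an Instant to epoch millis, rejecting values that do not fit
     * the long-based store API (mirrors the IllegalArgumentException the
     * new overloads declare).
     */
    public static long validateMillis(Instant instant, String name) {
        if (instant == null) {
            throw new IllegalArgumentException(name + " must not be null");
        }
        try {
            return instant.toEpochMilli();
        } catch (ArithmeticException e) {
            throw new IllegalArgumentException(name + " is outside the representable millis range", e);
        }
    }

    /** Window end = start + duration, validated the same way. */
    public static long endMillis(Instant from, Duration duration) {
        validateMillis(from, "from");
        if (duration == null || duration.isNegative()) {
            throw new IllegalArgumentException("duration must be non-null and non-negative");
        }
        return validateMillis(from.plus(duration), "from + duration");
    }
}
```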

-Matthias

On 9/13/18 8:08 AM, Nikolay Izhikov wrote:
> Hello, Matthias.
> 
>> I like the KIP as-is. Feel free to start a VOTE thread.
> 
> 
> I already started one [1].
> Can you vote there, or should I create a new one?
> 
> 
> I've updated KIP.
> This has been changed:
> 
> ReadOnlyWindowStore<K, V> {
> //Deprecated methods.
> WindowStoreIterator<V> fetch(K key, long timeFrom, long timeTo);
> KeyValueIterator<Windowed<K>, V> fetch(K from, K to, long timeFrom, long timeTo);
> KeyValueIterator<Windowed<K>, V> fetchAll(long timeFrom, long timeTo);
> }
> 
> WindowStore<K, V> {
> //New methods.
> WindowStoreIterator<V> fetch(K key, Instant from, Duration duration) throws IllegalArgumentException;
> KeyValueIterator<Windowed<K>, V> fetch(K from, K to, Instant from, Duration duration) throws IllegalArgumentException;
> KeyValueIterator<Windowed<K>, V> fetchAll(Instant from, Duration duration) throws IllegalArgumentException;
> }
> 
> [1] 
> https://lists.apache.org/thread.html/e976352e7e42d459091ee66ac790b6a0de7064eac0c57760d50c983b@%3Cdev.kafka.apache.org%3E
> 
> On Wed, 12/09/2018 at 15:46 -0700, Matthias J. Sax wrote:
>> Great!
>>
>> I did not double check the ReadOnlySessionStore interface before, and
>> just assumed it would take a timestamp, too. My bad.
>>
>> Please update the KIP for ReadOnlyWindowStore and WindowStore.
>>
>> I like the KIP as-is. Feel free to start a VOTE thread. Even if there
>> might be minor follow up comments, we can vote in parallel.
>>
>>
>> -Matthias
>>
>> On 9/12/18 1:06 PM, John Roesler wrote:
>>> Hi Nikolay,
>>>
>>> Yes, the changes we discussed for ReadOnlyXxxStore and XxxStore should be
>>> in this KIP.
>>>
>>> And you're right, it seems like ReadOnlySessionStore is not necessary to
>>> touch, since it doesn't reference any `long` timestamps.
>>>
>>> Thanks,
>>> -John
>>>
>>> On Wed, Sep 12, 2018 at 4:36 AM Nikolay Izhikov  wrote:
>>>
 Hello, Matthias.

> His proposal is, to deprecate existing methods on `ReadOnlyWindowStore`
> and `ReadOnlySessionStore` and add them to `WindowStore` and `SessionStore`.
> Does this make sense?

 You both are experienced Kafka developers, so yes, it does make sense to
 me :).
 Do we want to make this change in KIP-358, or does it require another KIP?

> Btw: the KIP misses `ReadOnlySessionStore` atm.

 Sorry, but I don't understand you.
 As far as I can see, there are only 2 methods in `ReadOnlySessionStore`.
 Which method should be migrated to Duration?


 https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlySessionStore.java

 On Tue, 11/09/2018 at 09:21 -0700, Matthias J. Sax wrote:
> I talked to John offline about his last suggestions, that I originally
> did not fully understand.
>
> His proposal is, to deprecate existing methods on `ReadOnlyWindowStore`
> and `ReadOnlySessionStore` and add them to `WindowStore` and
> `SessionStore` (note, all singular -- not to be confused with classes
> names plural).
>
> Btw: the KIP misses `ReadOnlySessionStore` atm.
>
> The argument is, that the `ReadOnlyXxxStore` interfaces are only exposed
> via Interactive Queries feature and for this part, using `long` is
> undesired. However, for a `Processor` that reads/writes stores on the
> hot code path, we would like to avoid the object creation overhead and
> stay with `long`. Note, that a `Processor` would use the "read-write"
> interfaces and thus, we can add the more efficient read methods using
> `long` there.
>
> Does this make sense?
>
>
> -Matthias
>
> On 9/11/18 12:20 AM, Nikolay Izhikov wrote:
>> Hello, Guozhang, Bill.
>>
>>> 1) I'd suggest keeping 

Re: [DISCUSS] KIP-358: Migrate Streams API to Duration instead of long ms times

2018-09-13 Thread Nikolay Izhikov
Fixed.

Thanks for the help!

Please take a look and vote.

On Thu, 13/09/2018 at 08:40 -0700, Matthias J. Sax wrote:
> No need to start a new voting thread :)
> 
> For the KIP update, I think it should be:
> 
> > ReadOnlyWindowStore<K, V> {
> > //Deprecated methods.
> > WindowStoreIterator<V> fetch(K key, long timeFrom, long timeTo);
> > KeyValueIterator<Windowed<K>, V> fetch(K from, K to, long timeFrom, long timeTo);
> > KeyValueIterator<Windowed<K>, V> fetchAll(long timeFrom, long timeTo);
> >
> > //New methods.
> > WindowStoreIterator<V> fetch(K key, Instant from, Duration duration) throws IllegalArgumentException;
> > KeyValueIterator<Windowed<K>, V> fetch(K from, K to, Instant from, Duration duration) throws IllegalArgumentException;
> > KeyValueIterator<Windowed<K>, V> fetchAll(Instant from, Duration duration) throws IllegalArgumentException;
> > }
> >
> >
> > WindowStore<K, V> {
> > //New methods.
> > WindowStoreIterator<V> fetch(K key, long timeFrom, long timeTo);
> > KeyValueIterator<Windowed<K>, V> fetch(K from, K to, long timeFrom, long timeTo);
> > KeyValueIterator<Windowed<K>, V> fetchAll(long timeFrom, long timeTo);
> > }
> 
> Ie, long-versions are replaced with Instant/Duration in
> `ReadOnlyWindowStore`, and `long` method are added in `WindowStore` --
> this way, we effectively "move" the long-versions from
> `ReadOnlyWindowStore` to `WindowStore`.
> 
> -Matthias
> 
> On 9/13/18 8:08 AM, Nikolay Izhikov wrote:
> > Hello, Matthias.
> > 
> > > I like the KIP as-is. Feel free to start a VOTE thread.
> > 
> > 
> > I already started one [1].
> > Can you vote there, or should I create a new one?
> > 
> > 
> > I've updated KIP.
> > This has been changed:
> > 
> > ReadOnlyWindowStore<K, V> {
> > //Deprecated methods.
> > WindowStoreIterator<V> fetch(K key, long timeFrom, long timeTo);
> > KeyValueIterator<Windowed<K>, V> fetch(K from, K to, long timeFrom, long timeTo);
> > KeyValueIterator<Windowed<K>, V> fetchAll(long timeFrom, long timeTo);
> > }
> > 
> > WindowStore<K, V> {
> > //New methods.
> > WindowStoreIterator<V> fetch(K key, Instant from, Duration duration) throws IllegalArgumentException;
> > KeyValueIterator<Windowed<K>, V> fetch(K from, K to, Instant from, Duration duration) throws IllegalArgumentException;
> > KeyValueIterator<Windowed<K>, V> fetchAll(Instant from, Duration duration) throws IllegalArgumentException;
> > }
> > 
> > [1] 
> > https://lists.apache.org/thread.html/e976352e7e42d459091ee66ac790b6a0de7064eac0c57760d50c983b@%3Cdev.kafka.apache.org%3E
> > 
> > On Wed, 12/09/2018 at 15:46 -0700, Matthias J. Sax wrote:
> > > Great!
> > > 
> > > I did not double check the ReadOnlySessionStore interface before, and
> > > just assumed it would take a timestamp, too. My bad.
> > > 
> > > Please update the KIP for ReadOnlyWindowStore and WindowStore.
> > > 
> > > I like the KIP as-is. Feel free to start a VOTE thread. Even if there
> > > might be minor follow up comments, we can vote in parallel.
> > > 
> > > 
> > > -Matthias
> > > 
> > > On 9/12/18 1:06 PM, John Roesler wrote:
> > > > Hi Nikolay,
> > > > 
> > > > Yes, the changes we discussed for ReadOnlyXxxStore and XxxStore should 
> > > > be
> > > > in this KIP.
> > > > 
> > > > And you're right, it seems like ReadOnlySessionStore is not necessary to
> > > > touch, since it doesn't reference any `long` timestamps.
> > > > 
> > > > Thanks,
> > > > -John
> > > > 
> > > > On Wed, Sep 12, 2018 at 4:36 AM Nikolay Izhikov  
> > > > wrote:
> > > > 
> > > > > Hello, Matthias.
> > > > > 
> > > > > > His proposal is, to deprecate existing methods on `ReadOnlyWindowStore`
> > > > > > and `ReadOnlySessionStore` and add them to `WindowStore` and `SessionStore`.
> > > > > > Does this make sense?
> > > > > 
> > > > > You both are experienced Kafka developers, so yes, it does make sense to
> > > > > me :).
> > > > > Do we want to make this change in KIP-358, or does it require another KIP?
> > > > > 
> > > > > > Btw: the KIP misses `ReadOnlySessionStore` atm.
> > > > > 
> > > > > Sorry, but I don't understand you.
> > > > > As far as I can see, there are only 2 methods in `ReadOnlySessionStore`.
> > > > > Which method should be migrated to Duration?
> > > > > 
> > > > > 
> > > > > https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlySessionStore.java
> > > > > 
> > > > > On Tue, 11/09/2018 at 09:21 -0700, Matthias J. Sax wrote:
> > > > > > I talked to John offline about his last suggestions, that I 
> > > > > > originally
> > > > > > did not fully understand.
> > > > > > 
> > > > > > His proposal is, to deprecate existing methods on 
> > > > > > `ReadOnlyWindowStore`
> > > > > > and `ReadOnlySessionStore` and add them to `WindowStore` and
> > > > > > `SessionStore` (note, all singular -- not to be confused with 
> > > > > > classes
> > > > > > names plural).
> > > > > > 
> > > > > > Btw: the KIP misses `ReadOnlySessionStore` atm.
> > > > > > 
> > > > > > The argument is, that the 

[jira] [Resolved] (KAFKA-6926) Reduce NPath exceptions in Connect

2018-09-13 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6926.

   Resolution: Fixed
Fix Version/s: 2.1.0

> Reduce NPath exceptions in Connect
> --
>
> Key: KAFKA-6926
> URL: https://issues.apache.org/jira/browse/KAFKA-6926
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Major
> Fix For: 2.1.0
>
>
> The [recent upgrade of Checkstyle and move to Java 
> 8|https://github.com/apache/kafka/pull/5046/files/1a83b7d04bd3cecbb68d033211b1bcdfbe085d47#diff-6869a8c771257f000c3cefb045e9d289]
>  has caused some existing code to not pass the NPath rule. Look at the 
> classes to see what might need to be changed to remove the class from the 
> suppression rule:
> AbstractStatus|ConnectRecord|ConnectSchema|DistributedHerder|FileStreamSourceTask|JsonConverter|KafkaConfigBackingStore



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk10 #483

2018-09-13 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-349 Priorities for Source Topics

2018-09-13 Thread Matthias J. Sax
I think, we can keep incremental fetch request with the design idea I
described in my previous email. Of course, brokers would need to be
updated and understand topic priorities, ie, we would also need to
change the protocol to send topic priority information to the brokers.

Thus, if somebody only updates clients and uses topic priorities, it
would need to fall back to full fetch requests until brokers are
updated, too.

Thoughts?


-Matthias

On 9/10/18 11:08 AM, Colin McCabe wrote:
> Oh, also, I am -1 on disabling incremental fetch requests when the 
> prioritization feature is used.  Users often have performance problems that 
> are difficult to understand when they use various combinations of features.  
> Of course as implementors we "know" what the right combination of features is 
> to get good performance.  But this is not intuitive to our users.  If we 
> support a feature, it should not cause non-obvious performance regressions.
> 
> best,
> Colin
> 
> 
> On Mon, Sep 10, 2018, at 11:05, Colin McCabe wrote:
>> On Thu, Sep 6, 2018, at 05:24, Jan Filipiak wrote:
>>>
>>> On 05.09.2018 17:18, Colin McCabe wrote:
 Hi all,

 I agree that DISCUSS is more appropriate than VOTE at this point, since I 
 don't remember the last discussion coming to a definite conclusion.

 I guess my concern is that this will add complexity and memory consumption 
 on the server side.  In the case of incremental fetch requests, we will 
 have to track at least two extra bytes per partition, to know what the 
 priority of each partition is within each active fetch session.

 It would be nice to hear more about the use-cases for this feature.  I 
 think Gwen asked about this earlier, and I don't remember reading a 
 response.  The fact that we're now talking about Samza interfaces is a bit 
 of a red flag.  After all, Samza didn't need partition priorities to do 
 what it did.  You can do a lot with muting partitions and using 
 appropriate threading in your code.
>>> to show a usecase, I linked 353, especially since the threading model is 
>>> pretty fixed there.
>>>
>>> No clue why Samza should be a red flag. They handle purely on the 
>>> consumer side. Which I think is reasonable. I would not try to implement 
>>> any broker side support for this, if I were to do it. Just don't support 
>>> incremental fetch then.
>>> In the end if you have broker side support, you would need to ship the 
>>> logic of the message chooser to the broker. I don't think that will 
>>> allow for the flexibility I had in mind on purely consumer based 
>>> implementation.
>>
>> The reason why Samza is a red flag here is that if Samza can do a thing, 
>> any Kafka client can do that thing too, and no special broker support is 
>> needed.  In that case, perhaps what we need is better documentation, 
>> rather than any new features or code in Kafka itself.  Also, it's not 
>> clear why we should duplicate Samza in Kafka.  Users who want to use 
>> Samza can still simply use Samza, or one of the various opinionated 
>> frameworks like Kafka Streams, KSQL, etc. etc.
>>
>> In general I think we should not continue this discussion until we have 
>> some examples of concrete and specific use-cases.
>>
>> best,
>> Colin
>>
>>
>>>
>>>

 For example, you can hand data from a partition off to a work queue with a 
 fixed size, which is handled by a separate service thread.  If the queue 
 gets full, you can mute the partition until some of the buffered data is 
 processed.  Kafka Streams uses a similar approach to avoid reading 
 partition data that isn't immediately needed.
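A minimal, broker-independent sketch of the bounded-queue-plus-mute pattern described above (the class is a stand-in for the wiring one would put around KafkaConsumer's pause()/resume(); all names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

public class BoundedPartitionBuffer {
    private final int maxBuffered;
    private final Queue<String> workQueue = new ArrayDeque<>();
    private final Set<Integer> pausedPartitions = new HashSet<>();

    public BoundedPartitionBuffer(int maxBuffered) {
        this.maxBuffered = maxBuffered;
    }

    /** Buffer a record; mute its partition once the queue is full. */
    public void onRecord(int partition, String record) {
        workQueue.add(record);
        if (workQueue.size() >= maxBuffered) {
            pausedPartitions.add(partition);  // here one would call consumer.pause(...)
        }
    }

    /** Drain one record; unmute partitions once there is headroom again. */
    public String drainOne() {
        String record = workQueue.poll();
        if (workQueue.size() < maxBuffered) {
            pausedPartitions.clear();         // here one would call consumer.resume(...)
        }
        return record;
    }

    public boolean isPaused(int partition) {
        return pausedPartitions.contains(partition);
    }
}
```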

 There might be some use-cases that need priorities eventually, but I'm 
 concerned that we're jumping the gun by trying to implement this before we 
 know what they are.

 best,
 Colin


 On Wed, Sep 5, 2018, at 01:06, Jan Filipiak wrote:
> On 05.09.2018 02:38, n...@afshartous.com wrote:
>>> On Sep 4, 2018, at 4:20 PM, Jan Filipiak  
>>> wrote:
>>>
>>> what I meant is litterally this interface:
>>>
>>> https://samza.apache.org/learn/documentation/0.7.0/api/javadocs/org/apache/samza/system/chooser/MessageChooser.html
>>>  
>>> 
>> Hi Jan,
>>
>> Thanks for the reply and I have a few questions.  This Samza doc
>>
>> 
>> https://samza.apache.org/learn/documentation/0.14/container/streams.html 
>> 
>>
>> indicates that the chooser is set via configuration.  Are you suggesting 
>> adding a new configuration for Kafka ?  Seems like we could also have a 
>> method on KafkaConsumer
>>
>>   public void register(MessageChooser messageChooser)
> I don't have strong opinions regarding 

Re: [VOTE] KIP-349 Priorities for Source Topics

2018-09-13 Thread Matthias J. Sax
That sound correct, Colin.

At runtime (we just merged an improvement this week, cf KIP-353), Kafka
Streams synchronizes different topics based on record timestamps.
Records are buffered internally before processed and we `pause()`
partitions for which the number of records in the buffer exceeds a
configurable threshold (`buffered.records.per.partition` parameter).

Timestamp based synchronization (ie, message choosing) is essential for
DSL semantics. A custom MessageChooser might break DSL operator semantics.

Having said this, there might be use cases for which timestamp-based
synchronization is not desired. However, I would assume that this would
be a Processor API level feature, not a DSL level feature.

Hence, offering something similar to MessageChooser interface at
Processor API level that is leveraged at runtime might make sense. For
this case, the DSL would plug-in its timestamp based synchronization
strategy. Take this with a grain of salt though. I have not thought this
through and it might actually not be possible to express the needed
timestamp synchronization with a MessageChooser interface.
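For illustration only, here is a chooser modeled loosely on Samza's MessageChooser idea, picking buffered records from the highest-priority topic first (a sketch under assumed names, not an existing Kafka API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

/** Picks the buffered record from the highest-priority topic first. */
public class TopicPriorityChooser {
    private final Map<String, Integer> topicPriority; // lower value = higher priority
    private final PriorityQueue<String[]> buffered;   // entries are [topic, value]

    public TopicPriorityChooser(Map<String, Integer> topicPriority) {
        this.topicPriority = new HashMap<>(topicPriority);
        this.buffered = new PriorityQueue<>(
            (a, b) -> Integer.compare(priorityOf(a[0]), priorityOf(b[0])));
    }

    private int priorityOf(String topic) {
        // Topics without an explicit priority sort last.
        return topicPriority.getOrDefault(topic, Integer.MAX_VALUE);
    }

    /** Called as records arrive from any subscribed topic. */
    public void update(String topic, String value) {
        buffered.add(new String[] {topic, value});
    }

    /** Returns the next record to process, or null if nothing is buffered. */
    public String choose() {
        String[] next = buffered.poll();
        return next == null ? null : next[1];
    }
}
```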


-Matthias

On 9/10/18 10:54 AM, Colin McCabe wrote:
> Hmm.  My understanding is that streams doesn't need anything like this since 
> streams pauses topics when it doesn't need more data from them.  (Matthias, 
> can you confirm?)
> 
> best,
> Colin
> 
> 
> On Mon, Aug 20, 2018, at 06:01, Thomas Becker wrote:
>> I agree with Jan. A strategy interface for choosing processing order is 
>> nice, and would hopefully be a step towards getting this in streams.
>>
>> -Tommy
>>
>> On Mon, 2018-08-20 at 12:52 +0200, Jan Filipiak wrote:
>>
>> On 20.08.2018 00:19, Matthias J. Sax wrote:
>>
>> @Nick: A KIP is only accepted if it got 3 binding votes, ie, votes from
>>
>> committers. If you close the vote before that, the KIP would not be
>>
>> accepted. Note that committers need to pay attention to a lot of KIPs
>>
>> and it can take a while until people can look into it. Thanks for your
>>
>> understanding.
>>
>>
>> @Jan: Can you give a little bit more context on your concerns? It's
>>
>> unclear why you mean atm.
>>
>> Just saying that we should peek at the Samza approach, it's a much more
>>
>> powerful abstraction. We can ship a default MessageChooser
>>
>> that looks at the topics priority.
>>
>> @Adam: anyone can vote :)
>>
>>
>>
>>
>> -Matthias
>>
>>
>> On 8/19/18 9:58 AM, Adam Bellemare wrote:
>>
>> While I am not sure if I can or can’t vote, my question re: Jan’s 
>> comment is, “should we be implementing it as Samza does?”
>>
>>
>> I am not familiar with the drawbacks of the current approach vs how 
>> samza does it.
>>
>>
>> On Aug 18, 2018, at 5:06 PM, 
>> n...@afshartous.com wrote:
>>
>>
>>
>> I only saw one vote on KIP-349, just checking to see if anyone else 
>> would like to vote before closing this out.
>>
>> --
>>
>>   Nick
>>
>>
>>
>> On Aug 13, 2018, at 9:19 PM, 
>> n...@afshartous.com wrote:
>>
>>
>>
>> Hi All,
>>
>>
>> Calling for a vote on KIP-349
>>
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-349%3A+Priorities+for+Source+Topics
>>
>>
>> --
>>
>>  Nick
>>
>>
>>
>>
>>
>>
>>
>>
>>


