[GitHub] [kafka-site] showuon commented on pull request #284: MINOR: Change the arrow direction based on the view state is expanded or not

2020-08-10 Thread GitBox


showuon commented on pull request #284:
URL: https://github.com/apache/kafka-site/pull/284#issuecomment-671740187


   @guozhangwang , I think this PR is good to merge unless you have other 
opinions. Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Jenkins build is back to normal : kafka-2.5-jdk8 #176

2020-08-10 Thread Apache Jenkins Server
See 




[GitHub] [kafka-site] guozhangwang commented on a change in pull request #287: fixes for intro page and various docs page headings

2020-08-10 Thread GitBox


guozhangwang commented on a change in pull request #287:
URL: https://github.com/apache/kafka-site/pull/287#discussion_r468191760



##
File path: intro.html
##
@@ -20,7 +20,203 @@ Introduction
   
 
 
-
+

Review comment:
   Why do we have to copy-paste now? The old way is more convenient since we 
do not need to copy-paste every time we update the newest released doc.









[GitHub] [kafka-site] guozhangwang merged pull request #287: fixes for intro page and various docs page headings

2020-08-10 Thread GitBox


guozhangwang merged pull request #287:
URL: https://github.com/apache/kafka-site/pull/287


   







Build failed in Jenkins: kafka-trunk-jdk14 #350

2020-08-10 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Improve checks for CogroupedStreamAggregateBuilder (#9141)


--
[...truncated 3.22 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task 

Build failed in Jenkins: kafka-trunk-jdk8 #4774

2020-08-10 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Improve checks for CogroupedStreamAggregateBuilder (#9141)


--
[...truncated 3.20 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
PASSED

> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task 

Build failed in Jenkins: kafka-2.4-jdk8 #238

2020-08-10 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Ensure a single version of scala-library is used (#9155)


--
[...truncated 2.77 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED


Re: [VOTE] KIP-612: Ability to Limit Connection Creation Rate on Brokers

2020-08-10 Thread Anna Povzner
Hi All,

I wanted to let everyone know that we would like to make the following
changes to the KIP:

   1. Expose connection acceptance rate metrics (broker-wide and per-listener)
      and per-listener average throttle time metrics for better observability
      and debugging.

   2. KIP-599 introduced a new implementation of MeasurableStat that
      implements a token bucket, which improves rate throttling for bursty
      workloads (KAFKA-10162). We would like to use this same mechanism for
      connection accept rate throttling.
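For readers unfamiliar with the mechanism, the token-bucket idea mentioned in
item 2 can be sketched roughly as follows. This is a hypothetical illustration
only, not Kafka's actual MeasurableStat implementation; the class and method
names here are invented:

```java
// Minimal token-bucket sketch of the rate-limiting idea described above.
// Hypothetical names; not the real Kafka API.
public class TokenBucket {
    private final double ratePerSec; // sustained rate: tokens added per second
    private final double burst;      // bucket capacity: max tokens held
    private double tokens;           // current token count
    private long lastNanos;          // time of the last refill

    public TokenBucket(double ratePerSec, double burst, long nowNanos) {
        this.ratePerSec = ratePerSec;
        this.burst = burst;
        this.tokens = burst;         // start with a full bucket
        this.lastNanos = nowNanos;
    }

    /** Refill tokens for the elapsed time, then try to take one token
     *  (e.g. one token per accepted connection). */
    public synchronized boolean tryAcquire(long nowNanos) {
        double elapsedSec = (nowNanos - lastNanos) / 1_000_000_000.0;
        tokens = Math.min(burst, tokens + elapsedSec * ratePerSec);
        lastNanos = nowNanos;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;             // within the rate limit: accept
        }
        return false;                // bucket empty: throttle
    }
}
```

A burst of up to `burst` connections is admitted immediately, after which
acceptance settles to `ratePerSec`; that tolerance for short bursts is why a
token bucket handles bursty workloads better than a plain windowed rate.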


I updated the KIP to reflect these changes.

Let me know if you have any concerns.

Thanks,

Anna


On Thu, May 21, 2020 at 5:42 PM Anna Povzner  wrote:

> The vote for KIP-612 has passed with 3 binding and 3 non-binding +1s, and
> no objections.
>
>
> Thanks everyone for reviews and feedback,
>
> Anna
>
> On Tue, May 19, 2020 at 2:41 AM Rajini Sivaram 
> wrote:
>
>> +1 (binding)
>>
>> Thanks for the KIP, Anna!
>>
>> Regards,
>>
>> Rajini
>>
>>
>> On Tue, May 19, 2020 at 9:32 AM Alexandre Dupriez <
>> alexandre.dupr...@gmail.com> wrote:
>>
>> > +1 (non-binding)
>> >
>> > Thank you for the KIP!
>> >
>> >
>> > On Tue, May 19, 2020 at 7:57 AM David Jacot  wrote:
>> > >
>> > > +1 (non-binding)
>> > >
>> > > Thanks for the KIP, Anna!
>> > >
>> > > On Tue, May 19, 2020 at 7:12 AM Satish Duggana <
>> satish.dugg...@gmail.com
>> > >
>> > > wrote:
>> > >
>> > > > +1 (non-binding)
>> > > > Thanks Anna for the nice feature to control the connection creation
>> > rate
>> > > > from the clients.
>> > > >
>> > > > On Tue, May 19, 2020 at 8:16 AM Gwen Shapira 
>> > wrote:
>> > > >
>> > > > > +1 (binding)
>> > > > >
>> > > > > Thank you for driving this, Anna
>> > > > >
>> > > > > On Mon, May 18, 2020 at 4:55 PM Anna Povzner 
>> > wrote:
>> > > > >
>> > > > > > Hi All,
>> > > > > >
>> > > > > > I would like to start the vote on KIP-612: Ability to limit
>> > connection
>> > > > > > creation rate on brokers.
>> > > > > >
>> > > > > > For reference, here is the KIP wiki:
>> > > > > >
>> > > > > >
>> > > > >
>> > > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-612%3A+Ability+to+Limit+Connection+Creation+Rate+on+Brokers
>> > > > > >
>> > > > > > And discussion thread:
>> > > > > >
>> > > > > >
>> > > > >
>> > > >
>> >
>> https://lists.apache.org/thread.html/r61162661fa307d0bc5c8326818bf223a689c49e1c828c9928ee26969%40%3Cdev.kafka.apache.org%3E
>> > > > > >
>> > > > > > Thanks,
>> > > > > >
>> > > > > > Anna
>> > > > > >
>> > > > >
>> > > > >
>> > > > > --
>> > > > > Gwen Shapira
>> > > > > Engineering Manager | Confluent
>> > > > > 650.450.2760 | @gwenshap
>> > > > > Follow us: Twitter | blog
>> > > > >
>> > > >
>> >
>>
>


Jenkins build is back to normal : kafka-2.6-jdk8 #108

2020-08-10 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk11 #1698

2020-08-10 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk8 #4773

2020-08-10 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk14 #349

2020-08-10 Thread Apache Jenkins Server
See 




Re: [ANNOUNCE] New Kafka PMC Member: John Roesler

2020-08-10 Thread Konstantine Karantasis
Congrats, John!

-Konstantine

On Mon, Aug 10, 2020 at 4:53 PM Gwen Shapira  wrote:

> Congratulations, John!
>
> On Mon, Aug 10, 2020, 1:11 PM Jun Rao  wrote:
>
> > Hi, Everyone,
> >
> > John Roesler has been a Kafka committer since Nov. 5, 2019. He has
> remained
> > active in the community since becoming a committer. It's my pleasure to
> > announce that John is now a member of Kafka PMC.
> >
> > Congratulations John!
> >
> > Jun
> > on behalf of Apache Kafka PMC
> >
>


Re: [ANNOUNCE] New Kafka PMC Member: John Roesler

2020-08-10 Thread Gwen Shapira
Congratulations, John!

On Mon, Aug 10, 2020, 1:11 PM Jun Rao  wrote:

> Hi, Everyone,
>
> John Roesler has been a Kafka committer since Nov. 5, 2019. He has remained
> active in the community since becoming a committer. It's my pleasure to
> announce that John is now a member of Kafka PMC.
>
> Congratulations John!
>
> Jun
> on behalf of Apache Kafka PMC
>


Re: [ANNOUNCE] New Kafka PMC Member: John Roesler

2020-08-10 Thread Boyang Chen
Congrats Mr. John!


On Mon, Aug 10, 2020 at 3:02 PM Adam Bellemare 
wrote:

> Congratulations John! You have been an excellent help to me and many
> others. I am pleased to see this!
>
> > On Aug 10, 2020, at 5:54 PM, Bill Bejeck  wrote:
> >
> > Congrats!
> >
> >> On Mon, Aug 10, 2020 at 4:52 PM Guozhang Wang 
> wrote:
> >>
> >> Congratulations!
> >>
> >>> On Mon, Aug 10, 2020 at 1:11 PM Jun Rao  wrote:
> >>>
> >>> Hi, Everyone,
> >>>
> >>> John Roesler has been a Kafka committer since Nov. 5, 2019. He has
> >> remained
> >>> active in the community since becoming a committer. It's my pleasure to
> >>> announce that John is now a member of Kafka PMC.
> >>>
> >>> Congratulations John!
> >>>
> >>> Jun
> >>> on behalf of Apache Kafka PMC
> >>>
> >>
> >>
> >> --
> >> -- Guozhang
> >>
>


Build failed in Jenkins: kafka-trunk-jdk11 #1697

2020-08-10 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Ensure a single version of scala-library is used (#9155)


--
[...truncated 3.22 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #4772

2020-08-10 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Ensure a single version of scala-library is used (#9155)


--
[...truncated 3.19 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED


Re: [ANNOUNCE] New Kafka PMC Member: John Roesler

2020-08-10 Thread Adam Bellemare
Congratulations John! You have been an excellent help to me and many others. I 
am pleased to see this!

> On Aug 10, 2020, at 5:54 PM, Bill Bejeck  wrote:
> 
> Congrats!
> 
>> On Mon, Aug 10, 2020 at 4:52 PM Guozhang Wang  wrote:
>> 
>> Congratulations!
>> 
>>> On Mon, Aug 10, 2020 at 1:11 PM Jun Rao  wrote:
>>> 
>>> Hi, Everyone,
>>> 
>>> John Roesler has been a Kafka committer since Nov. 5, 2019. He has
>> remained
>>> active in the community since becoming a committer. It's my pleasure to
>>> announce that John is now a member of Kafka PMC.
>>> 
>>> Congratulations John!
>>> 
>>> Jun
>>> on behalf of Apache Kafka PMC
>>> 
>> 
>> 
>> --
>> -- Guozhang
>> 


Re: [ANNOUNCE] New Kafka PMC Member: John Roesler

2020-08-10 Thread Bill Bejeck
Congrats!

On Mon, Aug 10, 2020 at 4:52 PM Guozhang Wang  wrote:

> Congratulations!
>
> On Mon, Aug 10, 2020 at 1:11 PM Jun Rao  wrote:
>
> > Hi, Everyone,
> >
> > John Roesler has been a Kafka committer since Nov. 5, 2019. He has
> remained
> > active in the community since becoming a committer. It's my pleasure to
> > announce that John is now a member of Kafka PMC.
> >
> > Congratulations John!
> >
> > Jun
> > on behalf of Apache Kafka PMC
> >
>
>
> --
> -- Guozhang
>


[GitHub] [kafka-site] vvcephei merged pull request #288: Update John's profile

2020-08-10 Thread GitBox


vvcephei merged pull request #288:
URL: https://github.com/apache/kafka-site/pull/288


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: [ANNOUNCE] New Kafka PMC Member: John Roesler

2020-08-10 Thread Guozhang Wang
Congratulations!

On Mon, Aug 10, 2020 at 1:11 PM Jun Rao  wrote:

> Hi, Everyone,
>
> John Roesler has been a Kafka committer since Nov. 5, 2019. He has remained
> active in the community since becoming a committer. It's my pleasure to
> announce that John is now a member of Kafka PMC.
>
> Congratulations John!
>
> Jun
> on behalf of Apache Kafka PMC
>


-- 
-- Guozhang


[GitHub] [kafka-site] guozhangwang commented on pull request #288: Update John's profile

2020-08-10 Thread GitBox


guozhangwang commented on pull request #288:
URL: https://github.com/apache/kafka-site/pull/288#issuecomment-671583692


   LGTM







[GitHub] [kafka-site] vvcephei opened a new pull request #288: Update John's profile

2020-08-10 Thread GitBox


vvcephei opened a new pull request #288:
URL: https://github.com/apache/kafka-site/pull/288


   







Build failed in Jenkins: kafka-2.5-jdk8 #175

2020-08-10 Thread Apache Jenkins Server
See 


Changes:

[vvcephei] Bump version to 2.5.1

[vvcephei] MINOR: Update 2.5 branch version to 2.5.2-SNAPSHOT


--
[...truncated 5.92 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED


[ANNOUNCE] New Kafka PMC Member: John Roesler

2020-08-10 Thread Jun Rao
Hi, Everyone,

John Roesler has been a Kafka committer since Nov. 5, 2019. He has remained
active in the community since becoming a committer. It's my pleasure to
announce that John is now a member of Kafka PMC.

Congratulations John!

Jun
on behalf of Apache Kafka PMC


[jira] [Created] (KAFKA-10384) Separate converters from generated messages

2020-08-10 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-10384:


 Summary: Separate converters from generated messages
 Key: KAFKA-10384
 URL: https://issues.apache.org/jira/browse/KAFKA-10384
 Project: Kafka
  Issue Type: Bug
Reporter: Colin McCabe
Assignee: Colin McCabe


Separate the JSON converter classes from the message classes, so that the 
clients module can be used without Jackson on the CLASSPATH.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10383) KTable Join on Foreign key is opinionated

2020-08-10 Thread Marco Lotz (Jira)
Marco Lotz created KAFKA-10383:
--

 Summary: KTable Join on Foreign key is opinionated 
 Key: KAFKA-10383
 URL: https://issues.apache.org/jira/browse/KAFKA-10383
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 2.4.1
Reporter: Marco Lotz


*Status Quo:*
The current implementation of 
[KIP-213|https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable]
 for the foreign-key join between two KTables is _opinionated_ in terms of 
storage layer.

Independently of the Materialization method provided in the method argument, it 
generates an intermediary RocksDB state store. Thus, even when the 
Materialization method provided is "in memory", it will use RocksDB 
under-the-hood for this state-store.

 

*Related problems:*
 * IT tests: Having an implicit materialization method for the state store 
affects tests that use foreign-key state stores. [On Windows-based 
systems|https://stackoverflow.com/questions/50602512/failed-to-delete-the-state-directory-in-ide-for-kafka-stream-application],
 which suffer from the RocksDB filesystem-removal problem, the usual way to 
avoid the bug is to use in-memory state stores (rather than swallowing 
exceptions). Because the RocksDB store is forcibly created, any IT test must 
resort to the manual filesystem-deletion and exception-swallowing hack.
 * Short-lived streams: Sometimes KTables are short-lived in a way that 
neither persistent storage nor changelogs are desired. The current 
implementation prevents this.

*Suggestion:*

One possible solution is to use the same materialization method that is 
provided in the method argument when creating the intermediary foreign-key 
state store. If the provided Materialized is in-memory and without a 
changelog, the same applies to the intermediary state store.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka-site] rhauch commented on pull request #285: MINOR: Fix the wrong doc path and missing zk/docker button in quickstart page

2020-08-10 Thread GitBox


rhauch commented on pull request #285:
URL: https://github.com/apache/kafka-site/pull/285#issuecomment-671539747


   https://kafka.apache.org/quickstart appears to be working again after 
merging #286.







[GitHub] [kafka-site] rhauch merged pull request #286: MINOR: Fix renamed and reformatted quickstart, broken since 2.6.0 release

2020-08-10 Thread GitBox


rhauch merged pull request #286:
URL: https://github.com/apache/kafka-site/pull/286


   







Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-08-10 Thread Colin McCabe
Hi Jose,

That's a good point that I hadn't considered.  It's probably worth having a 
separate leader-change message, as you mentioned.

Hi Unmesh,

Thanks, I'll take a look.

best,
Colin


On Fri, Aug 7, 2020, at 11:56, Jose Garcia Sancio wrote:
> Hi Unmesh,
> 
> Very cool prototype!
> 
> Hi Colin,
> 
> The KIP proposes a record called IsrChange which includes the
> partition, topic, isr, leader and leader epoch. During normal
> operation ISR changes do not result in leader changes. Similarly,
> leader changes do not necessarily involve ISR changes. The controller
> implementation that uses ZK modeled them together because
> 1. All of this information is stored in one znode.
> 2. ZK's optimistic lock requires that you specify the new value completely
> 3. The change to that znode was being performed by both the controller
> and the leader.
> 
> None of these reasons are true in KIP-500. Have we considered having
> two different records? For example
> 
> 1. IsrChange record which includes topic, partition, isr
> 2. LeaderChange record which includes topic, partition, leader and leader 
> epoch.
> 
> I suspect that making this change will also require changing the
> message AlterIsrRequest introduced in KIP-497: Add inter-broker API to
> alter ISR.
> 
> Thanks
> -Jose
>
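The two-record split proposed above could be sketched with plain Java records. The field sets here are illustrative guesses based on the fields named in the email, not the actual KIP-631 metadata schema (which is defined via Kafka's generated-message JSON files):

```java
import java.util.List;

// Hypothetical record shapes for the split discussed above: ISR changes and
// leadership changes carried in separate metadata records.
record IsrChange(String topic, int partition, List<Integer> isr) {}

record LeaderChange(String topic, int partition, int leader, int leaderEpoch) {}
```

Separating the two would mean an ISR shrink or expand can be recorded without restating leadership, and a leader election without restating the ISR.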


Re: [VOTE] KIP-635: GetOffsetShell: support for multiple topics and consumer configuration override

2020-08-10 Thread David Jacot
Hi Daniel,

I was not aware of that PR. At minimum, I would add `--bootstrap-server`
to the list in the KIP for completeness. Regarding the implementation,
I would leave a comment in that PR asking if they plan to continue it. If
not,
we could do it as part of your PR directly.

Cheers,
David

On Mon, Aug 10, 2020 at 10:49 AM Dániel Urbán  wrote:

> Hi everyone,
>
> Just a reminder, please vote if you are interested in this KIP being
> implemented.
>
> Thanks,
> Daniel
>
> Dániel Urbán  ezt írta (időpont: 2020. júl. 31., P,
> 9:01):
>
> > Hi David,
> >
> > There is another PR linked on KAFKA-8507, which is still open:
> > https://github.com/apache/kafka/pull/8123
> > Wasn't sure if it will go in, and wanted to avoid conflicts. Do you think
> > I should do the switch to '--bootstrap-server' anyway?
> >
> > Thanks,
> > Daniel
> >
> > David Jacot  ezt írta (időpont: 2020. júl. 30., Cs,
> > 17:52):
> >
> >> Hi Daniel,
> >>
> >> Thanks for the KIP.
> >>
> >> It seems that we have forgotten to include this tool in KIP-499.
> >> KAFKA-8507 is resolved, but this tool still uses the deprecated
> >> "--broker-list". I suggest to
> >> include "--bootstrap-server"
> >> in your public interfaces as well and fix this omission during the
> >> implementation.
> >>
> >> +1 (non-binding)
> >>
> >> Thanks,
> >> David
> >>
> >> On Thu, Jul 30, 2020 at 1:52 PM Kamal Chandraprakash <
> >> kamal.chandraprak...@gmail.com> wrote:
> >>
> >> > +1 (non-binding), thanks for the KIP!
> >> >
> >> > On Thu, Jul 30, 2020 at 3:31 PM Manikumar 
> >> > wrote:
> >> >
> >> > > +1 (binding)
> >> > >
> >> > > Thanks for the KIP!
> >> > >
> >> > >
> >> > >
> >> > > On Thu, Jul 30, 2020 at 3:07 PM Dániel Urbán  >
> >> > > wrote:
> >> > >
> >> > > > Hi everyone,
> >> > > >
> >> > > > If you are interested in this KIP, please do not forget to vote.
> >> > > >
> >> > > > Thanks,
> >> > > > Daniel
> >> > > >
> >> > > > Viktor Somogyi-Vass  ezt írta (időpont:
> >> 2020.
> >> > > > júl.
> >> > > > 28., K, 16:06):
> >> > > >
> >> > > > > +1 from me (non-binding), thanks for the KIP.
> >> > > > >
> >> > > > > On Mon, Jul 27, 2020 at 10:02 AM Dániel Urbán <
> >> urb.dani...@gmail.com
> >> > >
> >> > > > > wrote:
> >> > > > >
> >> > > > > > Hello everyone,
> >> > > > > >
> >> > > > > > I'd like to start a vote on KIP-635. The KIP enhances the
> >> > > > GetOffsetShell
> >> > > > > > tool by enabling querying multiple topic-partitions, adding
> new
> >> > > > filtering
> >> > > > > > options, and adding a config override option.
> >> > > > > >
> >> > > > > >
> >> > > > >
> >> > > >
> >> > >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-635%3A+GetOffsetShell%3A+support+for+multiple+topics+and+consumer+configuration+override
> >> > > > > >
> >> > > > > > The original discussion thread was named "[DISCUSS] KIP-308:
> >> > > > > > GetOffsetShell: new KafkaConsumer API, support for multiple
> >> topics,
> >> > > > > > minimize the number of requests to server". The id had to be
> >> > changed
> >> > > as
> >> > > > > > there was a collision, and the KIP also had to be renamed, as
> >> some
> >> > of
> >> > > > its
> >> > > > > > motivations were outdated.
> >> > > > > >
> >> > > > > > Thanks,
> >> > > > > > Daniel
> >> > > > > >
> >> > > > >
> >> > > >
> >> > >
> >> >
> >>
> >
>


[jira] [Resolved] (KAFKA-9659) Kafka Streams / Consumer configured for static membership fails on "fatal exception: group.instance.id gets fenced"

2020-08-10 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-9659.
--
Fix Version/s: 2.6.0
   Resolution: Fixed

> Kafka Streams / Consumer configured for static membership fails on "fatal 
> exception: group.instance.id gets fenced"
> ---
>
> Key: KAFKA-9659
> URL: https://issues.apache.org/jira/browse/KAFKA-9659
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.5.0
>Reporter: Rohan Desai
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 2.6.0
>
> Attachments: ksql-1.logs
>
>
> I'm running a KSQL query, which underneath is built into a Kafka Streams 
> application. The application has been running without issue for a few days, 
> until today, when all the streams threads exited with: 
>  
>  
> {{[ERROR] 2020-03-05 00:57:58,776 
> [_confluent-ksql-pksqlc-xm6g1query_CSAS_RATINGS_WITH_USER_AVERAGE_5-39e8046a-b6e6-44fd-8d6d-37cff78649bf-StreamThread-2]
>  org.apache.kafka.clients.consumer.internals.AbstractCoordinator handle - 
> [Consumer instanceId=ksql-1-2, 
> clientId=_confluent-ksql-pksqlc-xm6g1query_CSAS_RATINGS_WITH_USER_AVERAGE_5-39e8046a-b6e6-44fd-8d6d-37cff78649bf-StreamThread-2-consumer,
>  groupId=_confluent-ksql-pksqlc-xm6g1query_CSAS_RATINGS_WITH_USER_AVERAGE_5] 
> Received fatal exception: group.instance.id gets fenced}}
> {{[ERROR] 2020-03-05 00:57:58,776 
> [_confluent-ksql-pksqlc-xm6g1query_CSAS_RATINGS_WITH_USER_AVERAGE_5-39e8046a-b6e6-44fd-8d6d-37cff78649bf-StreamThread-2]
>  org.apache.kafka.clients.consumer.internals.AbstractCoordinator onFailure - 
> [Consumer instanceId=ksql-1-2, 
> clientId=_confluent-ksql-pksqlc-xm6g1query_CSAS_RATINGS_WITH_USER_AVERAGE_5-39e8046a-b6e6-44fd-8d6d-37cff78649bf-StreamThread-2-consumer,
>  groupId=_confluent-ksql-pksqlc-xm6g1query_CSAS_RATINGS_WITH_USER_AVERAGE_5] 
> Caught fenced group.instance.id Optional[ksql-1-2] error in heartbeat thread}}
> {{[ERROR] 2020-03-05 00:57:58,776 
> [_confluent-ksql-pksqlc-xm6g1query_CSAS_RATINGS_WITH_USER_AVERAGE_5-39e8046a-b6e6-44fd-8d6d-37cff78649bf-StreamThread-2]
>  org.apache.kafka.streams.processor.internals.StreamThread run - 
> stream-thread 
> [_confluent-ksql-pksqlc-xm6g1query_CSAS_RATINGS_WITH_USER_AVERAGE_5-39e8046a-b6e6-44fd-8d6d-37cff78649bf-StreamThread-2]
>  Encountered the following unexpected Kafka exception during processing, this 
> usually indicate Streams internal errors:}}
>  \{{ org.apache.kafka.common.errors.FencedInstanceIdException: The broker 
> rejected this static consumer since another consumer with the same 
> group.instance.id has registered with a different member.id.}}{{[INFO] 
> 2020-03-05 00:57:58,776 
> [_confluent-ksql-pksqlc-xm6g1query_CSAS_RATINGS_WITH_USER_AVERAGE_5-39e8046a-b6e6-44fd-8d6d-37cff78649bf-StreamThread-2]
>  org.apache.kafka.streams.processor.internals.StreamThread setState - 
> stream-thread 
> [_confluent-ksql-pksqlc-xm6g1query_CSAS_RATINGS_WITH_USER_AVERAGE_5-39e8046a-b6e6-44fd-8d6d-37cff78649bf-StreamThread-2]
>  State transition from RUNNING to PENDING_SHUTDOWN}}
>  
> I've attached the KSQL and Kafka Streams logs to this ticket. Here's a 
> summary for one of the streams threads (instance id `ksql-1-2`):
>  
> Around 00:56:36 the coordinator fails over from b11 to b2:
>  
> {{[INFO] 2020-03-05 00:56:36,258 
> [_confluent-ksql-pksqlc-xm6g1query_CTAS_RATINGS_BY_USER_0-c1df9747-f353-47f1-82fd-30b97c20d038-StreamThread-2]
>  org.apache.kafka.clients.consumer.internals.AbstractCoordinator handle - 
> [Consumer instanceId=ksql-1-2, 
> clientId=_confluent-ksql-pksqlc-xm6g1query_CTAS_RATINGS_BY_USER_0-c1df9747-f353-47f1-82fd-30b97c20d038-StreamThread-2-consumer,
>  groupId=_confluent-ksql-pksqlc-xm6g1query_CTAS_RATINGS_BY_USER_0] Attempt to 
> heartbeat failed since coordinator 
> b11-pkc-lzxjz.us-west-2.aws.devel.cpdev.cloud:9092 (id: 2147483636 rack: 
> null) is either not started or not valid.}}
>  {{ [INFO] 2020-03-05 00:56:36,258 
> [_confluent-ksql-pksqlc-xm6g1query_CTAS_RATINGS_BY_USER_0-c1df9747-f353-47f1-82fd-30b97c20d038-StreamThread-2]
>  org.apache.kafka.clients.consumer.internals.AbstractCoordinator 
> markCoordinatorUnknown - [Consumer instanceId=ksql-1-2, 
> clientId=_confluent-ksql-pksqlc-xm6g1query_CTAS_RATINGS_BY_USER_0-c1df9747-f353-47f1-82fd-30b97c20d038-StreamThread-2-consumer,
>  groupId=_confluent-ksql-pksqlc-xm6g1query_CTAS_RATINGS_BY_USER_0] Group 
> coordinator b11-pkc-lzxjz.us-west-2.aws.devel.cpdev.cloud:9092 (id: 
> 2147483636 rack: null) is unavailable or invalid, will attempt rediscovery}}
>  {{ [INFO] 2020-03-05 00:56:36,270 
> [_confluent-ksql-pksqlc-xm6g1query_CTAS_RATINGS_BY_USER_0-c1df9747-f353-47f1-82fd-30b97c20d038-StreamThread-2]
>  

Re: [DISCUSS] KIP-649: Dynamic Client Configuration

2020-08-10 Thread Ryan Dielhenn
Hi Jason,

I hope you're having a good start to your week! I made these changes to the
KIP and they are reflected in the PR.

These changes include

1. Scoping dynamic configs by user-principal to address security issues.
2. Allowing default overrides only at the user level.
3. Changed enable.dynamic.config to default to false. This is so that the
user knows what applications support the capability since they had to set
the config on the client.
4. Updated the compatibility, deprecation, and migration plan to address
the issue of adding support for dynamic configs over time.

I am still working on changing the behavior of JoinGroup to avoid a
rebalance when updating the timeout.

Best,
Ryan Dielhenn





On Wed, Aug 5, 2020 at 5:23 PM Jason Gustafson  wrote:

> Hi Ryan,
>
> Thanks for the proposal. Just a few quick questions:
>
> 1. I wonder if we need to bother with `enable.dynamic.config`, especially
> if the default is going to be true anyway. I think users who don't want to
> use this capability can just not set dynamic configs. The only case I can
> see an explicit opt-out being useful is when users are trying to avoid
> getting affected by dynamic defaults. And on that note, is there a strong
> case for supporting default overrides? Many client configs are tied closely
> to application behavior, so it feels a bit dangerous to give users the
> ability to override the configuration for all applications.
>
> 2. Tying dynamic configurations to clientId has some downsides. It is
> common for users to use a different clientId for every application in a
> consumer group so that it is easier to tie group members back to where
> the client is running. This makes setting configurations at an application
> level cumbersome. The alternative is to use the default, but that means
> hitting /all/ applications which I think is probably not a good idea. A
> convenient alternative for consumers would be to use group.id, but we
> don't
> have anything similar for the producer. I am wondering if we need to give
> the clients a separate config label of some kind so that there is a
> convenient way to group configurations. For example `config.group`. Note
> that this would be another way to opt into dynamic config support.
>
> 3. I'm trying to understand the contract between brokers and clients to
> support dynamic configurations. I imagine that once this is available,
> users will have a hard time telling which applications support the
> capability and which do not. Also, we would likely add new dynamic config
> support over time which would make this even harder since we cannot
> retroactively change clients to add support for new dynamic configs. I'm
> wondering if there is anything we can do to make it easier for users to
> tell which dynamic configs are available for each application.
>
> 4. In the case of `session.timeout.ms`, even if the config is updated, the
> group will need to be rebalanced for it to take effect. This is because the
> session timeout is sent to the group coordinator in the JoinGroup request.
> I'm wondering if we need to change the JoinGroup behavior so that it can be
> used to update the session timeout without triggering a rebalance.
>
> Thanks,
> Jason
>
>
>
>
> On Mon, Aug 3, 2020 at 3:10 PM Ryan Dielhenn 
> wrote:
>
> > Hi David,
> >
> > Here are some additional thoughts...
> >
> > > 1. Once dynamic configs have been loaded and resolved, how can a client
> > > know what values are selected?
> >
> > A copy of the original user-provided configs is kept by the client.
> > Currently these are used to revert to the user-provided config if a
> dynamic
> > config is deleted. However, they can also be used to distinguish between
> > dynamic and user-provided configs.
> >
> > > 3. Are there other configs we'd like to allow the broker to push up to
> > the
> > > clients? Did we consider making this mechanism generic so the broker
> > could
> > > push any consumer/producer config up to the clients via dynamic
> configs?
> >
> > Rephrasing my answer to this question:
> >
> > The mechanism for sending and altering configs is rather generic.
> However,
> > the client-side handling of these configs is not. The reason for this is
> > that configs affect the behavior of the clients in specific ways, so the
> > client must reconfigure itself in a specific way for each different
> config.
> >
> > An example of this is that when session.timeout.ms is dynamically
> > configured, the consumer must rejoin the group by sending a
> > JoinGroupRequest. This is because the session timeout is sent in the
> > initial JoinGroupRequest to the coordinator and stored with the rest of
> the
> > group member's metadata. To reconfigure the client, the value in the
> > coordinator must also be changed. This does not need to be done for
> > heartbeat.interval.ms.
> >
> >
> > On 2020/08/03 17:47:19, David Arthur  wrote:
> > > Hey Ryan, 
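The revert behaviour described in the quoted answer (keep the original user-provided configs and layer dynamic overrides on top, so deleting an override falls back to the user's value) can be sketched as follows. The class and method names are invented for illustration and are not part of the Kafka client API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the layering described in the discussion: the
// client retains the user-provided configs and applies dynamic overrides
// on top of them.
class LayeredConfig {
    private final Map<String, String> userProvided;
    private final Map<String, String> dynamic = new HashMap<>();

    LayeredConfig(Map<String, String> userProvided) {
        this.userProvided = new HashMap<>(userProvided);
    }

    void applyDynamic(String key, String value) { dynamic.put(key, value); }

    // Deleting a dynamic override reverts to the user-provided value.
    void deleteDynamic(String key) { dynamic.remove(key); }

    String get(String key) {
        return dynamic.getOrDefault(key, userProvided.get(key));
    }
}
```

A lookup consults the dynamic layer first and only falls back to the user-provided map, which is exactly why the client must keep an untouched copy of the original configs.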

Someone should remove nonexistent versions 2.0.2, 2.1.2 from https://kafka.apache.org/cve-list

2020-08-10 Thread Franklin Davis
https://kafka.apache.org/cve-list APACHE KAFKA SECURITY VULNERABILITIES 
incorrectly lists fixed versions 2.0.2 and 2.1.2, but those don't exist (e.g. 
in https://archive.apache.org/dist/kafka/). I'm not qualified to modify 
anything -- just letting you know in case someone can fix it.

--Franklin


[jira] [Created] (KAFKA-10382) MockProducer is not ThreadSafe, ideally it should be as the implementation it mocks is

2020-08-10 Thread Antony Stubbs (Jira)
Antony Stubbs created KAFKA-10382:
-

 Summary: MockProducer is not ThreadSafe, ideally it should be as 
the implementation it mocks is
 Key: KAFKA-10382
 URL: https://issues.apache.org/jira/browse/KAFKA-10382
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.6.0
Reporter: Antony Stubbs


In testing my project, I discovered that the MockProducer is not thread safe, 
contrary to what I had assumed. It doesn't use thread-safe collections for its 
underlying stores, and only _some_ of its methods are synchronised.

 

As performance isn’t an issue for this, I would propose simply synchronising 
all public methods in the class, as some already are.

 

In my project, send is synchronised but commitTransaction isn't. This was 
causing strange collection manipulation and messages going missing. My only 
workable solution was simply to synchronise on the MockProducer instance 
before calling commit.

 

See my workaround: 
https://github.com/astubbs/async-consumer/pull/13/files#diff-8e93aa2a2003be7436f94956cf809b2eR558
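The workaround pattern (holding the producer's own monitor around the unsynchronised call) can be sketched with a stand-in class; `FakeProducer` below is invented for illustration and is not the real MockProducer API, but it mirrors the reported situation of a synchronised send next to an unsynchronised commit:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the reported issue: "send" is synchronised, "commitTransaction"
// is not, and both mutate non-thread-safe lists.
class FakeProducer {
    private final List<String> pending = new ArrayList<>();
    private final List<String> committed = new ArrayList<>();

    synchronized void send(String record) { pending.add(record); }

    // Deliberately not synchronised, mirroring the reported gap.
    void commitTransaction() {
        committed.addAll(pending);
        pending.clear();
    }

    synchronized int committedCount() { return committed.size(); }
}

public class SyncWorkaroundDemo {
    public static void main(String[] args) throws InterruptedException {
        FakeProducer producer = new FakeProducer();
        Runnable work = () -> {
            for (int i = 0; i < 1000; i++) {
                producer.send("r" + i);
                // The workaround: take the producer's monitor around the
                // unsynchronised commit, serialising it with send().
                synchronized (producer) {
                    producer.commitTransaction();
                }
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // Every record is moved to "committed" exactly once: 2000 total.
        System.out.println(producer.committedCount());
    }
}
```

Synchronising all public methods inside MockProducer itself, as the report proposes, removes the need for callers to know which methods require this external locking.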

 

PR available: https://github.com/apache/kafka/pull/9154



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


GitBox emails for kafka-site going to dev@kafka.apache.org

2020-08-10 Thread Andrew Otto
Is this intentional?  If not, can someone disable this feature?  dev@ is
getting emails for every change to kafka-site on github.

Thank you!

On Mon, Aug 10, 2020 at 11:31 AM GitBox  wrote:

>
> rhauch opened a new pull request #286:
> URL: https://github.com/apache/kafka-site/pull/286
>
>
>The AK code contains a `quickstart.html` file, so that when 2.6.0 was
> released the newer `quickstart-docker.html` and `quickstart-zookeeper.html`
> were not included in the `26` directory, breaking the
> https://kafka.apache.org/quickstart page.
>
>This recovers those files into the `asf-site` branch's `26` directory.
>
>
> 
> This is an automated message from the Apache Git Service.
> To respond to the message, please log on to GitHub and use the
> URL above to go to the specific comment.
>
> For queries about this service, please contact Infrastructure at:
> us...@infra.apache.org
>
>
>


Re: [jira] [Created] (KAFKA-10314) KafkaStorageException on reassignment when offline log directories exist

2020-08-10 Thread Noa Resare
I guess it might be time to nag a bit about this, as the contributing code 
changes instructions suggest :) I opened a pull request (with test) 6 days ago 
that resolves this issue for me. I would be delighted to have a review or two 
of this tiny change.

cheers
noa

> On 27 Jul 2020, at 16:46, Noa Resare (Jira)  wrote:
> 
> Noa Resare created KAFKA-10314:
> --
> 
> Summary: KafkaStorageException on reassignment when offline log 
> directories exist
> Key: KAFKA-10314
> URL: https://issues.apache.org/jira/browse/KAFKA-10314
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.5.0
>Reporter: Noa Resare
> 
> 
> If a reassignment of a partition is triggered to a broker with an offline 
> directory, the new broker will fail to follow, instead raising a 
> KafkaStorageException which causes the reassignment to stall indefinitely. 
> The error message we see is the following:
> 
> {{[2020-07-23 13:11:08,727] ERROR [Broker id=1] Skipped the become-follower 
> state change with correlation id 14 from controller 1 epoch 1 for partition 
> t2-0 (last update controller epoch 1) with leader 2 since the replica for the 
> partition is offline due to disk error 
> org.apache.kafka.common.errors.KafkaStorageException: Can not create log for 
> t2-0 because log directories /tmp/kafka/d1 are offline (state.change.logger)}}
> 
> It seems to me that unless the partition in question already existed on the 
> offline log partition, a better behaviour would simply be to assign the 
> partition to one of the available log directories.
> 
> The conditional in 
> [LogManager.scala:769|https://github.com/apache/kafka/blob/11f75691b87fcecc8b29bfd25c7067e054e408ea/core/src/main/scala/kafka/log/LogManager.scala#L769]
>  was introduced to prevent the issue in 
> [KAFKA-4763|https://issues.apache.org/jira/browse/KAFKA-4763] where 
> partitions in offline logdirs would be re-created in an online directory as 
> soon as a LeaderAndISR message gets processed. However, the semantics of 
> isNew seems different in LogManager (the replica is new on this broker) 
> compared to when isNew is set in 
> [KafkaController.scala|https://github.com/apache/kafka/blob/11f75691b87fcecc8b29bfd25c7067e054e408ea/core/src/main/scala/kafka/controller/KafkaController.scala#L879]
>  (where it seems to refer to whether the topic partition in itself is new, 
> all followers gets {{isNew=false}})
> 
> 
> 
> --
> This message was sent by Atlassian Jira
> (v8.3.4#803005)



[GitHub] [kafka-site] scott-confluent opened a new pull request #287: fixes for intro page and various docs page headings

2020-08-10 Thread GitBox


scott-confluent opened a new pull request #287:
URL: https://github.com/apache/kafka-site/pull/287


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-10381) Add broker to a cluster not rebalancing partitions

2020-08-10 Thread Yogesh BG (Jira)
Yogesh BG created KAFKA-10381:
-

 Summary: Add broker to a cluster not rebalancing partitions
 Key: KAFKA-10381
 URL: https://issues.apache.org/jira/browse/KAFKA-10381
 Project: Kafka
  Issue Type: Bug
Reporter: Yogesh BG


Hi

I have a 3-node cluster and a topic with one partition. When a node is deleted 
and another node is added, the topic goes into an unknown state and it is not 
possible to write/read anything; the exception below is seen:

 
{code:java}
[2020-08-10 00:00:00,108] WARN [ReplicaManager broker=1004] Leader 1004 failed 
to record follower 1005's position 0 since the replica is not recognized to be 
one of the assigned replicas 1003,1004 for partition A-0. Empty records will be 
returned for this partition. (kafka.server.ReplicaManager)
[2020-08-10 00:00:00,108] WARN [ReplicaManager broker=1004] Leader 1004 failed 
to record follower 1005's position 0 since the replica is not recognized to be 
one of the assigned replicas 1003,1004 for partition C-0. Empty records will be 
returned for this partition. (kafka.server.ReplicaManager)
[2020-08-10 00:00:00,108] WARN [ReplicaManager broker=1004] Leader 1004 failed 
to record follower 1005's position 0 since the replica is not recognized to be 
one of the assigned replicas 1002,1004 for partition A-0. Empty records will be 
returned for this partition. (kafka.server.ReplicaManager)
[2020-08-10 00:00:00,108] WARN [ReplicaManager broker=1004] Leader 1004 failed 
to record follower 1005's position 0 since the replica is not recognized to be 
one of the assigned replicas 1002,1004 for partition B-0. Empty records will be 
returned for this partition. (kafka.server.ReplicaManager)
[2020-08-10 00:00:00,108] WARN [ReplicaManager broker=1004] Leader 1004 failed 
to record follower 1005's position 0 since the replica is not recognized to be 
one of the assigned replicas 1003,1004 for partition A-0. Empty records will be 
returned for this partition. (kafka.server.ReplicaManager)
[2020-08-10 00:00:00,108] WARN [ReplicaManager broker=1004] Leader 1004 failed 
to record follower 1005's position 0 since the replica is not recognized to be 
one of the assigned replicas 1003,1004 for partition A-0. Empty records will be 
returned for this partition. (kafka.server.ReplicaManager)
{code}





[GitHub] [kafka-site] rhauch closed pull request #285: MINOR: Fix the wrong doc path and missing zk/docker button in quickstart page

2020-08-10 Thread GitBox


rhauch closed pull request #285:
URL: https://github.com/apache/kafka-site/pull/285


   







[GitHub] [kafka-site] rhauch commented on pull request #285: MINOR: Fix the wrong doc path and missing zk/docker button in quickstart page

2020-08-10 Thread GitBox


rhauch commented on pull request #285:
URL: https://github.com/apache/kafka-site/pull/285#issuecomment-671426816


   Thanks for identifying the problem and proposing a fix, @showuon. However, I 
don't think referencing the quickstarts in the `25` directory is what we want -- 
instead, we want to add the quickstarts to the `26` directory.
   
   I've created #286 to address the issue more consistently with the existing 
patterns.







[GitHub] [kafka-site] rhauch commented on pull request #285: MINOR: Fix the wrong doc path and missing zk/docker button in quickstart page

2020-08-10 Thread GitBox


rhauch commented on pull request #285:
URL: https://github.com/apache/kafka-site/pull/285#issuecomment-671426983


   Close without merging.







[GitHub] [kafka-site] rhauch opened a new pull request #286: Fix renamed and reformatted quickstart, broken since 2.6.0 release

2020-08-10 Thread GitBox


rhauch opened a new pull request #286:
URL: https://github.com/apache/kafka-site/pull/286


   The AK code contains a `quickstart.html` file, so that when 2.6.0 was 
released the newer `quickstart-docker.html` and `quickstart-zookeeper.html` 
were not included in the `26` directory, breaking the 
https://kafka.apache.org/quickstart page.
   
   This recovers those files into the `asf-site` branch's `26` directory.







Re: New Website Layout

2020-08-10 Thread Ben Stopford
Good spot. Thanks.

On Thu, 6 Aug 2020 at 18:59, Ben Weintraub  wrote:

> Plus one to Tom's request - the ability to easily generate links to
> specific config options is extremely valuable.
>
> On Thu, Aug 6, 2020 at 10:09 AM Tom Bentley  wrote:
>
> > Hi Ben,
> >
> > The documentation for the configs (broker, producer etc) used to function
> > as links as well as anchors, which made the url fragments more
> > discoverable, because you could click on the link and then copy+paste the
> > browser URL:
> >
> > <a id="batch.size" href="#batch.size">batch.size</a>
> >
> > What seems to have happened with the new layout is that the <a> tags are
> > empty, and no longer enclose the config name,
> >
> > <a id="batch.size"></a>
> >   batch.size
> >
> > meaning you can't click on the link to copy and paste the URL. Could the
> > old behaviour be restored?
> >
> > Thanks,
> >
> > Tom
> >
> > On Wed, Aug 5, 2020 at 12:43 PM Luke Chen  wrote:
> >
> > > When entering streams doc, it'll always show:
> > > *You're viewing documentation for an older version of Kafka - check out
> > our
> > > current documentation here.*
> > >
> > >
> > >
> > > On Wed, Aug 5, 2020 at 6:44 PM Ben Stopford  wrote:
> > >
> > > > Thanks for the PR and feedback Michael. Appreciated.
> > > >
> > > > On Wed, 5 Aug 2020 at 10:49, Mickael Maison <
> mickael.mai...@gmail.com>
> > > > wrote:
> > > >
> > > > > Thank you, it looks great!
> > > > >
> > > > > I found a couple of small issues:
> > > > > - It's not rendering correctly with http.
> > > > > - It's printing "called" to the console. I opened a PR to remove
> the
> > > > > console.log() call: https://github.com/apache/kafka-site/pull/278
> > > > >
> > > > > On Wed, Aug 5, 2020 at 9:45 AM Ben Stopford 
> > wrote:
> > > > > >
> > > > > > The new website layout has gone live as you may have seen. There
> > are
> > > a
> > > > > > couple of rendering issues in the streams developer guide that
> > we're
> > > > > > getting addressed. If anyone spots anything else could they
> please
> > > > reply
> > > > > to
> > > > > > this thread.
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > Ben
> > > > > >
> > > > > > On Fri, 26 Jun 2020 at 11:48, Ben Stopford 
> > wrote:
> > > > > >
> > > > > > > Hey folks
> > > > > > >
> > > > > > > We've made some updates to the website's look and feel. There
> is
> > a
> > > > > staged
> > > > > > > version in the link below.
> > > > > > >
> > > > > > > https://ec2-13-57-18-236.us-west-1.compute.amazonaws.com/
> > > > > > > username: kafka
> > > > > > > password: streaming
> > > > > > >
> > > > > > > Comments welcomed.
> > > > > > >
> > > > > > > Ben
> > > > > > >
> > > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > >
> > > > Ben Stopford
> > > >
> > > > Lead Technologist, Office of the CTO
> > > >
> > > > 
> > > >
> > >
> >
>


-- 

Ben Stopford

Lead Technologist, Office of the CTO
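For illustration, the self-linking behaviour Tom describes could be restored with a small post-processing step over the generated pages. This is a hedged sketch assuming the input shape quoted in the thread (an empty `<a id="...">` anchor followed by the bare config name), not the site's actual build code:

```python
import re

# Turn an empty anchor followed by the config name back into a
# self-referencing link, e.g.
#   <a id="batch.size"></a>\nbatch.size
# becomes
#   <a id="batch.size" href="#batch.size">batch.size</a>
_ANCHOR = re.compile(r'<a id="([^"]+)"></a>\s*\1')


def restore_config_links(html: str) -> str:
    return _ANCHOR.sub(r'<a id="\1" href="#\1">\1</a>', html)
```

Applied to the second snippet quoted above, this yields the first (clickable) form again, so the URL fragment can be copied from the browser after clicking the config name.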




Re: [VOTE] KIP-651 - Support PEM format for SSL certificates and private key

2020-08-10 Thread Rajini Sivaram
The vote has passed with 4 binding votes (Gwen, Manikumar, Harsha, me) and
3 non-binding votes (Ron, Maulin, David). Thanks to everyone who voted!

I will update the KIP and submit a PR.

Regards,

Rajini


On Sun, Aug 9, 2020 at 3:37 AM Harsha Ch  wrote:

> +1 binding.
>
> Thanks,
> Harsha
>
> On Sat, Aug 8, 2020 at 2:07 AM Manikumar 
> wrote:
>
> > +1 (binding)
> >
> > Thanks for the KIP.
> >
> >
> > On Fri, Aug 7, 2020 at 12:56 AM David Jacot  wrote:
> >
> > > Supporting PEM is really nice. Thanks, Rajini.
> > >
> > > +1 (non-binding)
> > >
> > > On Thu, Aug 6, 2020 at 9:18 PM Gwen Shapira  wrote:
> > >
> > > > +1 (binding)
> > > > Thank you for driving this, Rajini
> > > >
> > > > On Thu, Aug 6, 2020 at 10:43 AM Rajini Sivaram <
> > rajinisiva...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I would like to start vote on KIP-651 to support SSL key stores and
> > > trust
> > > > > stores in PEM format:
> > > > >
> > > > >-
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-651+-+Support+PEM+format+for+SSL+certificates+and+private+key
> > > > >
> > > > >
> > > > > Thank you...
> > > > >
> > > > > Regards,
> > > > >
> > > > > Rajini
> > > > >
> > > >
> > > >
> > > > --
> > > > Gwen Shapira
> > > > Engineering Manager | Confluent
> > > > 650.450.2760 | @gwenshap
> > > > Follow us: Twitter | blog
> > > >
> > >
> >
>


Re: [VOTE] KIP-635: GetOffsetShell: support for multiple topics and consumer configuration override

2020-08-10 Thread Dániel Urbán
Hi everyone,

Just a reminder, please vote if you are interested in this KIP being
implemented.

Thanks,
Daniel

Dániel Urbán wrote (on Fri, 31 Jul 2020, 9:01):

> Hi David,
>
> There is another PR linked on KAFKA-8507, which is still open:
> https://github.com/apache/kafka/pull/8123
> Wasn't sure if it will go in, and wanted to avoid conflicts. Do you think
> I should do the switch to '--bootstrap-server' anyway?
>
> Thanks,
> Daniel
>
> David Jacot  ezt írta (időpont: 2020. júl. 30., Cs,
> 17:52):
>
>> Hi Daniel,
>>
>> Thanks for the KIP.
>>
>> It seems that we have forgotten to include this tool in KIP-499.
>> KAFKA-8507 is resolved, but this tool still uses the deprecated
>> "--broker-list". I suggest including "--bootstrap-server" in your public
>> interfaces as well and fixing this omission during the implementation.
>>
>> +1 (non-binding)
>>
>> Thanks,
>> David
>>
>> On Thu, Jul 30, 2020 at 1:52 PM Kamal Chandraprakash <
>> kamal.chandraprak...@gmail.com> wrote:
>>
>> > +1 (non-binding), thanks for the KIP!
>> >
>> > On Thu, Jul 30, 2020 at 3:31 PM Manikumar 
>> > wrote:
>> >
>> > > +1 (binding)
>> > >
>> > > Thanks for the KIP!
>> > >
>> > >
>> > >
>> > > On Thu, Jul 30, 2020 at 3:07 PM Dániel Urbán 
>> > > wrote:
>> > >
>> > > > Hi everyone,
>> > > >
>> > > > If you are interested in this KIP, please do not forget to vote.
>> > > >
>> > > > Thanks,
>> > > > Daniel
>> > > >
>> > > > Viktor Somogyi-Vass wrote (on Tue, 28 Jul 2020, 16:06):
>> > > >
>> > > > > +1 from me (non-binding), thanks for the KIP.
>> > > > >
>> > > > > On Mon, Jul 27, 2020 at 10:02 AM Dániel Urbán <
>> urb.dani...@gmail.com
>> > >
>> > > > > wrote:
>> > > > >
>> > > > > > Hello everyone,
>> > > > > >
>> > > > > > I'd like to start a vote on KIP-635. The KIP enhances the
>> > > > GetOffsetShell
>> > > > > > tool by enabling querying multiple topic-partitions, adding new
>> > > > filtering
>> > > > > > options, and adding a config override option.
>> > > > > >
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-635%3A+GetOffsetShell%3A+support+for+multiple+topics+and+consumer+configuration+override
>> > > > > >
>> > > > > > The original discussion thread was named "[DISCUSS] KIP-308:
>> > > > > > GetOffsetShell: new KafkaConsumer API, support for multiple
>> topics,
>> > > > > > minimize the number of requests to server". The id had to be
>> > changed
>> > > as
>> > > > > > there was a collision, and the KIP also had to be renamed, as
>> some
>> > of
>> > > > its
>> > > > > > motivations were outdated.
>> > > > > >
>> > > > > > Thanks,
>> > > > > > Daniel
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>>
>


[jira] [Resolved] (KAFKA-10377) Delete Useless Code

2020-08-10 Thread Bingkun.ji (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bingkun.ji resolved KAFKA-10377.

Resolution: Not A Problem

> Delete Useless Code
> ---
>
> Key: KAFKA-10377
> URL: https://issues.apache.org/jira/browse/KAFKA-10377
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.6.0
>Reporter: Bingkun.ji
>Priority: Trivial
> Attachments: image-2020-08-10-00-13-28-744.png
>
>
> delete useless code for client
>  
> !image-2020-08-10-00-13-28-744.png!





Build failed in Jenkins: kafka-trunk-jdk14 #348

2020-08-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10261: Introduce the KIP-478 apis with adapters (#9004)


--
[...truncated 6.43 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest >