KIP 678: New Kafka Connect SMT for plainText => Struct(or Map) with Regex

2020-10-22 Thread gyejun choi
Hi,

I've opened KIP-678, which proposes a new SMT for Kafka Connect. I'd be
grateful for any feedback:
https://cwiki.apache.org/confluence/display/KAFKA/KIP+678%3A+New+Kafka+Connect+SMT+for+plainText+%3D%3E+Struct%28or+Map%29+with+Regex

thanks,

whsoul
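The idea behind the proposed SMT is to parse a plain-text record value into structured fields using a regular expression. As a rough illustration only (a Python sketch, not Connect's Java SMT API; the pattern and field names below are invented for the example and are not part of the KIP):

```python
import re

# Hypothetical named-group pattern; the KIP's actual configuration keys
# and regex handling are defined on the linked wiki page.
PATTERN = re.compile(
    r'(?P<ip>\S+) - - \[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)"'
)

def plaintext_to_map(line):
    """Parse one plain-text value into a field map (None if no match)."""
    m = PATTERN.match(line)
    return m.groupdict() if m else None

record = '127.0.0.1 - - [22/Oct/2020:01:23:45 +0000] "GET /index.html HTTP/1.1"'
parsed = plaintext_to_map(record)
```

A Connect transformation would do the equivalent per record, emitting a Struct (or Map) whose field names come from the regex's named groups.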


Re: [DISCUSS] Checkstyle Import Order

2020-10-22 Thread Dongjin Lee
The tally so far:

Committers:

- Bruno: 2 and 3.
- Gwen: (No Specific Preference)

Non-Committers:

- Brandon: 2.
- Dongjin: 2 and 3.

Let's wait for two or three more committers to weigh in.

Best,
Dongjin

On Fri, Oct 23, 2020 at 10:09 AM Gwen Shapira  wrote:

> I don't have any specific preference on the style. But I am glad you
> are bringing it up. Every other project I worked on had a specific
> import style, and the random import changes in PRs are pretty
> annoying.
>
> On Wed, Oct 14, 2020 at 10:36 PM Dongjin Lee  wrote:
> >
> > Hello. I'd like to open a discussion about import order in our Java code.
> >
> > As Nikolay stated recently[^1], Kafka uses a relatively strict code style
> > for Java code. However, it lacks any rule on import order. As a result,
> > import formatting differs from one local dev environment to another,
> > producing countless meaningless import-order changes in PRs.
> >
> > For example, in `NamedCache.java` in the streams module, the `java.*`
> > imports are split into two chunks, embracing the other imports between
> > them. So, I propose to define an import order to prevent these kinds of
> > cases in the future.
> >
> > To define the import order, we have to regard the following three
> > orthogonal issues beforehand:
> >
> > a. How to group the type imports?
> > b. Whether to sort the imports alphabetically?
> > c. Where to place static imports: above the type imports, or below them.
> >
> > Since b and c seem relatively straightforward (sort the imports
> > alphabetically and place the static imports below the type imports), I
> > would like to focus the alternatives below on problem a.
> >
> > I evaluated the following alternatives and checked how many files are
> > affected in each case (based on commit 1457cc652). Here are the
> > results:
> >
> > *1. kafka, org.apache.kafka, *, javax, java (5 groups): 1222 files.*
> >
> > ```
> > <module name="ImportOrder">
> >   <property name="groups" value="kafka,/^org\.apache\.kafka.*$/,*,javax,java"/>
> >   <property name="ordered" value="true"/>
> >   <property name="separated" value="true"/>
> >   <property name="option" value="bottom"/>
> >   <property name="sortStaticImportsAlphabetically" value="true"/>
> > </module>
> > ```
> >
> > *2. (kafka|org.apache.kafka), *, javax? (3 groups): 968 files.*
> >
> > ```
> > <module name="ImportOrder">
> >   <property name="groups" value="(kafka|org\.apache\.kafka),*,javax?"/>
> >   <property name="ordered" value="true"/>
> >   <property name="separated" value="true"/>
> >   <property name="option" value="bottom"/>
> >   <property name="sortStaticImportsAlphabetically" value="true"/>
> > </module>
> > ```
> >
> > *3. (kafka|org.apache.kafka), *, javax, java (4 groups): 533 files.*
> >
> > ```
> > <module name="ImportOrder">
> >   <property name="groups" value="(kafka|org\.apache\.kafka),*,javax,java"/>
> >   <property name="ordered" value="true"/>
> >   <property name="separated" value="true"/>
> >   <property name="option" value="bottom"/>
> >   <property name="sortStaticImportsAlphabetically" value="true"/>
> > </module>
> > ```
> >
> > *4. *, javax? (2 groups): 707 files.*
> >
> > ```
> > <module name="ImportOrder">
> >   <property name="groups" value="*,javax?"/>
> >   <property name="ordered" value="true"/>
> >   <property name="separated" value="true"/>
> >   <property name="option" value="bottom"/>
> >   <property name="sortStaticImportsAlphabetically" value="true"/>
> > </module>
> > ```
> >
> > *5. javax?, * (2 groups): 1822 files.*
> >
> > ```
> > <module name="ImportOrder">
> >   <property name="groups" value="javax?,*"/>
> >   <property name="ordered" value="true"/>
> >   <property name="separated" value="true"/>
> >   <property name="option" value="bottom"/>
> >   <property name="sortStaticImportsAlphabetically" value="true"/>
> > </module>
> > ```
> >
> > *6. java, javax, * (3 groups): 1809 files.*
> >
> > ```
> > <module name="ImportOrder">
> >   <property name="groups" value="java,javax,*"/>
> >   <property name="ordered" value="true"/>
> >   <property name="separated" value="true"/>
> >   <property name="option" value="bottom"/>
> >   <property name="sortStaticImportsAlphabetically" value="true"/>
> > </module>
> > ```
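The grouping rule in alternative 3 can be sketched as follows (a Python illustration of the ordering only, not the Checkstyle implementation; groups in declared order, names sorted alphabetically within each group):

```python
import re

# Alternative 3's groups: (kafka|org.apache.kafka), *, javax, java.
GROUP_PATTERNS = [
    re.compile(r"^(kafka|org\.apache\.kafka)\."),  # project imports first
    None,                                          # '*' catch-all group
    re.compile(r"^javax\."),
    re.compile(r"^java\."),
]

def group_of(imp):
    """Return the index of the group an import belongs to."""
    for i, pat in enumerate(GROUP_PATTERNS):
        if pat is not None and pat.match(imp):
            return i
    return 1  # everything unmatched falls into the '*' group

def order_imports(imports):
    # Sort by (group index, name): groups in order, alphabetical within.
    return sorted(imports, key=lambda imp: (group_of(imp), imp))

ordered = order_imports([
    "java.util.Map",
    "javax.annotation.Nullable",
    "org.junit.Test",
    "org.apache.kafka.common.Node",
    "kafka.utils.Logging",
])
```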
> >
> > I hope to get some feedback on this issue here.
> >
> > For the WIP PR, please refer here:
> https://github.com/apache/kafka/pull/8404
> >
> > Best,
> > Dongjin
> >
> > [^1]:
> >
> https://lists.apache.org/thread.html/r2bbee24b8a459842a0fc840c6e40958e7158d29f3f2d6c0d223be80b%40%3Cdev.kafka.apache.org%3E
> > [^2]:
> >
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/internals/NamedCache.java
> >
> > --
> > *Dongjin Lee*
> >
> > *A hitchhiker in the mathematical world.*
> >
> >
> >
> >
> > github: github.com/dongjinleekr
> > keybase: https://keybase.io/dongjinleekr
> > linkedin: kr.linkedin.com/in/dongjinleekr
> > speakerdeck: speakerdeck.com/dongjin
>
>
>
> --
> Gwen Shapira
> Engineering Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*




github: github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #200

2020-10-22 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update docs to point to next release add notable features for 
2.7 (#9483)

[github] MINOR: Update raft/README.md and minor RaftConfig tweaks (#9484)

[github] MINOR: Refactor RaftClientTest to be used by other tests (#9476)


--
[...truncated 3.45 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3e3375c0, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3e3375c0, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1279c792, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1279c792, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@bff0d3d, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@bff0d3d, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@14ed0645, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@14ed0645, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@3155ec6d, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@3155ec6d, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@6d4cf882, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@6d4cf882, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1e2986f3, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1e2986f3, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@628a645d, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@628a645d, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@7c22cae5, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@7c22cae5, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #35

2020-10-22 Thread Apache Jenkins Server
See 


Changes:

[Bill Bejeck] MINOR: Add notable changes for 2.7 release


--
[...truncated 3.43 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #199

2020-10-22 Thread Apache Jenkins Server
See 


Changes:

[github] Handle ProducerFencedException on offset commit (#9479)


--
[...truncated 6.90 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #34

2020-10-22 Thread Apache Jenkins Server
See 


Changes:

[John Roesler] Handle ProducerFencedException on offset commit (#9479)


--
[...truncated 6.87 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [DISCUSS] Checkstyle Import Order

2020-10-22 Thread Gwen Shapira
I don't have any specific preference on the style. But I am glad you
are bringing it up. Every other project I worked on had a specific
import style, and the random import changes in PRs are pretty
annoying.

On Wed, Oct 14, 2020 at 10:36 PM Dongjin Lee  wrote:
>
> Hello. I'd like to open a discussion about import order in our Java code.
>
> As Nikolay stated recently[^1], Kafka uses a relatively strict code style
> for Java code. However, it lacks any rule on import order. As a result,
> import formatting differs from one local dev environment to another,
> producing countless meaningless import-order changes in PRs.
>
> For example, in `NamedCache.java` in the streams module, the `java.*`
> imports are split into two chunks, embracing the other imports between
> them. So, I propose to define an import order to prevent these kinds of
> cases in the future.
>
> To define the import order, we have to regard the following three
> orthogonal issues beforehand:
>
> a. How to group the type imports?
> b. Whether to sort the imports alphabetically?
> c. Where to place static imports: above the type imports, or below them.
>
> Since b and c seem relatively straightforward (sort the imports
> alphabetically and place the static imports below the type imports), I
> would like to focus the alternatives below on problem a.
>
> I evaluated the following alternatives and checked how many files are
> affected in each case (based on commit 1457cc652). Here are the
> results:
>
> *1. kafka, org.apache.kafka, *, javax, java (5 groups): 1222 files.*
>
> ```
> <module name="ImportOrder">
>   <property name="groups" value="kafka,/^org\.apache\.kafka.*$/,*,javax,java"/>
>   <property name="ordered" value="true"/>
>   <property name="separated" value="true"/>
>   <property name="option" value="bottom"/>
>   <property name="sortStaticImportsAlphabetically" value="true"/>
> </module>
> ```
>
> *2. (kafka|org.apache.kafka), *, javax? (3 groups): 968 files.*
>
> ```
> <module name="ImportOrder">
>   <property name="groups" value="(kafka|org\.apache\.kafka),*,javax?"/>
>   <property name="ordered" value="true"/>
>   <property name="separated" value="true"/>
>   <property name="option" value="bottom"/>
>   <property name="sortStaticImportsAlphabetically" value="true"/>
> </module>
> ```
>
> *3. (kafka|org.apache.kafka), *, javax, java (4 groups): 533 files.*
>
> ```
> <module name="ImportOrder">
>   <property name="groups" value="(kafka|org\.apache\.kafka),*,javax,java"/>
>   <property name="ordered" value="true"/>
>   <property name="separated" value="true"/>
>   <property name="option" value="bottom"/>
>   <property name="sortStaticImportsAlphabetically" value="true"/>
> </module>
> ```
>
> *4. *, javax? (2 groups): 707 files.*
>
> ```
> <module name="ImportOrder">
>   <property name="groups" value="*,javax?"/>
>   <property name="ordered" value="true"/>
>   <property name="separated" value="true"/>
>   <property name="option" value="bottom"/>
>   <property name="sortStaticImportsAlphabetically" value="true"/>
> </module>
> ```
>
> *5. javax?, * (2 groups): 1822 files.*
>
> ```
> <module name="ImportOrder">
>   <property name="groups" value="javax?,*"/>
>   <property name="ordered" value="true"/>
>   <property name="separated" value="true"/>
>   <property name="option" value="bottom"/>
>   <property name="sortStaticImportsAlphabetically" value="true"/>
> </module>
> ```
>
> *6. java, javax, * (3 groups): 1809 files.*
>
> ```
> <module name="ImportOrder">
>   <property name="groups" value="java,javax,*"/>
>   <property name="ordered" value="true"/>
>   <property name="separated" value="true"/>
>   <property name="option" value="bottom"/>
>   <property name="sortStaticImportsAlphabetically" value="true"/>
> </module>
> ```
>
> I hope to get some feedback on this issue here.
>
> For the WIP PR, please refer here: https://github.com/apache/kafka/pull/8404
>
> Best,
> Dongjin
>
> [^1]:
> https://lists.apache.org/thread.html/r2bbee24b8a459842a0fc840c6e40958e7158d29f3f2d6c0d223be80b%40%3Cdev.kafka.apache.org%3E
> [^2]:
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/internals/NamedCache.java
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
>
>
>
> github: github.com/dongjinleekr
> keybase: https://keybase.io/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck: speakerdeck.com/dongjin



-- 
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #165

2020-10-22 Thread Apache Jenkins Server
See 


Changes:

[github] Handle ProducerFencedException on offset commit (#9479)


--
[...truncated 6.84 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 

Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #174

2020-10-22 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-10636) Bypass log validation for writes to raft log

2020-10-22 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10636:
---

 Summary: Bypass log validation for writes to raft log
 Key: KAFKA-10636
 URL: https://issues.apache.org/jira/browse/KAFKA-10636
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson


The raft leader is responsible for creating the records written to the log 
(including assigning offsets and the epoch), so we can consider bypassing the 
validation done in `LogValidator`. This lets us skip potentially expensive 
decompression and the unnecessary recomputation of the CRC.
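The proposed bypass could be sketched roughly as follows. This is a hedged illustration only: the names (`Origin`, `needsValidation`) are hypothetical and do not reflect the real `LogValidator` API.

```java
// Hedged sketch of the proposed bypass: batches created by the raft leader
// already carry correct offsets, epoch, and CRC, so full validation can be
// skipped for them. Names are illustrative, not the real LogValidator API.
final class AppendValidation {
    enum Origin { CLIENT, RAFT_LEADER }

    // Full CRC/decompression validation is only needed for batches the
    // broker did not create itself.
    static boolean needsValidation(Origin origin) {
        return origin != Origin.RAFT_LEADER;
    }
}
```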



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : Kafka » kafka-2.3-jdk8 #11

2020-10-22 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-10635) Streams application fails with OutOfOrderSequenceException after rolling restarts of brokers

2020-10-22 Thread Peeraya Maetasatidsuk (Jira)
Peeraya Maetasatidsuk created KAFKA-10635:
-

 Summary: Streams application fails with 
OutOfOrderSequenceException after rolling restarts of brokers
 Key: KAFKA-10635
 URL: https://issues.apache.org/jira/browse/KAFKA-10635
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.5.1
Reporter: Peeraya Maetasatidsuk


We are upgrading our brokers to version 2.5.1 (from 2.3.1) by performing a 
rolling restart of the brokers after installing the new version. After the 
restarts, we notice that one of our streams apps (client version 2.4.1) fails 
with an OutOfOrderSequenceException:

 
{code:java}
ERROR [2020-10-13 22:52:21,400] [com.aaa.bbb.ExceptionHandler] Unexpected error. Record: a_record, destination topic: topic-name-Aggregation-repartition
org.apache.kafka.common.errors.OutOfOrderSequenceException: The broker received an out of order sequence number.
ERROR [2020-10-13 22:52:21,413] [org.apache.kafka.streams.processor.internals.AssignedTasks] stream-thread [topic-name-StreamThread-1] Failed to commit stream task 1_39 due to the following error:
org.apache.kafka.streams.errors.StreamsException: task [1_39] Abort sending since an error caught with a previous record (timestamp 1602654659000) to topic topic-name-Aggregation-repartition due to org.apache.kafka.common.errors.OutOfOrderSequenceException: The broker received an out of order sequence number.
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:144)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.access$500(RecordCollectorImpl.java:52)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:204)
    at org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1348)
    at org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:230)
    at org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:196)
    at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:730)
    at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:716)
    at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:674)
    at org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:596)
    at org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:74)
    at org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:798)
    at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
    at org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:569)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:561)
    at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:335)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.common.errors.OutOfOrderSequenceException: The broker received an out of order sequence number.
{code}
We see a corresponding error on the broker side:
{code:java}
[2020-10-13 22:52:21,398] ERROR [ReplicaManager broker=137636348] Error processing append operation on partition topic-name-Aggregation-repartition-52 (kafka.server.ReplicaManager)
org.apache.kafka.common.errors.OutOfOrderSequenceException: Out of order sequence number for producerId 2819098 at offset 1156041 in partition topic-name-Aggregation-repartition-52: 29 (incoming seq. number), -1 (current end sequence number)
{code}
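The broker-side check behind this error can be sketched as follows. This is a simplification for illustration; the real logic in the broker's producer state management also handles retries, epoch bumps, and sequence wrap-around, and the names here are hypothetical.

```java
// Simplified sketch of the broker's producer-sequence check.
// Names are illustrative, not the real broker API.
final class SequenceCheck {
    static final int NO_SEQUENCE = -1; // broker has no prior state for this producer

    static boolean inSequence(int currentEndSeq, int incomingSeq) {
        if (currentEndSeq == NO_SEQUENCE) {
            // With no known state, only a batch starting at sequence 0 is accepted.
            return incomingSeq == 0;
        }
        // Otherwise the incoming sequence must directly follow the last one.
        return incomingSeq == currentEndSeq + 1;
    }
}
```

With the values from the broker log above (incoming sequence 29, current end sequence -1), such a check fails: the broker has lost its producer state, so a mid-stream sequence number is rejected as out of order.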
We are able to reproduce this many times, and it happens regardless of whether 
the broker shutdown (at restart) is clean or unclean. However, when we roll 
back the broker version from 2.5.1 to 2.3.1 and perform similar rolling 
restarts, we don't see this error on the streams application at all. This is 
blocking us from upgrading our broker version.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #198

2020-10-22 Thread wang120445143


Build failed in Jenkins: Kafka » kafka-2.2-jdk8 #6

2020-10-22 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Add Jenkinsfile to 2.2 (#9474)


--
[...truncated 3.26 MB...]

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetEarliest STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetEarliest PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidSimpleConsumerValidConfigWithStringOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidSimpleConsumerValidConfigWithStringOffset PASSED

kafka.tools.ConsoleConsumerTest > 
testCustomPropertyShouldBePassedToConfigureMethod STARTED

kafka.tools.ConsoleConsumerTest > 
testCustomPropertyShouldBePassedToConfigureMethod PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithNoOffsetReset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithNoOffsetReset PASSED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum 
STARTED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigs STARTED

kafka.tools.ConsoleProducerTest > testValidConfigs PASSED

kafka.tools.DumpLogSegmentsTest > testPrintDataLog STARTED

kafka.tools.DumpLogSegmentsTest > testPrintDataLog PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.security.auth.ResourceTest > shouldParseOldTwoPartString STARTED

kafka.security.auth.ResourceTest > shouldParseOldTwoPartString PASSED

kafka.security.auth.ResourceTest > shouldParseOldTwoPartWithEmbeddedSeparators 
STARTED

kafka.security.auth.ResourceTest > shouldParseOldTwoPartWithEmbeddedSeparators 
PASSED

kafka.security.auth.ResourceTest > 
shouldThrowOnTwoPartStringWithUnknownResourceType STARTED

kafka.security.auth.ResourceTest > 
shouldThrowOnTwoPartStringWithUnknownResourceType PASSED

kafka.security.auth.ResourceTest > shouldThrowOnBadResourceTypeSeparator STARTED

kafka.security.auth.ResourceTest > shouldThrowOnBadResourceTypeSeparator PASSED

kafka.security.auth.ResourceTest > shouldParseThreePartString STARTED

kafka.security.auth.ResourceTest > shouldParseThreePartString PASSED

kafka.security.auth.ResourceTest > shouldRoundTripViaString STARTED

kafka.security.auth.ResourceTest > shouldRoundTripViaString PASSED

kafka.security.auth.ResourceTest > shouldParseThreePartWithEmbeddedSeparators 
STARTED

kafka.security.auth.ResourceTest > shouldParseThreePartWithEmbeddedSeparators 
PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.cluster.BrokerEndPointTest > testEndpointFromUri STARTED

kafka.cluster.BrokerEndPointTest > testEndpointFromUri PASSED

kafka.cluster.BrokerEndPointTest > testHashAndEquals STARTED

kafka.cluster.BrokerEndPointTest > testHashAndEquals PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNoRack STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNoRack PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonFutureVersion STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonFutureVersion PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNullRack STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNullRack PASSED

kafka.cluster.BrokerEndPointTest > testBrokerEndpointFromUri STARTED

kafka.cluster.BrokerEndPointTest > testBrokerEndpointFromUri PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV1 STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV1 PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV2 STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV2 PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV3 STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV3 PASSED


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #198

2020-10-22 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10284: Group membership update due to static member rejoin 
should be persisted (#9270)


--
[...truncated 6.90 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #173

2020-10-22 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10284: Group membership update due to static member rejoin 
should be persisted (#9270)


--
[...truncated 3.44 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3091449, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3091449, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@58c6b7cf, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@58c6b7cf, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@15bfc151, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@15bfc151, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@300bf52, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@300bf52, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@48605827, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@48605827, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@25f5e62b, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@25f5e62b, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4eab0e8d, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4eab0e8d, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@48d7d225, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@48d7d225, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@23c0a78, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@23c0a78, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@8f260aa, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@8f260aa, 
timestamped = false, caching = true, logging 

Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #33

2020-10-22 Thread Apache Jenkins Server
See 


Changes:

[John Roesler] KAFKA-10284: Group membership update due to static member rejoin 
should be persisted (#9270)

[Bill Bejeck] Update docs/documentation.html to point to 2.7 release and add 
links for 2.6.x documentation


--
[...truncated 3.43 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #164

2020-10-22 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-10634) LeaderChangeMessage should include the leader as one of the voters

2020-10-22 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-10634:
--

 Summary: LeaderChangeMessage should include the leader as one of 
the voters
 Key: KAFKA-10634
 URL: https://issues.apache.org/jira/browse/KAFKA-10634
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jose Armando Garcia Sancio


When a leader is elected, it writes a `LeaderChangeMessage` to the replicated 
log. This message is defined in KIP-595 as:

```
{
  "type": "data",
  "name": "LeaderChangeMessage",
  "validVersions": "0",
  "flexibleVersions": "0+",
  "fields": [
    { "name": "LeaderId", "type": "int32", "versions": "0+",
      "about": "The ID of the newly elected leader" },
    { "name": "VotedIds", "type": "[]int32", "versions": "0+",
      "about": "The IDs of the voters who voted for the current leader" }
  ]
}
```

The current implementation doesn't include the LeaderId in the set of VotedIds. 
The protocol guarantees that the leader must have voted for itself.
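A minimal sketch of the fix: when building the `LeaderChangeMessage`, ensure the leader's own id appears in the voted-IDs set. The class and method names below are hypothetical, not the real Raft module API.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: include the leader itself among the voted IDs,
// since the protocol guarantees a candidate votes for itself.
final class LeaderChangeVoters {
    static Set<Integer> votedIds(int leaderId, Set<Integer> grantingVoters) {
        Set<Integer> voters = new HashSet<>(grantingVoters);
        voters.add(leaderId); // the leader always voted for itself
        return voters;
    }
}
```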



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Requesting Permission to submit Kafka Improvement Proposals

2020-10-22 Thread Matthias J. Sax
What is your username?

On 10/22/20 2:23 AM, Aron Tigor Möllers wrote:
> Hello Kafka Devs,
> 
> I hereby kindly request permission to submit KIPs in the Wiki as described
> in
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> 
> Thank you and kind regards
> 
> Aron Tigor Moellers
> 


Re: Access request

2020-10-22 Thread Matthias J. Sax
Done.

On 10/22/20 12:12 PM, Brad Davis wrote:
> Ugh sorry, my Wiki username is "jherico"
> 
> On Thu, Oct 22, 2020 at 12:11 PM Brad Davis  wrote:
> 
>> Requesting confluence wiki permission so I can create a KIP for a small
>> feature add for Kafka Connect.
>>
>> Thanks,
>> Brad
>>
> 


Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #41

2020-10-22 Thread Apache Jenkins Server
See 


Changes:

[John Roesler] KAFKA-10284: Group membership update due to static member rejoin 
should be persisted (#9270)


--
[...truncated 6.32 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 

Access request

2020-10-22 Thread Brad Davis
Requesting confluence wiki permission so I can create a KIP for a small
feature add for Kafka Connect.

Thanks,
Brad


Re: Access request

2020-10-22 Thread Brad Davis
Ugh sorry, my Wiki username is "jherico"

On Thu, Oct 22, 2020 at 12:11 PM Brad Davis  wrote:

> Requesting confluence wiki permission so I can create a KIP for a small
> feature add for Kafka Connect.
>
> Thanks,
> Brad
>


[jira] [Created] (KAFKA-10633) Constant probing rebalances in Streams 2.6

2020-10-22 Thread Bradley Peterson (Jira)
Bradley Peterson created KAFKA-10633:


 Summary: Constant probing rebalances in Streams 2.6
 Key: KAFKA-10633
 URL: https://issues.apache.org/jira/browse/KAFKA-10633
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.6.0
Reporter: Bradley Peterson
 Attachments: Discover 2020-10-21T23 34 03.867Z - 2020-10-21T23 44 
46.409Z.csv

We are seeing a few issues with the new rebalancing behavior in Streams 2.6. 
This ticket is for constant probing rebalances on one StreamThread, but I'll 
mention the other issues, as they may be related.

First, when we redeploy the application we see tasks being moved, even though 
the task assignment was stable before redeploying. We would expect to see tasks 
assigned back to the same instances and no movement. The application is in EC2, 
with persistent EBS volumes, and we use static group membership to avoid 
rebalancing. To redeploy the app we terminate all EC2 instances. The new 
instances will reattach the EBS volumes and use the same group member id.

After redeploying, we sometimes see the group leader go into a tight probing 
rebalance loop. This doesn't happen immediately; it can be several hours 
later. Because the redeploy caused task movement, we see the expected probing 
rebalances every 10 minutes. But then one thread will go into a tight loop, 
logging messages like "Triggering the followup rebalance scheduled for 
1603323868771 ms.", handling the partition assignment (which doesn't change), 
then "Requested to schedule probing rebalance for 1603323868771 ms." This 
repeats several times a second until the app is restarted again. I'll attach a 
log export from one such incident.
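The static group membership setup described above can be sketched as plain 
consumer configuration. This is only an illustrative sketch: the application 
id, bootstrap address, and instance id are made-up values, and it assumes the 
usual "consumer."-prefixed keys that Kafka Streams forwards to its embedded 
consumers.

```java
import java.util.Properties;

public class StaticMembershipConfig {
    // Builds a hypothetical Streams config enabling static group membership
    // (KIP-345): a restarted EC2 instance that reattaches its EBS volume and
    // reuses the same group.instance.id rejoins without forcing a rebalance.
    static Properties staticMembershipProps(String instanceId) {
        Properties props = new Properties();
        props.setProperty("application.id", "example-streams-app");  // made-up
        props.setProperty("bootstrap.servers", "broker:9092");       // made-up
        // "consumer."-prefixed keys are passed through to the consumers;
        // group.instance.id is the static-membership setting.
        props.setProperty("consumer.group.instance.id", instanceId);
        // A generous session timeout gives the replacement instance time to
        // come back before the broker evicts the member and rebalances.
        props.setProperty("consumer.session.timeout.ms", "60000");
        return props;
    }

    public static void main(String[] args) {
        Properties p = staticMembershipProps("ec2-group-member-1");
        System.out.println(p.getProperty("consumer.group.instance.id"));
    }
}
```

With static membership configured this way, a plain redeploy should not move 
tasks, which is why the movement reported above is unexpected.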





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-10-22 Thread Bruno Cadonna

Hi Bill,

I took a second look at the git history and now it actually seems to be 
a regression. Probably, a change in August that extended the error codes 
introduced this bug.


Best,
Bruno

On 22.10.20 19:50, Bill Bejeck wrote:

Hi Bruno,

While technically it's not a regression, I think this is an important fix
with low risk to include, so we can leave it as a blocker.

Thanks,
Bill


On Thu, Oct 22, 2020 at 1:25 PM Bruno Cadonna  wrote:


Hi Bill,

we encountered the following bug in our soak testing cluster.

https://issues.apache.org/jira/browse/KAFKA-10631

I classified the bug as a blocker because it caused the death of a
stream thread. It does not seem to be a regression, though.

I opened a PR to fix the bug here:

https://github.com/apache/kafka/pull/9479

Feel free to downgrade the priority to "Major" if you think it is not a
blocker.

Best,
Bruno

On 22.10.20 17:49, Bill Bejeck wrote:

Hi All,

We've hit code freeze.  The current status for cutting an RC is that there
is one blocker issue.  It looks like there is a fix in the works, so
hopefully, it will get merged early next week.

At that point, if there are no other blockers, I'll proceed with the RC
process.

Thanks,
Bill

On Wed, Oct 7, 2020 at 12:10 PM Bill Bejeck  wrote:


Hi Anna,

I've updated the table to only show KAFKA-10023 as going into 2.7

Thanks,
Bill

On Tue, Oct 6, 2020 at 6:51 PM Anna Povzner  wrote:


Hi Bill,

Regarding KIP-612, only the first half of the KIP will get into 2.7
release: Broker-wide and per-listener connection rate limits, including
corresponding configs and metric (KAFKA-10023). I see that the table in the
release plan tags KAFKA-10023 as "old", not sure what it refers to. Note
that while KIP-612 was approved prior to 2.6 release, none of the
implementation went into 2.6 release.

The second half of the KIP that adds per-IP connection rate limiting will
need to be postponed (KAFKA-10024) till the following release.

Thanks,
Anna
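The broker-wide and per-listener connection rate limits mentioned above 
amount to throttling how fast new connections are accepted. As a rough 
illustration only (not Kafka's actual KIP-612 implementation), a 
token-bucket limiter for connection creation looks like this:

```java
public class ConnectionRateLimiter {
    private final double ratePerSec;  // configured max connection creations/sec
    private final double burst;       // bucket capacity
    private double tokens;
    private long lastNanos;

    ConnectionRateLimiter(double ratePerSec, double burst, long nowNanos) {
        this.ratePerSec = ratePerSec;
        this.burst = burst;
        this.tokens = burst;          // start with a full bucket
        this.lastNanos = nowNanos;
    }

    // Returns true if a new connection may be accepted at time nowNanos.
    // On false, a listener would delay accepting rather than reject outright.
    synchronized boolean tryAcquire(long nowNanos) {
        // Refill tokens proportionally to elapsed time, capped at the burst.
        tokens = Math.min(burst, tokens + (nowNanos - lastNanos) / 1e9 * ratePerSec);
        lastNanos = nowNanos;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        ConnectionRateLimiter limiter = new ConnectionRateLimiter(2.0, 2.0, 0L);
        System.out.println(limiter.tryAcquire(0L));            // true  (burst)
        System.out.println(limiter.tryAcquire(0L));            // true  (burst)
        System.out.println(limiter.tryAcquire(0L));            // false (exhausted)
        System.out.println(limiter.tryAcquire(500_000_000L));  // true  (0.5 s refills one token at 2/s)
    }
}
```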

On Tue, Oct 6, 2020 at 2:30 PM Bill Bejeck  wrote:


Hi Kowshik,

Given that the new feature is contained in the PR and the tooling is
follow-on work (minor work, but that's part of the submitted PR), I think
this is fine.

Thanks,
Bill

On Tue, Oct 6, 2020 at 5:00 PM Kowshik Prakasam <

kpraka...@confluent.io


wrote:


Hey Bill,

For KIP-584, we are in the process of reviewing/merging the write path PR
into AK trunk: https://github.com/apache/kafka/pull/9001 . As far as the
KIP goes, this PR is a major milestone. The PR merge will hopefully be done
before EOD tomorrow in time for the feature freeze. Beyond this PR, couple
things are left to be completed for this KIP: (1) tooling support and (2)
implementing support for feature version deprecation in the broker. In
particular, (1) is important for this KIP and the code changes are external
to the broker (since it is a separate tool we intend to build). As of now,
we won't be able to merge the tooling changes before feature freeze date.
Would it be ok to merge the tooling changes before code freeze on 10/22?
The tooling requirements are explained here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Toolingsupport

I would love to hear thoughts from Boyang and Jun as well.

Thanks,
Kowshik



On Mon, Oct 5, 2020 at 3:29 PM Bill Bejeck 

wrote:



Hi John,

I've updated the list of expected KIPs for 2.7.0 with KIP-478.

Thanks,
Bill

On Mon, Oct 5, 2020 at 11:26 AM John Roesler 

wrote:



Hi Bill,

Sorry about this, but I've just noticed that KIP-478 is
missing from the list. The url is:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-478+-+Strongly+typed+Processor+API

The KIP was accepted a long time ago, and the implementation
has been trickling in since 2.6 branch cut. However, most of
the public API implementation is done now, so I think at
this point, we can call it "released in 2.7.0". I'll make
sure it's done by feature freeze.

Thanks,
-John

On Thu, 2020-10-01 at 13:49 -0400, Bill Bejeck wrote:

All,

With the KIP acceptance deadline passing yesterday, I've updated the
planned KIP content section of the 2.7.0 release plan
<https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158872629>.

Removed proposed KIPs for 2.7.0 not getting approval

 1. KIP-653
 <https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2>
 2. KIP-608
 <https://cwiki.apache.org/confluence/display/KAFKA/KIP-608+-+Expose+Kafka+Metrics+in+Authorizer>
 3. KIP-508
 <https://cwiki.apache.org/confluence/display/KAFKA/KIP-508%3A+Make+Suppression+State+Queriable>

KIPs added

 1. KIP-671

[jira] [Resolved] (KAFKA-9999) Internal topic creation failure should be non-fatal and trigger explicit rebalance

2020-10-22 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-9999.

Resolution: Won't Fix

> Internal topic creation failure should be non-fatal and trigger explicit 
> rebalance 
> ---
>
> Key: KAFKA-9999
> URL: https://issues.apache.org/jira/browse/KAFKA-9999
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, streams
>Affects Versions: 2.4.0
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> We spotted a case in system test failure where the topic already exists but 
> the admin client still attempts to recreate it:
>  
> {code:java}
> [2020-05-14 09:56:40,120] INFO stream-thread [main] Could not create topic 
> SmokeTest-KSTREAM-REDUCE-STATE-STORE-20-changelog. Topic is probably 
> marked for deletion (number of partitions is unknown).
> Will retry to create this topic in 100 ms (to let broker finish async delete 
> operation first).
> Error message was: org.apache.kafka.common.errors.TopicExistsException: Topic 
> 'SmokeTest-KSTREAM-REDUCE-STATE-STORE-20-changelog' already exists. 
> (org.apache.kafka.streams.processor.internals.InternalTopicManager)
> [2020-05-14 09:56:40,120] INFO stream-thread [main] Could not create topic 
> SmokeTest-uwin-cnt-changelog. Topic is probably marked for deletion (number 
> of partitions is unknown).
> Will retry to create this topic in 100 ms (to let broker finish async delete 
> operation first).
> Error message was: org.apache.kafka.common.errors.TopicExistsException: Topic 
> 'SmokeTest-uwin-cnt-changelog' already exists. 
> (org.apache.kafka.streams.processor.internals.InternalTopicManager) 
> [2020-05-14 09:56:40,120] INFO stream-thread [main] Could not create topic 
> SmokeTest-cntByCnt-changelog. Topic is probably marked for deletion (number 
> of partitions is unknown).
> Will retry to create this topic in 100 ms (to let broker finish async delete 
> operation first).
> Error message was: org.apache.kafka.common.errors.TopicExistsException: Topic 
> 'SmokeTest-cntByCnt-changelog' already exists. 
> (org.apache.kafka.streams.processor.internals.InternalTopicManager)
> [2020-05-14 09:56:40,120] INFO stream-thread [main] Topics 
> [SmokeTest-KSTREAM-REDUCE-STATE-STORE-20-changelog, 
> SmokeTest-uwin-cnt-changelog, SmokeTest-cntByCnt-changelog] can not be made 
> ready with 5 retries left 
> (org.apache.kafka.streams.processor.internals.InternalTopicManager)
> [2020-05-14 09:56:40,220] ERROR stream-thread [main] Could not create topics 
> after 5 retries. This can happen if the Kafka cluster is temporary not 
> available. You can increase admin client config `retries` to be resilient 
> against this error. 
> (org.apache.kafka.streams.processor.internals.InternalTopicManager)
> [2020-05-14 09:56:40,221] ERROR stream-thread 
> [SmokeTest-05374457-074b-4d33-bca0-8686465e8157-StreamThread-2] Encountered 
> the following unexpected Kafka exception during processing, this usually 
> indicate Streams internal errors: 
> (org.apache.kafka.streams.processor.internals.StreamThread)
> org.apache.kafka.streams.errors.StreamsException: Could not create topics 
> after 5 retries. This can happen if the Kafka cluster is temporary not 
> available. You can increase admin client config `retries` to be resilient 
> against this error.
>         at 
> org.apache.kafka.streams.processor.internals.InternalTopicManager.makeReady(InternalTopicManager.java:171)
>         at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.prepareTopic(StreamsPartitionAssignor.java:1229)
>         at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.assign(StreamsPartitionAssignor.java:588)
>  
>         at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:548)
>         at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:650)
>  
>         at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1300(AbstractCoordinator.java:111)
>         at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:572)
>         at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:555)
>         at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1026)
>         at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1006)
>         at 
> 
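The failure mode above — a TopicExistsException burning all retries and then 
killing the stream thread — points at the non-fatal handling the ticket 
title asks for. A minimal sketch of that idea follows, with illustrative 
names rather than Kafka's actual InternalTopicManager API:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Consumer;

public class TopicCreationSketch {

    // Stand-in for org.apache.kafka.common.errors.TopicExistsException.
    static class TopicExistsException extends RuntimeException {}

    // Try to create each topic up to maxRetries times, but treat "already
    // exists" as success: the topic is there (possibly created by another
    // instance), so it must not count as a failed attempt.
    static Set<String> ensureTopics(List<String> topics, int maxRetries,
                                    Consumer<String> createTopic) {
        Set<String> ready = new HashSet<>();
        for (String topic : topics) {
            for (int attempt = 0; attempt < maxRetries; attempt++) {
                try {
                    createTopic.accept(topic);
                    ready.add(topic);
                    break;
                } catch (TopicExistsException e) {
                    // Non-fatal: do not retry, do not kill the thread.
                    ready.add(topic);
                    break;
                }
            }
        }
        return ready;
    }

    public static void main(String[] args) {
        Set<String> existing = new HashSet<>(Set.of("changelog-1"));
        Set<String> ready = ensureTopics(List.of("changelog-1", "changelog-2"), 5,
            t -> { if (!existing.add(t)) throw new TopicExistsException(); });
        System.out.println(ready.size()); // 2
    }
}
```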

[jira] [Created] (KAFKA-10632) Raft client should push all committed data to listeners

2020-10-22 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10632:
---

 Summary: Raft client should push all committed data to listeners
 Key: KAFKA-10632
 URL: https://issues.apache.org/jira/browse/KAFKA-10632
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson
Assignee: Jason Gustafson


We would like to move to a push model for sending committed data to the state 
machine. This simplifies the state machine a bit since it does not need to 
track its own position and poll for new data. It also allows the raft layer to 
ensure that all committed data up to the state of a leader epoch has been sent 
before allowing the state machine to begin sending writes. Finally, it allows 
us to take advantage of optimizations. For example, we can save the need to 
re-read writes that have been sent to the leader; instead, we can retain the 
data in memory and push it to the state machine after it becomes committed.
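The push model described above can be sketched roughly as follows 
(illustrative types only, not the actual raft client API): the log retains 
leader appends in memory and pushes each batch to the registered listener 
once the high watermark passes it, so the listener never polls or re-reads 
committed data.

```java
import java.util.ArrayList;
import java.util.List;

public class PushModelSketch {

    // The state machine registers one of these instead of polling the log.
    interface CommitListener<T> {
        void handleCommit(long baseOffset, List<T> records);
    }

    static class RaftLog<T> {
        private final List<T> uncommitted = new ArrayList<>();
        private final CommitListener<T> listener;
        private long nextOffset = 0;

        RaftLog(CommitListener<T> listener) { this.listener = listener; }

        // Leader appends: retain the batch in memory so it need not be
        // re-read from disk once it commits.
        void append(List<T> batch) { uncommitted.addAll(batch); }

        // Called when the high watermark advances: push everything that is
        // now committed to the listener exactly once.
        void advanceHighWatermark(int committedCount) {
            List<T> batch = new ArrayList<>(uncommitted.subList(0, committedCount));
            uncommitted.subList(0, committedCount).clear();
            listener.handleCommit(nextOffset, batch);
            nextOffset += batch.size();
        }
    }

    public static void main(String[] args) {
        List<String> seen = new ArrayList<>();
        RaftLog<String> log = new RaftLog<>((offset, records) -> seen.addAll(records));
        log.append(List.of("a", "b", "c"));
        log.advanceHighWatermark(2);  // only "a" and "b" are committed so far
        System.out.println(seen);     // [a, b]
    }
}
```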



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #172

2020-10-22 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Add some class javadoc to Admin client (#9459)

[github] MINOR: TopologyTestDriver should not require dummy parameters (#9477)


--
[...truncated 6.89 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@28de98a4,
 timestamped = true, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@28de98a4,
 timestamped = true, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@60ee2f6, 
timestamped = true, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@60ee2f6, 
timestamped = true, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@4c552274,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@4c552274,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@2402673e,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@2402673e,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@78c5c0a2,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@78c5c0a2,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@9ad941e, 
timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@9ad941e, 
timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@6c0e41dd,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@6c0e41dd,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5b02e0f5,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5b02e0f5,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@d14fbe7, 
timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@d14fbe7, 
timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@68fc833f,
 timestamped = true, caching = false, logging = false] 

Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-10-22 Thread Bill Bejeck
Hi Bruno,

While technically it's not a regression, I think this is an important fix
with low risk to include, so we can leave it as a blocker.

Thanks,
Bill


On Thu, Oct 22, 2020 at 1:25 PM Bruno Cadonna  wrote:

> Hi Bill,
>
> we encountered the following bug in our soak testing cluster.
>
> https://issues.apache.org/jira/browse/KAFKA-10631
>
> I classified the bug as a blocker because it caused the death of a
> stream thread. It does not seem to be a regression, though.
>
> I opened a PR to fix the bug here:
>
> https://github.com/apache/kafka/pull/9479
>
> Feel free to downgrade the priority to "Major" if you think it is not a
> blocker.
>
> Best,
> Bruno
>
> On 22.10.20 17:49, Bill Bejeck wrote:
> > Hi All,
> >
> > We've hit code freeze.  The current status for cutting an RC is there is
> > one blocker issue.  It looks like there is a fix in the works, so
> > hopefully, it will get merged early next week.
> >
> > At that point, if there are no other blockers, I proceed with the RC
> > process.
> >
> > Thanks,
> > Bill
> >
> > On Wed, Oct 7, 2020 at 12:10 PM Bill Bejeck  wrote:
> >
> >> Hi Anna,
> >>
> >> I've updated the table to only show KAFKA-10023 as going into 2.7
> >>
> >> Thanks,
> >> Bill
> >>
> >> On Tue, Oct 6, 2020 at 6:51 PM Anna Povzner  wrote:
> >>
> >>> Hi Bill,
> >>>
> >>> Regarding KIP-612, only the first half of the KIP will get into 2.7
> >>> release: Broker-wide and per-listener connection rate limits, including
> >>> corresponding configs and metric (KAFKA-10023). I see that the table in
> >>> the
> >>> release plan tags KAFKA-10023 as "old", not sure what it refers to.
> Note
> >>> that while KIP-612 was approved prior to 2.6 release, none of the
> >>> implementation went into 2.6 release.
> >>>
> >>> The second half of the KIP that adds per-IP connection rate limiting
> will
> >>> need to be postponed (KAFKA-10024) till the following release.
> >>>
> >>> Thanks,
> >>> Anna
> >>>
> >>> On Tue, Oct 6, 2020 at 2:30 PM Bill Bejeck  wrote:
> >>>
>  Hi Kowshik,
> 
>  Given that the new feature is contained in the PR and the tooling is
>  follow-on work (minor work, but that's part of the submitted PR), I
> >>> think
>  this is fine.
> 
>  Thanks,
>  BIll
> 
>  On Tue, Oct 6, 2020 at 5:00 PM Kowshik Prakasam <
> kpraka...@confluent.io
> 
>  wrote:
> 
> > Hey Bill,
> >
> > For KIP-584 , we are
> >>> in
>  the
> > process of reviewing/merging the write path PR into AK trunk:
> > https://github.com/apache/kafka/pull/9001 . As far as the KIP goes,
> >>> this
> > PR
> > is a major milestone. The PR merge will hopefully be done before EOD
> > tomorrow in time for the feature freeze. Beyond this PR, couple
> things
>  are
> > left to be completed for this KIP: (1) tooling support and (2)
>  implementing
> > support for feature version deprecation in the broker . In
> particular,
>  (1)
> > is important for this KIP and the code changes are external to the
> >>> broker
> > (since it is a separate tool we intend to build). As of now, we won't
> >>> be
> > able to merge the tooling changes before feature freeze date. Would
> >>> it be
> > ok to merge the tooling changes before code freeze on 10/22? The
> >>> tooling
> > requirements are explained here:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-584
> > 
> >
> >
> 
> >>>
> %3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Toolingsupport
> >
> > I would love to hear thoughts from Boyang and Jun as well.
> >
> >
> > Thanks,
> > Kowshik
> >
> >
> >
> > On Mon, Oct 5, 2020 at 3:29 PM Bill Bejeck 
> wrote:
> >
> >> Hi John,
> >>
> >> I've updated the list of expected KIPs for 2.7.0 with KIP-478.
> >>
> >> Thanks,
> >> Bill
> >>
> >> On Mon, Oct 5, 2020 at 11:26 AM John Roesler 
> > wrote:
> >>
> >>> Hi Bill,
> >>>
> >>> Sorry about this, but I've just noticed that KIP-478 is
> >>> missing from the list. The url is:
> >>>
> >>>
> >>
> >
> 
> >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-478+-+Strongly+typed+Processor+API
> >>>
> >>> The KIP was accepted a long time ago, and the implementation
> >>> has been trickling in since 2.6 branch cut. However, most of
> >>> the public API implementation is done now, so I think at
> >>> this point, we can call it "released in 2.7.0". I'll make
> >>> sure it's done by feature freeze.
> >>>
> >>> Thanks,
> >>> -John
> >>>
> >>> On Thu, 2020-10-01 at 13:49 -0400, Bill Bejeck wrote:
>  All,
> 
>  With the KIP acceptance deadline passing yesterday, I've updated
>  the
>  planned KIP content section of the 2.7.0 release plan
> 

Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-10-22 Thread Ron Dagostino
Hi again, Colin.  Related to the issue of how to communicate fenced
vs. not fenced (and how to communicate or imply Offline vs not
Offline), do we need to communicate in the log that a broker has
gracefully shut down?  We do distinguish between a broker being
unavailable due to a controlled shutdown vs. having died -- one that
dies appears in the metadata as being Offline, whereas one that
gracefully shuts down does not.  I don't see a way to imply this
gracefully-shutdown state, though I may be missing it.

Ron

On Thu, Oct 22, 2020 at 1:32 PM Ron Dagostino  wrote:
>
> HI Colin.  A FencedBrokerRecord appears in the metadata log when a
> broker is fenced.  What appears in the metadata log to indicate that a
> broker is no longer fenced?  Does a BrokerRecord appear?  That seems
> to communicate a bunch of unnecessary data in this context (endpoints,
> features, rack).  If that is the current plan, might it be better to
> include a Boolean value within FencedBrokerRecord and specify true
> when the broker becomes fenced and false when it is no longer fenced?
>
> Also, is a fenced broker considered Offline for partition metadata
> purposes?  I think so but would like to confirm.
>
> Ron
>
> On Wed, Oct 21, 2020 at 9:05 AM Colin McCabe  wrote:
> >
> > On Tue, Oct 13, 2020, at 18:30, Jun Rao wrote:
> > > Hi, Colin,
> > >
> > > Thanks for the reply. A few more comments below.
> > >
> > > 80.1 controller.listener.names only defines the name of the listener. The
> > > actual listener including host/port/security_protocol is typically defined
> > > in advertised_listners. Does that mean advertised_listners is a required
> > > config now?
> > >
> >
> > Hi Jun,
> >
> > Thanks for the re-review.
> >
> > The controller listeners are not advertised to clients.  So I think they 
> > should go in listeners, rather than advertised.listeners.
> >
> > I agree that this makes listeners a required configuration.  At very least, 
> > it is required to have the controller listener in there.
> >
> > >
> > > 83.1 broker state machine: It seems that we should transition from FENCED
> > > => INITIAL since only INITIAL generates new broker epoch?
> > >
> >
> > I changed the broker state machine a bit-- take a look.  In the new state 
> > machine, the FENCED state can re-register.
> >
> > >
> > > 83.5. It's true that the controller node doesn't serve metadata requests.
> > > However, there are admin requests such as topic creation/deletion are sent
> > > to the controller directly. So, it seems that the client needs to know
> > > the controller host/port?
> > >
> >
> > The client sends admin requests to a random broker, which forwards them to 
> > the controller.  This is described in KIP-590.
> >
> > >
> > > 85. "I was hoping that we could avoid responding to requests when the
> > > broker was fenced." This issue is that if we don't send a response, the
> > > client won't know the reason and can't act properly.
> > >
> >
> > Sorry, when I said "does not respond" I meant it would return an error.
> >
> > >
> > > 88. CurMetadataOffset: I was thinking that we may want to
> > > use CurMetadataOffset to compute the MetadataLag. Since HWM is exclusive,
> > > it's more convenient if CurMetadataOffset is also exclusive.
> > >
> >
> > OK, I will change it to NextMetadataOffset and make it exclusive.
> >
> > > 90. It would be useful to add a rejected section on why separate 
> > > controller
> > > and broker id is preferred over just broker id. For example, the following
> > > are some potential reasons. (a) We can guard duplicated brokerID, but it's
> > > hard to guard against duplicated controllerId. (b) brokerID can be auto
> > > assigned in the future, but controllerId is hard to be generated
> > > automatically.
> > >
> >
> > OK.  I added a rejected alternatives section about sharing IDs between 
> > multiple nodes.
> >
> > Colin
> >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Mon, Oct 12, 2020 at 11:14 AM Colin McCabe  wrote:
> > >
> > > > On Tue, Oct 6, 2020, at 16:09, Jun Rao wrote:
> > > > > Hi, Colin,
> > > > >
> > > > > Thanks for the reply. Made another pass of the KIP. A few more 
> > > > > comments
> > > > > below.
> > > > >
> > > >
> > > > Hi Jun,
> > > >
> > > > Thanks for the review.
> > > >
> > > > > 55. We discussed earlier why the current behavior where we favor the
> > > > > current broker registration is better. Have you given this more 
> > > > > thought?
> > > > >
> > > >
> > > > Yes, I think we should favor the current broker registration, as you
> > > > suggested earlier.
> > > >
> > > > > 80. Config related.
> > > > > 80.1 Currently, each broker only has the following 3 required 
> > > > > configs. It
> > > > > will be useful to document the required configs post KIP-500 (in both 
> > > > > the
> > > > > dedicated and shared controller mode).
> > > > > broker.id
> > > > > log.dirs
> > > > > zookeeper.connect
> > > >
> > > > For the broker, these configs will be required:
> > > >
> > > > broker.id
> > > 

Jenkins build is back to normal : Kafka » kafka-2.7-jdk8 #32

2020-10-22 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-10-22 Thread Ron Dagostino
HI Colin.  A FencedBrokerRecord appears in the metadata log when a
broker is fenced.  What appears in the metadata log to indicate that a
broker is no longer fenced?  Does a BrokerRecord appear?  That seems
to communicate a bunch of unnecessary data in this context (endpoints,
features, rack).  If that is the current plan, might it be better to
include a Boolean value within FencedBrokerRecord and specify true
when the broker becomes fenced and false when it is no longer fenced?

Also, is a fenced broker considered Offline for partition metadata
purposes?  I think so but would like to confirm.

Ron

On Wed, Oct 21, 2020 at 9:05 AM Colin McCabe  wrote:
>
> On Tue, Oct 13, 2020, at 18:30, Jun Rao wrote:
> > Hi, Colin,
> >
> > Thanks for the reply. A few more comments below.
> >
> > 80.1 controller.listener.names only defines the name of the listener. The
> > actual listener including host/port/security_protocol is typically defined
> > in advertised_listners. Does that mean advertised_listners is a required
> > config now?
> >
>
> Hi Jun,
>
> Thanks for the re-review.
>
> The controller listeners are not advertised to clients.  So I think they 
> should go in listeners, rather than advertised.listeners.
>
> I agree that this makes listeners a required configuration.  At very least, 
> it is required to have the controller listener in there.
>
> >
> > 83.1 broker state machine: It seems that we should transition from FENCED
> > => INITIAL since only INITIAL generates new broker epoch?
> >
>
> I changed the broker state machine a bit-- take a look.  In the new state 
> machine, the FENCED state can re-register.
>
> >
> > 83.5. It's true that the controller node doesn't serve metadata requests.
> > However, there are admin requests such as topic creation/deletion are sent
> > to the controller directly. So, it seems that the client needs to know
> > the controller host/port?
> >
>
> The client sends admin requests to a random broker, which forwards them to 
> the controller.  This is described in KIP-590.
>
> >
> > 85. "I was hoping that we could avoid responding to requests when the
> > broker was fenced." This issue is that if we don't send a response, the
> > client won't know the reason and can't act properly.
> >
>
> Sorry, when I said "does not respond" I meant it would return an error.
>
> >
> > 88. CurMetadataOffset: I was thinking that we may want to
> > use CurMetadataOffset to compute the MetadataLag. Since HWM is exclusive,
> > it's more convenient if CurMetadataOffset is also exclusive.
> >
>
> OK, I will change it to NextMetadataOffset and make it exclusive.
>
> > 90. It would be useful to add a rejected section on why separate controller
> > and broker id is preferred over just broker id. For example, the following
> > are some potential reasons. (a) We can guard duplicated brokerID, but it's
> > hard to guard against duplicated controllerId. (b) brokerID can be auto
> > assigned in the future, but controllerId is hard to be generated
> > automatically.
> >
>
> OK.  I added a rejected alternatives section about sharing IDs between 
> multiple nodes.
>
> Colin
>
> > Thanks,
> >
> > Jun
> >
> > On Mon, Oct 12, 2020 at 11:14 AM Colin McCabe  wrote:
> >
> > > On Tue, Oct 6, 2020, at 16:09, Jun Rao wrote:
> > > > Hi, Colin,
> > > >
> > > > Thanks for the reply. Made another pass of the KIP. A few more comments
> > > > below.
> > > >
> > >
> > > Hi Jun,
> > >
> > > Thanks for the review.
> > >
> > > > 55. We discussed earlier why the current behavior where we favor the
> > > > current broker registration is better. Have you given this more thought?
> > > >
> > >
> > > Yes, I think we should favor the current broker registration, as you
> > > suggested earlier.
> > >
> > > > 80. Config related.
> > > > 80.1 Currently, each broker only has the following 3 required configs. 
> > > > It
> > > > will be useful to document the required configs post KIP-500 (in both 
> > > > the
> > > > dedicated and shared controller mode).
> > > > broker.id
> > > > log.dirs
> > > > zookeeper.connect
> > >
> > > For the broker, these configs will be required:
> > >
> > > broker.id
> > > log.dirs
> > > process.roles
> > > controller.listener.names
> > > controller.connect
> > >
> > > For the controller, these configs will be required:
> > >
> > > controller.id
> > > log.dirs
> > > process.roles
> > > controller.listener.names
> > > controller.connect
> > >
> > > For broker+controller, it will be the union of these two, which
> > > essentially means we need both broker.id and controller.id, but all
> > > others are the same as standalone.
> > >
> > > > 80.2 It would be useful to document all deprecated configs post KIP-500.
> > > > For example, all zookeeper.* are obviously deprecated. But there could 
> > > > be
> > > > others. For example, since we don't plan to support auto broker id
> > > > generation, it seems broker.id.generation.enable is deprecated too.
> > > > 80.3 Could we make it clear that 

Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-10-22 Thread Bruno Cadonna

Hi Bill,

we encountered the following bug in our soak testing cluster.

https://issues.apache.org/jira/browse/KAFKA-10631

I classified the bug as a blocker because it caused the death of a 
stream thread. It does not seem to be a regression, though.


I opened a PR to fix the bug here:

https://github.com/apache/kafka/pull/9479

Feel free to downgrade the priority to "Major" if you think it is not a 
blocker.


Best,
Bruno

On 22.10.20 17:49, Bill Bejeck wrote:

Hi All,

We've hit code freeze.  The current status for cutting an RC is that there is
one blocker issue.  It looks like there is a fix in the works, so
hopefully it will get merged early next week.

At that point, if there are no other blockers, I'll proceed with the RC
process.

Thanks,
Bill

On Wed, Oct 7, 2020 at 12:10 PM Bill Bejeck  wrote:


Hi Anna,

I've updated the table to only show KAFKA-10023 as going into 2.7

Thanks,
Bill

On Tue, Oct 6, 2020 at 6:51 PM Anna Povzner  wrote:


Hi Bill,

Regarding KIP-612, only the first half of the KIP will get into 2.7
release: Broker-wide and per-listener connection rate limits, including
corresponding configs and metric (KAFKA-10023). I see that the table in
the
release plan tags KAFKA-10023 as "old", not sure what it refers to. Note
that while KIP-612 was approved prior to 2.6 release, none of the
implementation went into 2.6 release.

The second half of the KIP that adds per-IP connection rate limiting will
need to be postponed (KAFKA-10024) till the following release.

Thanks,
Anna

On Tue, Oct 6, 2020 at 2:30 PM Bill Bejeck  wrote:


Hi Kowshik,

Given that the new feature is contained in the PR and the tooling is
follow-on work (minor work, but that's part of the submitted PR), I

think

this is fine.

Thanks,
BIll

On Tue, Oct 6, 2020 at 5:00 PM Kowshik Prakasam wrote:

Hey Bill,

For KIP-584 , we are

in

the

process of reviewing/merging the write path PR into AK trunk:
https://github.com/apache/kafka/pull/9001 . As far as the KIP goes,

this

PR
is a major milestone. The PR merge will hopefully be done before EOD
tomorrow in time for the feature freeze. Beyond this PR, couple things

are

left to be completed for this KIP: (1) tooling support and (2)

implementing

support for feature version deprecation in the broker . In particular,

(1)

is important for this KIP and the code changes are external to the

broker

(since it is a separate tool we intend to build). As of now, we won't

be

able to merge the tooling changes before feature freeze date. Would

it be

ok to merge the tooling changes before code freeze on 10/22? The

tooling

requirements are explained here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584






%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Toolingsupport


I would love to hear thoughts from Boyang and Jun as well.


Thanks,
Kowshik



On Mon, Oct 5, 2020 at 3:29 PM Bill Bejeck  wrote:


Hi John,

I've updated the list of expected KIPs for 2.7.0 with KIP-478.

Thanks,
Bill

On Mon, Oct 5, 2020 at 11:26 AM John Roesler 

wrote:



Hi Bill,

Sorry about this, but I've just noticed that KIP-478 is
missing from the list. The url is:









https://cwiki.apache.org/confluence/display/KAFKA/KIP-478+-+Strongly+typed+Processor+API


The KIP was accepted a long time ago, and the implementation
has been trickling in since 2.6 branch cut. However, most of
the public API implementation is done now, so I think at
this point, we can call it "released in 2.7.0". I'll make
sure it's done by feature freeze.

Thanks,
-John

On Thu, 2020-10-01 at 13:49 -0400, Bill Bejeck wrote:

All,

With the KIP acceptance deadline passing yesterday, I've updated

the

planned KIP content section of the 2.7.0 release plan
<









https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158872629


.

Removed proposed KIPs for 2.7.0 not getting approval

1. KIP-653
<









https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2


2. KIP-608
<









https://cwiki.apache.org/confluence/display/KAFKA/KIP-608+-+Expose+Kafka+Metrics+in+Authorizer


3. KIP-508
<









https://cwiki.apache.org/confluence/display/KAFKA/KIP-508%3A+Make+Suppression+State+Queriable



KIPs added

1. KIP-671
<









https://cwiki.apache.org/confluence/display/KAFKA/KIP-671%3A+Introduce+Kafka+Streams+Specific+Uncaught+Exception+Handler




Please let me know if I've missed anything.

Thanks,
Bill

On Thu, Sep 24, 2020 at 1:47 PM Bill Bejeck 

wrote:



Hi All,

Just a reminder that the KIP freeze is next Wednesday,

September

30th.

Any KIP aiming to go in the 2.7.0 release needs to be

accepted by

this

date.


Thanks,
BIll

On Tue, Sep 22, 2020 at 12:11 PM Bill Bejeck <

bbej...@gmail.com>

wrote:



Boyan,

Done. Thanks for the heads up.

-Bill

On Mon, Sep 21, 2020 at 6:36 PM Boyang Chen <

reluctanthero...@gmail.com>


[jira] [Created] (KAFKA-10631) ProducerFencedException is not Handled on Offset Commit

2020-10-22 Thread Bruno Cadonna (Jira)
Bruno Cadonna created KAFKA-10631:
-

 Summary: ProducerFencedException is not Handled on Offset Commit
 Key: KAFKA-10631
 URL: https://issues.apache.org/jira/browse/KAFKA-10631
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.7.0
Reporter: Bruno Cadonna


The transaction manager currently does not handle producer-fenced errors 
returned from an offset commit request.

We found this bug because we saw the following exception in our soak cluster:

{code:java}
org.apache.kafka.streams.errors.StreamsException: Error encountered trying to 
commit a transaction [stream-thread [i-037c09b3c48522d8d-StreamThread-3] task 
[0_0]]
at 
org.apache.kafka.streams.processor.internals.StreamsProducer.commitTransaction(StreamsProducer.java:256)
at 
org.apache.kafka.streams.processor.internals.TaskManager.commitOffsetsOrTransaction(TaskManager.java:1050)
at 
org.apache.kafka.streams.processor.internals.TaskManager.commit(TaskManager.java:1013)
at 
org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:886)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:677)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:553)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:512)
[2020-10-22T04:09:54+02:00] 
(streams-soak-2-7-eos-alpha_soak_i-037c09b3c48522d8d_streamslog) Caused by: 
org.apache.kafka.common.KafkaException: Unexpected error in 
TxnOffsetCommitResponse: There is a newer producer with the same 
transactionalId which fences the current one.
at 
org.apache.kafka.clients.producer.internals.TransactionManager$TxnOffsetCommitHandler.handleResponse(TransactionManager.java:1726)
at 
org.apache.kafka.clients.producer.internals.TransactionManager$TxnRequestHandler.onComplete(TransactionManager.java:1278)
at 
org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
at 
org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:584)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:576)
at 
org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:415)
at 
org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:313)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
at java.lang.Thread.run(Thread.java:748)
{code}
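The trace shows `TxnOffsetCommitHandler.handleResponse` surfacing the fenced error as an unexpected `KafkaException`. As a rough, self-contained sketch (names hypothetical, not the actual client code), the fix amounts to classifying the fenced error as fatal rather than unexpected:

```java
// Hypothetical sketch of how a TxnOffsetCommit response error could be
// classified; this is NOT the actual TransactionManager code.
class TxnOffsetCommitErrorClassifier {

    enum Action { RETRY, ABORT_TXN, FATAL }

    static Action classify(String error) {
        switch (error) {
            case "COORDINATOR_LOAD_IN_PROGRESS":
            case "COORDINATOR_NOT_AVAILABLE":
            case "NOT_COORDINATOR":
                return Action.RETRY;      // transient: retry the offset commit
            case "PRODUCER_FENCED":
            case "INVALID_PRODUCER_EPOCH":
                // The case this ticket reports as unhandled: a newer producer
                // with the same transactional.id has fenced this one, so the
                // error must surface as a fatal ProducerFencedException
                // instead of a generic "Unexpected error" KafkaException.
                return Action.FATAL;
            default:
                return Action.ABORT_TXN;  // treat as abortable by default
        }
    }
}
```

The point of the sketch is only the classification: fenced means the producer can never commit again, so retrying or aborting is pointless.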



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-665 Kafka Connect Hash SMT

2020-10-22 Thread Brandon Brown
Hey Gunnar,

Those are great questions!

1) I went with it only selecting top-level fields since that seems to be the 
way most of the out-of-the-box SMTs work; however, I could see a lot of 
value in it supporting nested fields. 
2) I had not thought about adding salt, but I think that would be a valid option 
as well. 

I think I'll update the KIP to reflect those suggestions. One more question: do 
you think this should allow a regex for fields, or stick with the explicit 
naming of the fields?
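For reference, salted field hashing via `MessageDigest` could look roughly like the following (the class and method names are hypothetical, not the KIP's actual API):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class FieldHasher {

    // Hash a field value with an optional salt, returning a lowercase hex
    // string. The algorithm must be one MessageDigest supports, e.g.
    // "MD5", "SHA-1", or "SHA-256".
    static String hash(String algorithm, String salt, String value) {
        try {
            MessageDigest digest = MessageDigest.getInstance(algorithm);
            digest.update(salt.getBytes(StandardCharsets.UTF_8));
            byte[] bytes = digest.digest(value.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : bytes) {
                hex.append(String.format("%02x", b));  // two hex digits per byte
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalArgumentException("Unsupported hash algorithm: " + algorithm, e);
        }
    }
}
```

Note on the salt question: without a salt, equal plaintexts always produce equal digests, so hashed fields remain linkable across records (and guessable via rainbow tables); a per-field or per-connector salt breaks that.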

Thanks for the great feedback

Brandon Brown

> On Oct 22, 2020, at 12:40 PM, Gunnar Morling 
>  wrote:
> 
> Hey Brandon,
> 
> I think that's an interesting idea, we got something as a built-in
> connector feature in Debezium, too [1]. Two questions:
> 
> * Can "field" select nested fields, e.g. "after.email"?
> * Did you consider an option for specifying salt for the hash functions?
> 
> --Gunnar
> 
> [1]
> https://debezium.io/documentation/reference/connectors/mysql.html#mysql-property-column-mask-hash
> 
> 
> 
>> Am Do., 22. Okt. 2020 um 12:53 Uhr schrieb Brandon Brown <
>> bran...@bbrownsound.com>:
>> 
>> Gonna give this another little bump. :)
>> 
>> Brandon Brown
>> 
>>> On Oct 15, 2020, at 12:51 PM, Brandon Brown 
>> wrote:
>>> 
>>> 
>>> As I mentioned in the KIP, this transformer is slightly different from
>> the current MaskField SMT.
>>> 
 Currently there exists a MaskField SMT but that would completely remove
>> the value by setting it to an equivalent null value. One problem with this
>> would be that you’d not be able to know in the case of say a password going
>> through the mask transform it would become "" which could mean that no
>> password was present in the message, or it was removed. However this hash
>> transformer would remove this ambiguity if that makes sense. The proposed
>> hash functions would be MD5, SHA1, and SHA256, which are all supported via
>> MessageDigest.
>>> 
>>> Given this take on things do you still think there would be value in
>> this smt?
>>> 
>>> 
>>> Brandon Brown
 On Oct 15, 2020, at 12:36 PM, Ning Zhang 
>> wrote:
 
 Hello, I think this SMT feature is parallel to
>> https://docs.confluent.io/current/connect/transforms/index.html
 
>> On 2020/10/15 15:24:51, Brandon Brown 
>> wrote:
> Bumping this thread.
> Please take a look at the KIP and vote or let me know if you have any
>> feedback.
> 
> KIP:
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-665%3A+Kafka+Connect+Hash+SMT
> 
> Proposed: https://github.com/apache/kafka/pull/9057
> 
> Thanks
> 
> Brandon Brown
> 
>>> On Oct 8, 2020, at 10:30 PM, Brandon Brown 
>> wrote:
>> 
>> Just wanted to give another bump on this and see if anyone had any
>> comments.
>> 
>> Thanks!
>> 
>> Brandon Brown
>> 
>>> On Oct 1, 2020, at 9:11 AM, "bran...@bbrownsound.com" <
>> bran...@bbrownsound.com> wrote:
>>> 
>>> Hey Kafka Developers,
>>> 
>>> I’ve created the following KIP and updated it based on feedback from
>> Mickael. I was wondering if we could get a vote on my proposal and move
>> forward with the proposed pr.
>>> 
>>> Thanks so much!
>>> -Brandon
> 
>> 


Re: [VOTE] KIP-665 Kafka Connect Hash SMT

2020-10-22 Thread Gunnar Morling
Hey Brandon,

I think that's an interesting idea, we got something as a built-in
connector feature in Debezium, too [1]. Two questions:

* Can "field" select nested fields, e.g. "after.email"?
* Did you consider an option for specifying salt for the hash functions?

--Gunnar

[1]
https://debezium.io/documentation/reference/connectors/mysql.html#mysql-property-column-mask-hash



Am Do., 22. Okt. 2020 um 12:53 Uhr schrieb Brandon Brown <
bran...@bbrownsound.com>:

> Gonna give this another little bump. :)
>
> Brandon Brown
>
> > On Oct 15, 2020, at 12:51 PM, Brandon Brown 
> wrote:
> >
> > 
> > As I mentioned in the KIP, this transformer is slightly different from
> the current MaskField SMT.
> >
> >> Currently there exists a MaskField SMT but that would completely remove
> the value by setting it to an equivalent null value. One problem with this
> would be that you’d not be able to know in the case of say a password going
> through the mask transform it would become "" which could mean that no
> password was present in the message, or it was removed. However this hash
> transformer would remove this ambiguity if that makes sense. The proposed
> hash functions would be MD5, SHA1, and SHA256, which are all supported via
> MessageDigest.
> >
> > Given this take on things do you still think there would be value in
> this smt?
> >
> >
> > Brandon Brown
> >> On Oct 15, 2020, at 12:36 PM, Ning Zhang 
> wrote:
> >>
> >> Hello, I think this SMT feature is parallel to
> https://docs.confluent.io/current/connect/transforms/index.html
> >>
>  On 2020/10/15 15:24:51, Brandon Brown 
> wrote:
> >>> Bumping this thread.
> >>> Please take a look at the KIP and vote or let me know if you have any
> feedback.
> >>>
> >>> KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-665%3A+Kafka+Connect+Hash+SMT
> >>>
> >>> Proposed: https://github.com/apache/kafka/pull/9057
> >>>
> >>> Thanks
> >>>
> >>> Brandon Brown
> >>>
> > On Oct 8, 2020, at 10:30 PM, Brandon Brown 
> wrote:
> 
>  Just wanted to give another bump on this and see if anyone had any
> comments.
> 
>  Thanks!
> 
>  Brandon Brown
> 
> > On Oct 1, 2020, at 9:11 AM, "bran...@bbrownsound.com" <
> bran...@bbrownsound.com> wrote:
> >
> > Hey Kafka Developers,
> >
> > I’ve created the following KIP and updated it based on feedback from
> Mickael. I was wondering if we could get a vote on my proposal and move
> forward with the proposed pr.
> >
> > Thanks so much!
> > -Brandon
> >>>
>


Requesting Permission to submit Kafka Improvement Proposals

2020-10-22 Thread Aron Tigor Möllers
Hello Kafka Devs,

I hereby kindly request permission to submit KIPs in the Wiki as described
in
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals

Thank you and kind regards

Aron Tigor Moellers


Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-10-22 Thread Bill Bejeck
Hi All,

We've hit code freeze.  The current status for cutting an RC is that there is
one blocker issue.  It looks like there is a fix in the works, so
hopefully it will get merged early next week.

At that point, if there are no other blockers, I'll proceed with the RC
process.

Thanks,
Bill

On Wed, Oct 7, 2020 at 12:10 PM Bill Bejeck  wrote:

> Hi Anna,
>
> I've updated the table to only show KAFKA-10023 as going into 2.7
>
> Thanks,
> Bill
>
> On Tue, Oct 6, 2020 at 6:51 PM Anna Povzner  wrote:
>
>> Hi Bill,
>>
>> Regarding KIP-612, only the first half of the KIP will get into 2.7
>> release: Broker-wide and per-listener connection rate limits, including
>> corresponding configs and metric (KAFKA-10023). I see that the table in
>> the
>> release plan tags KAFKA-10023 as "old", not sure what it refers to. Note
>> that while KIP-612 was approved prior to 2.6 release, none of the
>> implementation went into 2.6 release.
>>
>> The second half of the KIP that adds per-IP connection rate limiting will
>> need to be postponed (KAFKA-10024) till the following release.
>>
>> Thanks,
>> Anna
>>
>> On Tue, Oct 6, 2020 at 2:30 PM Bill Bejeck  wrote:
>>
>> > Hi Kowshik,
>> >
>> > Given that the new feature is contained in the PR and the tooling is
>> > follow-on work (minor work, but that's part of the submitted PR), I
>> think
>> > this is fine.
>> >
>> > Thanks,
>> > BIll
>> >
>> > On Tue, Oct 6, 2020 at 5:00 PM Kowshik Prakasam > >
>> > wrote:
>> >
>> > > Hey Bill,
>> > >
>> > > For KIP-584 , we are
>> in
>> > the
>> > > process of reviewing/merging the write path PR into AK trunk:
>> > > https://github.com/apache/kafka/pull/9001 . As far as the KIP goes,
>> this
>> > > PR
>> > > is a major milestone. The PR merge will hopefully be done before EOD
>> > > tomorrow in time for the feature freeze. Beyond this PR, couple things
>> > are
>> > > left to be completed for this KIP: (1) tooling support and (2)
>> > implementing
>> > > support for feature version deprecation in the broker . In particular,
>> > (1)
>> > > is important for this KIP and the code changes are external to the
>> broker
>> > > (since it is a separate tool we intend to build). As of now, we won't
>> be
>> > > able to merge the tooling changes before feature freeze date. Would
>> it be
>> > > ok to merge the tooling changes before code freeze on 10/22? The
>> tooling
>> > > requirements are explained here:
>> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-584
>> > > 
>> > >
>> > >
>> >
>> %3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Toolingsupport
>> > >
>> > > I would love to hear thoughts from Boyang and Jun as well.
>> > >
>> > >
>> > > Thanks,
>> > > Kowshik
>> > >
>> > >
>> > >
>> > > On Mon, Oct 5, 2020 at 3:29 PM Bill Bejeck  wrote:
>> > >
>> > > > Hi John,
>> > > >
>> > > > I've updated the list of expected KIPs for 2.7.0 with KIP-478.
>> > > >
>> > > > Thanks,
>> > > > Bill
>> > > >
>> > > > On Mon, Oct 5, 2020 at 11:26 AM John Roesler 
>> > > wrote:
>> > > >
>> > > > > Hi Bill,
>> > > > >
>> > > > > Sorry about this, but I've just noticed that KIP-478 is
>> > > > > missing from the list. The url is:
>> > > > >
>> > > > >
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-478+-+Strongly+typed+Processor+API
>> > > > >
>> > > > > The KIP was accepted a long time ago, and the implementation
>> > > > > has been trickling in since 2.6 branch cut. However, most of
>> > > > > the public API implementation is done now, so I think at
>> > > > > this point, we can call it "released in 2.7.0". I'll make
>> > > > > sure it's done by feature freeze.
>> > > > >
>> > > > > Thanks,
>> > > > > -John
>> > > > >
>> > > > > On Thu, 2020-10-01 at 13:49 -0400, Bill Bejeck wrote:
>> > > > > > All,
>> > > > > >
>> > > > > > With the KIP acceptance deadline passing yesterday, I've updated
>> > the
>> > > > > > planned KIP content section of the 2.7.0 release plan
>> > > > > > <
>> > > > >
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158872629
>> > > > > >
>> > > > > > .
>> > > > > >
>> > > > > > Removed proposed KIPs for 2.7.0 not getting approval
>> > > > > >
>> > > > > >1. KIP-653
>> > > > > ><
>> > > > >
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2
>> > > > > >
>> > > > > >2. KIP-608
>> > > > > ><
>> > > > >
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-608+-+Expose+Kafka+Metrics+in+Authorizer
>> > > > > >
>> > > > > >3. KIP-508
>> > > > > ><
>> > > > >
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-508%3A+Make+Suppression+State+Queriable
>> > > > > >
>> > > > > >
>> > > > > > KIPs added
>> > > > > >
>> > > > > >1. KIP-671
>> > > > > ><
>> > > > >
>> > > >
>> > >
>> >
>> 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #163

2020-10-22 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Add some class javadoc to Admin client (#9459)

[github] MINOR: TopologyTestDriver should not require dummy parameters (#9477)


--
[...truncated 3.42 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED


[jira] [Resolved] (KAFKA-10284) Group membership update due to static member rejoin should be persisted

2020-10-22 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-10284.
--
Resolution: Fixed

> Group membership update due to static member rejoin should be persisted
> ---
>
> Key: KAFKA-10284
> URL: https://issues.apache.org/jira/browse/KAFKA-10284
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 2.3.0, 2.4.0, 2.5.0, 2.6.0
>Reporter: Boyang Chen
>Assignee: feyman
>Priority: Critical
>  Labels: help-wanted
> Fix For: 2.7.0
>
> Attachments: How to reproduce the issue in KAFKA-10284.md
>
>
> For known static members rejoin, we would update its corresponding member.id 
> without triggering a new rebalance. This serves the purpose of avoiding 
> unnecessary rebalances for static membership, as well as a fencing purpose if 
> something still uses the old member.id. 
> The bug is that we don't actually persist the membership update, so if no 
> upcoming rebalance gets triggered, this new member.id information will get 
> lost during group coordinator immigration, thus bringing up the zombie member 
> identity.
> The bug find credit goes to [~hachikuji] 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10630) State Directory config could be improved

2020-10-22 Thread John Roesler (Jira)
John Roesler created KAFKA-10630:


 Summary: State Directory config could be improved
 Key: KAFKA-10630
 URL: https://issues.apache.org/jira/browse/KAFKA-10630
 Project: Kafka
  Issue Type: Task
  Components: streams
Reporter: John Roesler


During [https://github.com/apache/kafka/pull/9477,] I noticed that many tests 
wind up providing a state directory config purely to ensure a unique temp 
directory for the test. Since TopologyTestDriver and MockProcessorContext tests 
are typically unit tests, it would be more convenient to initialize those 
components with their own unique temp state directory, following the universal 
pattern from such tests:
{code:java}
props.setProperty(StreamsConfig.STATE_DIR_CONFIG, 
TestUtils.tempDirectory().getAbsolutePath()); {code}
Note that this literal setting is not ideal, since it actually creates a 
directory regardless of whether the application needs one. Instead, we should 
create a new TestUtil method to lazily generate a temp directory _name_ and 
then register a shutdown handler to delete it if it exists. Then, Streams would 
only create the directory if it actually needs persistence.
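A minimal sketch of such a lazy helper (the method name and its placement in TestUtils are hypothetical):

```java
import java.io.File;
import java.util.UUID;

class LazyTempDirs {

    // Generate a unique temp-directory *name* without creating the directory,
    // and register a shutdown hook that deletes it only if something
    // (e.g. a persistent state store) actually created it later.
    static String tempDirectoryName() {
        File dir = new File(System.getProperty("java.io.tmpdir"),
                            "kafka-test-" + UUID.randomUUID());
        Runtime.getRuntime().addShutdownHook(new Thread(() -> deleteRecursively(dir)));
        return dir.getAbsolutePath();
    }

    private static void deleteRecursively(File file) {
        if (!file.exists()) {
            return;  // nothing was ever written, nothing to clean up
        }
        File[] children = file.listFiles();
        if (children != null) {
            for (File child : children) {
                deleteRecursively(child);
            }
        }
        file.delete();
    }
}
```

Streams would then only pay the cost of directory creation when a topology is actually stateful.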

Also, the default value for that config is not platform independent. It is 
simply: {color:#067d17}"/tmp/kafka-streams"{color}. Perhaps instead we should 
set the default to something like "unset" or "" or "none". Then, instead of 
reading the property directly, when Streams actually needs the state directory, 
it could log a warning that the state directory config is not set and call the 
platform-independent Java api for creating a temporary directory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10629) TopologyTestDriver should not require a Properties arg

2020-10-22 Thread John Roesler (Jira)
John Roesler created KAFKA-10629:


 Summary: TopologyTestDriver should not require a Properties arg
 Key: KAFKA-10629
 URL: https://issues.apache.org/jira/browse/KAFKA-10629
 Project: Kafka
  Issue Type: Task
  Components: streams, streams-test-utils
Reporter: John Roesler


As of [https://github.com/apache/kafka/pull/9477,] many TopologyTestDriver 
usages will have no configurations at all to specify, so we should provide a 
constructor that doesn't take a Properties argument. Right now, such 
configuration-free usages have to provide an empty Properties object.
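The proposed overload is just a telescoping constructor; a sketch of the shape (class name hypothetical, standing in for TopologyTestDriver):

```java
import java.util.Properties;

// Hypothetical sketch of the proposed overload: delegate to the existing
// Properties-based constructor with an empty Properties object, so
// configuration-free callers no longer have to pass one themselves.
class DriverSketch {
    private final Properties config;

    DriverSketch() {
        this(new Properties());  // proposed no-arg convenience constructor
    }

    DriverSketch(Properties config) {
        this.config = config;    // existing constructor keeps working as-is
    }

    Properties config() {
        return config;
    }
}
```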



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10628) Follow-up: Remove all unnecessary dummy TopologyTestDriver configs

2020-10-22 Thread John Roesler (Jira)
John Roesler created KAFKA-10628:


 Summary: Follow-up: Remove all unnecessary dummy 
TopologyTestDriver configs
 Key: KAFKA-10628
 URL: https://issues.apache.org/jira/browse/KAFKA-10628
 Project: Kafka
  Issue Type: Task
Reporter: John Roesler


After [https://github.com/apache/kafka/pull/9477,] we no longer need to specify 
dummy values for bootstrap servers and application id when creating a 
TopologyTestDriver.

 

This task is to track down all those unnecessary parameters and delete them. 
You can consult the above pull request for some examples of this.

 

Note that there are times when the application id is actually significant, 
since it is used in conjunction with the state directory to give the driver a 
unique place to store local state. On the other hand, it would be sufficient to 
just set a unique state directory and not bother with the app id in that case.

 

During review, [~chia7712] pointed out that this comment 
([https://github.com/apache/kafka/blob/trunk/streams/test-utils/src/main/java/org/apache/kafka/streams/TopologyTestDriver.java#L138])
 can be removed since it is not necessary anymore. (It's the mention of the 
dummy params from the javadoc of the TopologyTestDriver)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #171

2020-10-22 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-10627) Connect TimestampConverter transform does not support multiple formats for the same field and only allows one field to be transformed at a time

2020-10-22 Thread Joshua Grisham (Jira)
Joshua Grisham created KAFKA-10627:
--

 Summary: Connect TimestampConverter transform does not support 
multiple formats for the same field and only allows one field to be transformed 
at a time
 Key: KAFKA-10627
 URL: https://issues.apache.org/jira/browse/KAFKA-10627
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Joshua Grisham


Some of the limitations of the *TimestampConverter* transform are causing 
issues for us since we have a lot of different producers from different systems 
producing events to some of our topics.  We try our best to have governance on 
the data formats including strict usage of Avro schemas but there are still 
variations in timestamp data types that are allowed by the schema.

In the end there will be multiple formats coming into the same timestamp fields 
(for example, with and without milliseconds, with and without a timezone 
specifier, etc).

And then you get failed events in Connect with messages like this:
{noformat}
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
handler
at 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at 
org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
at 
org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:514)
at 
org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:469)
at 
org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:325)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)
at 
org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Could not parse 
timestamp: value (2020-10-06T12:12:27Z) does not match pattern (yyyy-MM-dd'T'HH:mm:ss.SSSX)
at 
org.apache.kafka.connect.transforms.TimestampConverter$1.toRaw(TimestampConverter.java:120)
at 
org.apache.kafka.connect.transforms.TimestampConverter.convertTimestamp(TimestampConverter.java:450)
at 
org.apache.kafka.connect.transforms.TimestampConverter.applyValueWithSchema(TimestampConverter.java:375)
at 
org.apache.kafka.connect.transforms.TimestampConverter.applyWithSchema(TimestampConverter.java:362)
at 
org.apache.kafka.connect.transforms.TimestampConverter.apply(TimestampConverter.java:279)
at 
org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
at 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
... 14 more
Caused by: java.text.ParseException: Unparseable date: "2020-10-06T12:12:27Z"
at java.text.DateFormat.parse(DateFormat.java:366)
at 
org.apache.kafka.connect.transforms.TimestampConverter$1.toRaw(TimestampConverter.java:120)
... 21 more
{noformat}
 

My thinking is that maybe a good solution is to switch from *java.util.Date* to 
the *java.time* API, and from *SimpleDateFormat* to *DateTimeFormatter*, which 
allows more sophisticated patterns in the config that can match multiple 
different allowable formats.

For example instead of effectively doing this:
{code:java}
SimpleDateFormat format = new 
SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSX");{code}
It can be something like this:
{code:java}
DateTimeFormatter format = DateTimeFormatter.ofPattern("yyyy-MM-dd[['T'][ 
]]HH:mm:ss[.SSSz][.SSS[XXX][X]]");{code}
Also if there are multiple timestamp fields in the schema/events, then today 
you have to chain multiple *TimestampConverter* transforms together but I can 
see a little bit of a performance impact if there are many timestamps on large 
events and the topic has a lot of events coming through.

So it would be great actually if the field name could instead be a 
comma-separated list of field names (much like you can use with *Cast*, 
*ReplaceField*, etc transforms) and then it will just loop through each field 
in the list and apply the same logic (parse field based on string and give 
requested output type).
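A sketch of that loop (all names here are hypothetical, and a real SMT would also have to handle Struct values and schema propagation via the Connect API) for a schemaless, Map-based record value:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch: apply one conversion to every field named in a
// comma-separated config value, the way Cast/ReplaceField accept field lists.
public class MultiFieldConvert {
    static Map<String, Object> convertAll(Map<String, Object> value,
                                          String fieldsConfig,
                                          Function<Object, Object> convert) {
        List<String> fields = Arrays.asList(fieldsConfig.split(","));
        Map<String, Object> out = new HashMap<>(value);
        for (String f : fields) {
            String field = f.trim();
            if (out.containsKey(field)) {
                out.put(field, convert.apply(out.get(field)));
            }
        }
        return out;
    }
}
```

The record is walked once, so the cost of a second or third timestamp field is one extra map lookup and conversion rather than a whole extra pass through a chained transform.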

 



--

Re: [VOTE] KIP-676: Respect the logging hierarchy

2020-10-22 Thread Dongjin Lee
Binding: 3 (John, Gwen, Mickael)
Non-binding: 2 (Dongjin, Brandon)

This KIP is now accepted. Thanks for participating!

Best,
Dongjin

On Thu, Oct 22, 2020 at 7:52 PM Brandon Brown 
wrote:

> +1 (non binding)
>
> Brandon Brown
> > On Oct 22, 2020, at 5:03 AM, Mickael Maison 
> wrote:
> >
> > +1 (binding)
> > Thanks
> >
> >> On Sun, Oct 11, 2020 at 8:09 AM Dongjin Lee  wrote:
> >>
> >> Binding: 2 (John, Gwen)
> >> Non-binding: 1 (Dongjin)
> >>
> >> We need one more binding now.
> >>
> >> Regards,
> >> Dongjin
> >>
> >>> On Sun, Oct 11, 2020 at 3:05 PM Gwen Shapira 
> wrote:
> >>>
> >>> +1 (binding)
> >>>
> >>> Makes sense, thank you.
> >>>
>  On Fri, Oct 9, 2020, 2:50 AM Tom Bentley  wrote:
> >>>
>  Hi all,
> 
>  KIP-676 is pretty trivial and the comments on the discussion thread
> seem
> >>> to
>  be favourable, so I'd like to start a vote on it.
> 
> 
> 
> >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy
>  <
> 
> >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy?moved=true
> >
> 
>  Please take a look if you have time.
> 
>  Many thanks,
> 
>  Tom
> 
> >>>
> >>
> >>
> >> --
> >> *Dongjin Lee*
> >>
> >> *A hitchhiker in the mathematical world.*
> >>
> >>
> >>
> >>
> >> *github:  github.com/dongjinleekr
> >> keybase:
> https://keybase.io/dongjinleekr
> >> linkedin:
> kr.linkedin.com/in/dongjinleekr
> >> speakerdeck:
> speakerdeck.com/dongjin
> >> *
>
-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*




*github:  github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin
*


Re: [VOTE] KIP-665 Kafka Connect Hash SMT

2020-10-22 Thread Brandon Brown
Gonna give this another little bump. :)

Brandon Brown

> On Oct 15, 2020, at 12:51 PM, Brandon Brown  wrote:
> 
> 
> As I mentioned in the KIP, this transformer is slightly different from the 
> current MaskField SMT. 
> 
>> Currently there exists a MaskField SMT but that would completely remove the 
>> value by setting it to an equivalent null value. One problem with this would 
>> be that you’d not be able to know in the case of say a password going 
>> through the mask transform it would become "" which could mean that no 
>> password was present in the message, or it was removed. However this hash 
>> transformer would remove this ambiguity if that makes sense. The proposed 
>> hash functions would be MD5, SHA1, SHA256. which are all supported via 
>> MessageDigest.
> 
> Given this take on things do you still think there would be value in this smt?
> 
> 
> Brandon Brown 
>> On Oct 15, 2020, at 12:36 PM, Ning Zhang  wrote:
>> 
>> Hello, I think this SMT feature is parallel to 
>> https://docs.confluent.io/current/connect/transforms/index.html
>> 
 On 2020/10/15 15:24:51, Brandon Brown  wrote: 
>>> Bumping this thread.
>>> Please take a look at the KIP and vote or let me know if you have any 
>>> feedback.
>>> 
>>> KIP: 
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-665%3A+Kafka+Connect+Hash+SMT
>>> 
>>> Proposed: https://github.com/apache/kafka/pull/9057
>>> 
>>> Thanks
>>> 
>>> Brandon Brown
>>> 
> On Oct 8, 2020, at 10:30 PM, Brandon Brown  
> wrote:
 
 Just wanted to give another bump on this and see if anyone had any 
 comments. 
 
 Thanks!
 
 Brandon Brown
 
> On Oct 1, 2020, at 9:11 AM, "bran...@bbrownsound.com" 
>  wrote:
> 
> Hey Kafka Developers,
> 
> I’ve created the following KIP and updated it based on feedback from 
> Mickael. I was wondering if we could get a vote on my proposal and move 
> forward with the proposed pr.
> 
> Thanks so much!
> -Brandon
>>> 
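For context, the core of the proposal can be sketched with *MessageDigest* directly (class and method names here are illustrative, not the SMT's actual API):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative only: digest the field's string value with one of the
// algorithms the KIP mentions (MD5, SHA-1, SHA-256) and hex-encode it,
// so even an empty value yields a distinct, recognizable digest.
public class HashField {
    static String hash(String algorithm, String value) {
        try {
            MessageDigest md = MessageDigest.getInstance(algorithm);
            byte[] digest = md.digest(value.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder(digest.length * 2);
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalArgumentException("Unsupported hash function: " + algorithm, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(hash("SHA-256", "password"));
        System.out.println(hash("SHA-256", "")); // empty value still gets a digest
    }
}
```

This is the ambiguity the KIP describes: unlike MaskField's null replacement, a hashed empty string is clearly distinguishable from a hashed password.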


Re: [VOTE] KIP-676: Respect the logging hierarchy

2020-10-22 Thread Brandon Brown
+1 (non binding)

Brandon Brown
> On Oct 22, 2020, at 5:03 AM, Mickael Maison  wrote:
> 
> +1 (binding)
> Thanks
> 
>> On Sun, Oct 11, 2020 at 8:09 AM Dongjin Lee  wrote:
>> 
>> Binding: 2 (John, Gwen)
>> Non-binding: 1 (Dongjin)
>> 
>> We need one more binding now.
>> 
>> Regards,
>> Dongjin
>> 
>>> On Sun, Oct 11, 2020 at 3:05 PM Gwen Shapira  wrote:
>>> 
>>> +1 (binding)
>>> 
>>> Makes sense, thank you.
>>> 
 On Fri, Oct 9, 2020, 2:50 AM Tom Bentley  wrote:
>>> 
 Hi all,
 
 KIP-676 is pretty trivial and the comments on the discussion thread seem
>>> to
 be favourable, so I'd like to start a vote on it.
 
 
 
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy
 <
 
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy?moved=true
> 
 
 Please take a look if you have time.
 
 Many thanks,
 
 Tom
 
>>> 
>> 
>> 
>> --
>> *Dongjin Lee*
>> 
>> *A hitchhiker in the mathematical world.*
>> 
>> 
>> 
>> 
>> *github:  github.com/dongjinleekr
>> keybase: https://keybase.io/dongjinleekr
>> linkedin: kr.linkedin.com/in/dongjinleekr
>> speakerdeck: speakerdeck.com/dongjin
>> *


Re: [DISCUSS] Checkstyle Import Order

2020-10-22 Thread Brandon Brown
I like option 2. 

Brandon Brown

> On Oct 15, 2020, at 1:36 AM, Dongjin Lee  wrote:
> 
> Hello. I hope to open a discussion about the import order in Java code.
> 
> As Nikolay stated recently[^1], Kafka uses a relatively strict code style
> for Java code. However, it lacks any rule on import order. For this
> reason, the code formatting settings of every local dev environment
> differ from person to person, resulting in countless meaningless
> import-order changes in PRs.
> 
> For example, in `NamedCache.java` in the streams module, the `java.*`
> imports are split into two chunks, embracing the other imports between
> them. So, I propose to define an import order to prevent these kinds of
> cases in the future.
> 
> To define the import order, we have to consider the following three
> orthogonal issues:
> 
> a. How to group the type imports?
> b. Whether to sort the imports alphabetically?
> c. Where to place static imports: above the type imports, or below them.
> 
> Since b and c seem relatively straightforward (sort the imports
> alphabetically and place the static imports below the type imports), I hope
> to focus the discussion of alternatives on problem a.
> 
> I evaluated the following alternatives and checked how many files would be
> affected in each case (based on commit 1457cc652). Here are the
> results:
> 
> *1. kafka, org.apache.kafka, *, javax, java (5 groups): 1222 files.*
> 
> ```
> <module name="ImportOrder">
>   <property name="groups"
>     value="kafka,/^org\.apache\.kafka.*$/,*,javax,java"/>
>   <property name="ordered" value="true"/>
>   <property name="separated" value="true"/>
>   <property name="option" value="bottom"/>
>   <property name="sortStaticImportsAlphabetically" value="true"/>
> </module>
> ```
> 
> *2. (kafka|org.apache.kafka), *, javax? (3 groups): 968 files.*
> 
> ```
> <module name="ImportOrder">
>   <property name="groups" value="(kafka|org\.apache\.kafka),*,/^javax?\./"/>
>   <property name="ordered" value="true"/>
>   <property name="separated" value="true"/>
>   <property name="option" value="bottom"/>
>   <property name="sortStaticImportsAlphabetically" value="true"/>
> </module>
> ```
> 
> *3. (kafka|org.apache.kafka), *, javax, java (4 groups): 533 files.*
> 
> ```
> <module name="ImportOrder">
>   <property name="groups" value="(kafka|org\.apache\.kafka),*,javax,java"/>
>   <property name="ordered" value="true"/>
>   <property name="separated" value="true"/>
>   <property name="option" value="bottom"/>
>   <property name="sortStaticImportsAlphabetically" value="true"/>
> </module>
> ```
> 
> *4. *, javax? (2 groups): 707 files.*
> 
> ```
> <module name="ImportOrder">
>   <property name="groups" value="*,/^javax?\./"/>
>   <property name="ordered" value="true"/>
>   <property name="separated" value="true"/>
>   <property name="option" value="bottom"/>
>   <property name="sortStaticImportsAlphabetically" value="true"/>
> </module>
> ```
> 
> *5. javax?, * (2 groups): 1822 files.*
> 
> ```
> <module name="ImportOrder">
>   <property name="groups" value="/^javax?\./,*"/>
>   <property name="ordered" value="true"/>
>   <property name="separated" value="true"/>
>   <property name="option" value="bottom"/>
>   <property name="sortStaticImportsAlphabetically" value="true"/>
> </module>
> ```
> 
> *6. java, javax, * (3 groups): 1809 files.*
> 
> ```
> <module name="ImportOrder">
>   <property name="groups" value="java,javax,*"/>
>   <property name="ordered" value="true"/>
>   <property name="separated" value="true"/>
>   <property name="option" value="bottom"/>
>   <property name="sortStaticImportsAlphabetically" value="true"/>
> </module>
> ```
> 
> I hope to get some feedback on this issue here.
> 
> For the WIP PR, please refer here: https://github.com/apache/kafka/pull/8404
> 
> Best,
> Dongjin
> 
> [^1]:
> https://lists.apache.org/thread.html/r2bbee24b8a459842a0fc840c6e40958e7158d29f3f2d6c0d223be80b%40%3Cdev.kafka.apache.org%3E
> [^2]:
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/internals/NamedCache.java
> 
> -- 
> *Dongjin Lee*
> 
> *A hitchhiker in the mathematical world.*
> 
> 
> 
> 
> *github:  github.com/dongjinleekr
> keybase: https://keybase.io/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck: speakerdeck.com/dongjin
> *
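For concreteness, here is what the import section of a file would look like under alternative 3 ((kafka|org.apache.kafka), *, javax, java), with groups separated by blank lines, each sorted alphabetically, and static imports at the bottom; the classes are chosen purely for illustration:

```java
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KeyValue;

import org.slf4j.Logger;

import javax.crypto.Cipher;

import java.util.List;
import java.util.Map;

import static java.util.Collections.emptyList;
```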


Re: [DISCUSS] Checkstyle Import Order

2020-10-22 Thread Bruno Cadonna

Hi Dongjin,

Thank you for bringing this up.

I like options 2 and 3.

Best,
Bruno

On 15.10.20 07:36, Dongjin Lee wrote:

Hello. I hope to open a discussion about the import order in Java code.

As Nikolay stated recently[^1], Kafka uses a relatively strict code style
for Java code. However, it lacks any rule on import order. For this
reason, the code formatting settings of every local dev environment
differ from person to person, resulting in countless meaningless
import-order changes in PRs.

For example, in `NamedCache.java` in the streams module, the `java.*`
imports are split into two chunks, embracing the other imports between
them. So, I propose to define an import order to prevent these kinds of
cases in the future.

To define the import order, we have to consider the following three
orthogonal issues:

a. How to group the type imports?
b. Whether to sort the imports alphabetically?
c. Where to place static imports: above the type imports, or below them.

Since b and c seem relatively straightforward (sort the imports
alphabetically and place the static imports below the type imports), I hope
to focus the discussion of alternatives on problem a.

I evaluated the following alternatives and checked how many files would be
affected in each case (based on commit 1457cc652). Here are the
results:

*1. kafka, org.apache.kafka, *, javax, java (5 groups): 1222 files.*

```
<module name="ImportOrder">
  <property name="groups"
    value="kafka,/^org\.apache\.kafka.*$/,*,javax,java"/>
  <property name="ordered" value="true"/>
  <property name="separated" value="true"/>
  <property name="option" value="bottom"/>
  <property name="sortStaticImportsAlphabetically" value="true"/>
</module>
```

*2. (kafka|org.apache.kafka), *, javax? (3 groups): 968 files.*

```
<module name="ImportOrder">
  <property name="groups" value="(kafka|org\.apache\.kafka),*,/^javax?\./"/>
  <property name="ordered" value="true"/>
  <property name="separated" value="true"/>
  <property name="option" value="bottom"/>
  <property name="sortStaticImportsAlphabetically" value="true"/>
</module>
```

*3. (kafka|org.apache.kafka), *, javax, java (4 groups): 533 files.*

```
<module name="ImportOrder">
  <property name="groups" value="(kafka|org\.apache\.kafka),*,javax,java"/>
  <property name="ordered" value="true"/>
  <property name="separated" value="true"/>
  <property name="option" value="bottom"/>
  <property name="sortStaticImportsAlphabetically" value="true"/>
</module>
```

*4. *, javax? (2 groups): 707 files.*

```
<module name="ImportOrder">
  <property name="groups" value="*,/^javax?\./"/>
  <property name="ordered" value="true"/>
  <property name="separated" value="true"/>
  <property name="option" value="bottom"/>
  <property name="sortStaticImportsAlphabetically" value="true"/>
</module>
```

*5. javax?, * (2 groups): 1822 files.*

```
<module name="ImportOrder">
  <property name="groups" value="/^javax?\./,*"/>
  <property name="ordered" value="true"/>
  <property name="separated" value="true"/>
  <property name="option" value="bottom"/>
  <property name="sortStaticImportsAlphabetically" value="true"/>
</module>
```

*6. java, javax, * (3 groups): 1809 files.*

```
<module name="ImportOrder">
  <property name="groups" value="java,javax,*"/>
  <property name="ordered" value="true"/>
  <property name="separated" value="true"/>
  <property name="option" value="bottom"/>
  <property name="sortStaticImportsAlphabetically" value="true"/>
</module>
```

I hope to get some feedback on this issue here.

For the WIP PR, please refer here: https://github.com/apache/kafka/pull/8404

Best,
Dongjin

[^1]:
https://lists.apache.org/thread.html/r2bbee24b8a459842a0fc840c6e40958e7158d29f3f2d6c0d223be80b%40%3Cdev.kafka.apache.org%3E
[^2]:
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/internals/NamedCache.java



[jira] [Created] (KAFKA-10626) Add support for max.in.flight.requests.per.connection behaviour in MockClient

2020-10-22 Thread Rajini Sivaram (Jira)
Rajini Sivaram created KAFKA-10626:
--

 Summary: Add support for max.in.flight.requests.per.connection 
behaviour  in MockClient
 Key: KAFKA-10626
 URL: https://issues.apache.org/jira/browse/KAFKA-10626
 Project: Kafka
  Issue Type: Task
  Components: unit tests
Reporter: Rajini Sivaram


We currently don't have an easy way to test 
max.in.flight.requests.per.connection behaviour in unit tests. It will be 
useful to add this for tests like the one in KAFKA-10520 with max.in.flight=1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-676: Respect the logging hierarchy

2020-10-22 Thread Mickael Maison
+1 (binding)
Thanks

On Sun, Oct 11, 2020 at 8:09 AM Dongjin Lee  wrote:
>
> Binding: 2 (John, Gwen)
> Non-binding: 1 (Dongjin)
>
> We need one more binding now.
>
> Regards,
> Dongjin
>
> On Sun, Oct 11, 2020 at 3:05 PM Gwen Shapira  wrote:
>
> > +1 (binding)
> >
> > Makes sense, thank you.
> >
> > On Fri, Oct 9, 2020, 2:50 AM Tom Bentley  wrote:
> >
> > > Hi all,
> > >
> > > KIP-676 is pretty trivial and the comments on the discussion thread seem
> > to
> > > be favourable, so I'd like to start a vote on it.
> > >
> > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy
> > > <
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy?moved=true
> > > >
> > >
> > > Please take a look if you have time.
> > >
> > > Many thanks,
> > >
> > > Tom
> > >
> >
>
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
>
>
>
> *github:  github.com/dongjinleekr
> keybase: https://keybase.io/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck: speakerdeck.com/dongjin
> *


Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #31

2020-10-22 Thread Apache Jenkins Server
See 


Changes:

[Bill Bejeck] KAFKA-10520; Ensure transactional producers poll if 
leastLoadedNode not available with max.in.flight=1 (#9406)


--
[...truncated 6.86 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

2.7 recent commits

2020-10-22 Thread Sophie Blee-Goldman
Hey all,

Just a heads up that two recent commits on the 2.7 branch have had their
commit ids changed (1 & 2).
The commits themselves are the same, but if you checked out a local copy of
2.7 earlier today then you may need to get a fresh branch from upstream.

Let Bill or me know if you have any questions.

Happy code freeze!
-Sophie