Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Konstantine Karantasis
Congratulations, Sophie!

Konstantine

On Mon, Oct 19, 2020 at 6:33 PM Luke Chen  wrote:

> Congratulations, Sophie!
> You always provide good review comments for my PRs.
> Well deserved!
>
> Luke
>
> On Tue, Oct 20, 2020 at 9:23 AM Rankesh Kumar  wrote:
>
> > Many congratulations, Sophie.
> >
> > Best regards,
> > Rankesh
> >
> >
> > > On 20-Oct-2020, at 12:34 AM, Gwen Shapira  wrote:
> > >
> > > Congratulations, Sophie!
> > >
> > > On Mon, Oct 19, 2020 at 9:41 AM Matthias J. Sax 
> > wrote:
> > >>
> > >> Hi all,
> > >>
> > >> I am excited to announce that A. Sophie Blee-Goldman has accepted her
> > >> invitation to become an Apache Kafka committer.
> > >>
> > >> Sophie has been actively contributing to Kafka since Feb 2019 and has
> > >> accumulated 140 commits. She took the lead authoring 4 KIPs:
> > >>
> > >> - KIP-453: Add close() method to RocksDBConfigSetter
> > >> - KIP-445: In-memory Session Store
> > >> - KIP-428: Add in-memory window store
> > >> - KIP-613: Add end-to-end latency metrics to Streams
> > >>
> > >> and helped to implement two critical KIPs, 429 (incremental
> rebalancing)
> > >> and 441 (smooth auto-scaling; not just implementation but also
> design).
> > >>
> > >> In addition, she participates in basically every Kafka Streams-related
> > >> KIP discussion, has reviewed 142 PRs, and is active on the user mailing
> > list.
> > >>
> > >> Thanks for all the contributions, Sophie!
> > >>
> > >>
> > >> Please join me in congratulating her!
> > >> -Matthias
> > >>
> > >
> > >
> > > --
> > > Gwen Shapira
> > > Engineering Manager | Confluent
> > > 650.450.2760 | @gwenshap
> > > Follow us: Twitter | blog
> >
> >
>


Re: [ANNOUNCE] New committer: Chia-Ping Tsai

2020-10-19 Thread Konstantine Karantasis
Congrats, Chia-Ping!

Konstantine

On Mon, Oct 19, 2020 at 6:23 PM Rankesh Kumar  wrote:

> Many congratulations, Chia-Ping!
>
> Best regards,
> Rankesh
>
> > On 20-Oct-2020, at 6:45 AM, Luke Chen  wrote:
> >
> > Congratulations! Chia-Ping大大!
> > Well deserved!
> >
> > Luke
> >
> > On Tue, Oct 20, 2020 at 2:30 AM Mickael Maison  >
> > wrote:
> >
> >> Congrats Chia-Ping!
> >>
> >> On Mon, Oct 19, 2020 at 8:29 PM Ismael Juma  wrote:
> >>>
> >>> Congratulations Chia-Ping!
> >>>
> >>> Ismael
> >>>
> >>> On Mon, Oct 19, 2020 at 10:25 AM Guozhang Wang 
> >> wrote:
> >>>
>  Hello all,
> 
>  I'm happy to announce that Chia-Ping Tsai has accepted his invitation
> >> to
>  become an Apache Kafka committer.
> 
>  Chia-Ping has been contributing to Kafka since March 2018 and has made
> >> 74
>  commits:
> 
>  https://github.com/apache/kafka/commits?author=chia7712
> 
>  He has also authored several major improvements and participated in KIP
>  discussions and PR reviews. His major feature development
> >> includes:
> 
>  * KAFKA-9654: Epoch based ReplicaAlterLogDirsThread creation.
>  * KAFKA-8334: Spiky offsetCommit latency due to lock contention.
>  * KIP-331: Add default implementation to close() and configure() for
> >> serde
>  * KIP-367: Introduce close(Duration) to Producer and AdminClients
>  * KIP-338: Support to exclude the internal topics in kafka-topics.sh
>  command
> 
>  In addition, Chia-Ping has demonstrated great diligence in fixing test
>  failures, and an impressive engineering attitude and taste in fixing
> >> tricky
>  bugs while keeping designs simple.
> 
>  Please join me in congratulating Chia-Ping for all his contributions!
> 
> 
>  -- Guozhang
> 
> >>
>
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #152

2020-10-19 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: the top-level error message of 
AlterPartitionReassignmentsResponseData does not get propagated correctly 
(#9392)


--
[...truncated 3.41 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED


[DISCUSS] KIP-679 Producer will enable the strongest delivery guarantee by default

2020-10-19 Thread Cheng Tan
Hi all,

I’m proposing a new KIP to enable the strongest delivery guarantee by 
default. Today Kafka supports EOS and N-1 concurrent-failure tolerance, but 
the default settings haven’t brought them out of the box. The proposal 
discusses the best approach to changing the producer defaults to `acks=all` 
and `enable.idempotence=true`. Please join the discussion here: 

https://cwiki.apache.org/confluence/display/KAFKA/KIP-679%3A+Producer+will+enable+the+strongest+delivery+guarantee+by+default
 


Thanks

- Cheng Tan
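For reference, the two settings under discussion map onto producer configuration keys like this. A minimal sketch only: it builds the configuration as a plain java.util.Properties object without creating a producer, and the bootstrap address is a placeholder, not something from the KIP.

```java
import java.util.Properties;

public class StrongestDeliveryConfig {

    // Producer settings matching the defaults proposed in KIP-679.
    public static Properties producerProps() {
        Properties props = new Properties();
        // Placeholder broker address for illustration only.
        props.put("bootstrap.servers", "localhost:9092");
        // Wait for the full in-sync replica set to acknowledge each write,
        // giving N-1 concurrent failure tolerance.
        props.put("acks", "all");
        // Let the broker deduplicate retried sends, so retries cannot
        // introduce duplicates or reordering within a partition.
        props.put("enable.idempotence", "true");
        return props;
    }

    public static void main(String[] args) {
        Properties p = producerProps();
        System.out.println(p.getProperty("acks"));               // all
        System.out.println(p.getProperty("enable.idempotence")); // true
    }
}
```

Today both keys must be set explicitly; the KIP's point is that these values would become the out-of-the-box defaults.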

Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Luke Chen
Congratulations, Sophie!
You always provide good review comments for my PRs.
Well deserved!

Luke

On Tue, Oct 20, 2020 at 9:23 AM Rankesh Kumar  wrote:

> Many congratulations, Sophie.
>
> Best regards,
> Rankesh
>
>
> > On 20-Oct-2020, at 12:34 AM, Gwen Shapira  wrote:
> >
> > Congratulations, Sophie!
> >
> > On Mon, Oct 19, 2020 at 9:41 AM Matthias J. Sax 
> wrote:
> >>
> >> Hi all,
> >>
> >> I am excited to announce that A. Sophie Blee-Goldman has accepted her
> >> invitation to become an Apache Kafka committer.
> >>
> >> Sophie has been actively contributing to Kafka since Feb 2019 and has
> >> accumulated 140 commits. She took the lead authoring 4 KIPs:
> >>
> >> - KIP-453: Add close() method to RocksDBConfigSetter
> >> - KIP-445: In-memory Session Store
> >> - KIP-428: Add in-memory window store
> >> - KIP-613: Add end-to-end latency metrics to Streams
> >>
> >> and helped to implement two critical KIPs, 429 (incremental rebalancing)
> >> and 441 (smooth auto-scaling; not just implementation but also design).
> >>
> >> In addition, she participates in basically every Kafka Streams-related
> >> KIP discussion, has reviewed 142 PRs, and is active on the user mailing
> list.
> >>
> >> Thanks for all the contributions, Sophie!
> >>
> >>
> >> Please join me in congratulating her!
> >> -Matthias
> >>
> >
> >
> > --
> > Gwen Shapira
> > Engineering Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog
>
>


Re: [ANNOUNCE] New committer: Chia-Ping Tsai

2020-10-19 Thread Rankesh Kumar
Many congratulations, Chia-Ping!

Best regards,
Rankesh

> On 20-Oct-2020, at 6:45 AM, Luke Chen  wrote:
> 
> Congratulations! Chia-Ping大大!
> Well deserved!
> 
> Luke
> 
> On Tue, Oct 20, 2020 at 2:30 AM Mickael Maison 
> wrote:
> 
>> Congrats Chia-Ping!
>> 
>> On Mon, Oct 19, 2020 at 8:29 PM Ismael Juma  wrote:
>>> 
>>> Congratulations Chia-Ping!
>>> 
>>> Ismael
>>> 
>>> On Mon, Oct 19, 2020 at 10:25 AM Guozhang Wang 
>> wrote:
>>> 
 Hello all,
 
 I'm happy to announce that Chia-Ping Tsai has accepted his invitation
>> to
 become an Apache Kafka committer.
 
 Chia-Ping has been contributing to Kafka since March 2018 and has made
>> 74
 commits:
 
 https://github.com/apache/kafka/commits?author=chia7712
 
 He has also authored several major improvements and participated in KIP
 discussions and PR reviews. His major feature development
>> includes:
 
 * KAFKA-9654: Epoch based ReplicaAlterLogDirsThread creation.
 * KAFKA-8334: Spiky offsetCommit latency due to lock contention.
 * KIP-331: Add default implementation to close() and configure() for
>> serde
 * KIP-367: Introduce close(Duration) to Producer and AdminClients
 * KIP-338: Support to exclude the internal topics in kafka-topics.sh
 command
 
 In addition, Chia-Ping has demonstrated great diligence in fixing test
 failures, and an impressive engineering attitude and taste in fixing
>> tricky
 bugs while keeping designs simple.
 
 Please join me in congratulating Chia-Ping for all his contributions!
 
 
 -- Guozhang
 
>> 



Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Rankesh Kumar
Many congratulations, Sophie.

Best regards,
Rankesh


> On 20-Oct-2020, at 12:34 AM, Gwen Shapira  wrote:
> 
> Congratulations, Sophie!
> 
> On Mon, Oct 19, 2020 at 9:41 AM Matthias J. Sax  wrote:
>> 
>> Hi all,
>> 
>> I am excited to announce that A. Sophie Blee-Goldman has accepted her
>> invitation to become an Apache Kafka committer.
>> 
>> Sophie has been actively contributing to Kafka since Feb 2019 and has
>> accumulated 140 commits. She took the lead authoring 4 KIPs:
>> 
>> - KIP-453: Add close() method to RocksDBConfigSetter
>> - KIP-445: In-memory Session Store
>> - KIP-428: Add in-memory window store
>> - KIP-613: Add end-to-end latency metrics to Streams
>> 
>> and helped to implement two critical KIPs, 429 (incremental rebalancing)
>> and 441 (smooth auto-scaling; not just implementation but also design).
>> 
>> In addition, she participates in basically every Kafka Streams-related
>> KIP discussion, has reviewed 142 PRs, and is active on the user mailing list.
>> 
>> Thanks for all the contributions, Sophie!
>> 
>> 
>> Please join me in congratulating her!
>> -Matthias
>> 
> 
> 
> -- 
> Gwen Shapira
> Engineering Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog



Re: [ANNOUNCE] New committer: Chia-Ping Tsai

2020-10-19 Thread Luke Chen
Congratulations! Chia-Ping大大!
Well deserved!

Luke

On Tue, Oct 20, 2020 at 2:30 AM Mickael Maison 
wrote:

> Congrats Chia-Ping!
>
> On Mon, Oct 19, 2020 at 8:29 PM Ismael Juma  wrote:
> >
> > Congratulations Chia-Ping!
> >
> > Ismael
> >
> > On Mon, Oct 19, 2020 at 10:25 AM Guozhang Wang 
> wrote:
> >
> > > Hello all,
> > >
> > > I'm happy to announce that Chia-Ping Tsai has accepted his invitation
> to
> > > become an Apache Kafka committer.
> > >
> > > Chia-Ping has been contributing to Kafka since March 2018 and has made
> 74
> > > commits:
> > >
> > > https://github.com/apache/kafka/commits?author=chia7712
> > >
> > > He has also authored several major improvements and participated in KIP
> > > discussions and PR reviews. His major feature development
> includes:
> > >
> > > * KAFKA-9654: Epoch based ReplicaAlterLogDirsThread creation.
> > > * KAFKA-8334: Spiky offsetCommit latency due to lock contention.
> > > * KIP-331: Add default implementation to close() and configure() for
> serde
> > > * KIP-367: Introduce close(Duration) to Producer and AdminClients
> > > * KIP-338: Support to exclude the internal topics in kafka-topics.sh
> > > command
> > >
> > > In addition, Chia-Ping has demonstrated great diligence in fixing test
> > > failures, and an impressive engineering attitude and taste in fixing
> tricky
> > > bugs while keeping designs simple.
> > >
> > > Please join me in congratulating Chia-Ping for all his contributions!
> > >
> > >
> > > -- Guozhang
> > >
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #157

2020-10-19 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fixed comment to refer to UpdateMetadataPartitionState rather 
than UpdateMetadataTopicState. (#9447)

[github] KAFKA-10605: Deprecate old PAPI registration methods (#9448)


--
[...truncated 3.20 MB...]
org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > readTaskState 
PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
deleteConnectorState STARTED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testThreadName 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > 
deleteConnectorState PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > deleteTaskState 
STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreTest > deleteTaskState 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
STARTED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.integration.RestExtensionIntegrationTest > 
testRestExtensionApi STARTED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
STARTED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart STARTED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet 
STARTED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSetNull 
STARTED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSetNull 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
STARTED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
readTopicStatus STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
readTopicStatus PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
readInvalidStatus STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
readInvalidStatus PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
readInvalidStatusValue STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
readInvalidStatusValue PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
deleteTopicStatus STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
deleteTopicStatus PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
putTopicState STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
putTopicState PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
putTopicStateRetriableFailure STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
putTopicStateRetriableFailure PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
putTopicStateNonRetriableFailure STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
putTopicStateNonRetriableFailure PASSED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
putTopicStateShouldOverridePreviousState STARTED

org.apache.kafka.connect.storage.KafkaStatusBackingStoreFormatTest > 
putTopicStateShouldOverridePreviousState PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testStartStop 
STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutConnectorConfig STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigs STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsZeroTasks STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreTargetState STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreTargetState PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testBackgroundUpdateTargetState STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testBackgroundUpdateTargetState 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #185

2020-10-19 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10605: Deprecate old PAPI registration methods (#9448)


--
[...truncated 1.89 MB...]
kafka.admin.PreferredReplicaLeaderElectionCommandTest > 
testWithOfflinePreferredReplica PASSED

kafka.admin.PreferredReplicaLeaderElectionCommandTest > testNoPartitionsGiven 
STARTED

kafka.admin.FeatureCommandTest > testDowngradeFeaturesSuccess PASSED

kafka.admin.FeatureCommandTest > testDescribeFeaturesSuccess STARTED

kafka.admin.PreferredReplicaLeaderElectionCommandTest > testNoPartitionsGiven 
PASSED

kafka.admin.PreferredReplicaLeaderElectionCommandTest > 
testMultipleBrokersGiven STARTED

kafka.admin.FeatureCommandTest > testDescribeFeaturesSuccess PASSED

kafka.admin.FeatureCommandTest > testUpgradeAllFeaturesSuccess STARTED

kafka.admin.FeatureCommandTest > testUpgradeAllFeaturesSuccess PASSED

kafka.admin.FeatureCommandTest > testUpgradeFeaturesFailure STARTED

kafka.admin.PreferredReplicaLeaderElectionCommandTest > 
testMultipleBrokersGiven PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.UncleanLeaderElectionTest > 
testTopicUncleanLeaderElectionEnable STARTED

kafka.admin.FeatureCommandTest > testUpgradeFeaturesFailure PASSED

kafka.admin.TopicCommandWithZKClientTest > testAlterPartitionCount STARTED

kafka.admin.TopicCommandWithZKClientTest > testAlterPartitionCount PASSED

kafka.admin.TopicCommandWithZKClientTest > testAlterInternalTopicPartitionCount 
STARTED

kafka.admin.TopicCommandWithZKClientTest > testAlterInternalTopicPartitionCount 
PASSED

kafka.admin.TopicCommandWithZKClientTest > 
testCreateWithNegativeReplicationFactor STARTED

kafka.admin.TopicCommandWithZKClientTest > 
testCreateWithNegativeReplicationFactor PASSED

kafka.admin.TopicCommandWithZKClientTest > 
testCreateWithInvalidReplicationFactor STARTED

kafka.admin.TopicCommandWithZKClientTest > 
testCreateWithInvalidReplicationFactor PASSED

kafka.admin.TopicCommandWithZKClientTest > testListTopicsWithExcludeInternal 
STARTED

kafka.admin.TopicCommandWithZKClientTest > testListTopicsWithExcludeInternal 
PASSED

kafka.admin.TopicCommandWithZKClientTest > testCreateWithNegativePartitionCount 
STARTED

kafka.admin.TopicCommandWithZKClientTest > testCreateWithNegativePartitionCount 
PASSED

kafka.admin.TopicCommandWithZKClientTest > testCreateIfNotExists STARTED

kafka.admin.TopicCommandWithZKClientTest > testCreateIfNotExists PASSED

kafka.admin.TopicCommandWithZKClientTest > testCreateAlterTopicWithRackAware 
STARTED

kafka.admin.TopicCommandWithZKClientTest > testCreateAlterTopicWithRackAware 
PASSED

kafka.admin.TopicCommandWithZKClientTest > testListTopicsWithIncludeList STARTED

kafka.admin.TopicCommandWithZKClientTest > testListTopicsWithIncludeList PASSED

kafka.admin.TopicCommandWithZKClientTest > testTopicDeletion STARTED

kafka.admin.TopicCommandWithZKClientTest > testTopicDeletion PASSED

kafka.admin.TopicCommandWithZKClientTest > testDescribeIfTopicNotExists STARTED

kafka.admin.TopicCommandWithZKClientTest > testDescribeIfTopicNotExists PASSED

kafka.admin.TopicCommandWithZKClientTest > testDescribeReportOverriddenConfigs 
STARTED

kafka.admin.TopicCommandWithZKClientTest > testDescribeReportOverriddenConfigs 
PASSED

kafka.admin.TopicCommandWithZKClientTest > testListTopics STARTED

kafka.admin.TopicCommandWithZKClientTest > testListTopics PASSED

kafka.admin.TopicCommandWithZKClientTest > testDeleteInternalTopic STARTED

kafka.admin.TopicCommandWithZKClientTest > testDeleteInternalTopic PASSED

kafka.admin.TopicCommandWithZKClientTest > testInvalidTopicLevelConfig STARTED

kafka.admin.TopicCommandWithZKClientTest > testInvalidTopicLevelConfig PASSED

kafka.admin.TopicCommandWithZKClientTest > testAlterConfigs STARTED

kafka.admin.TopicCommandWithZKClientTest > testAlterConfigs PASSED

kafka.admin.TopicCommandWithZKClientTest > 
testConfigPreservationAcrossPartitionAlteration STARTED

kafka.admin.TopicCommandWithZKClientTest > 
testConfigPreservationAcrossPartitionAlteration PASSED

kafka.admin.TopicCommandWithZKClientTest > 
testTopicOperationsWithRegexSymbolInTopicName STARTED

kafka.admin.TopicCommandWithZKClientTest > 
testTopicOperationsWithRegexSymbolInTopicName PASSED

kafka.admin.TopicCommandWithZKClientTest > testCreateWithConfigs STARTED

kafka.admin.TopicCommandWithZKClientTest > testCreateWithConfigs PASSED

kafka.admin.TopicCommandWithZKClientTest > testAlterIfExists STARTED

kafka.admin.TopicCommandWithZKClientTest > testAlterIfExists PASSED

kafka.admin.TopicCommandWithZKClientTest > 
testDescribeAndListTopicsWithoutInternalTopics STARTED

kafka.admin.TopicCommandWithZKClientTest > 
testDescribeAndListTopicsWithoutInternalTopics PASSED

kafka.admin.TopicCommandWithZKClientTest > 

Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #38

2020-10-19 Thread Apache Jenkins Server
See 


Changes:

[John Roesler] KAFKA-10455: Ensure that probing rebalances always occur (#9383)


--
[...truncated 4.98 MB...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[26] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[27] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[27] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[28] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[28] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[29] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[29] PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases STARTED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany STARTED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > testEntry STARTED

kafka.log.OffsetIndexTest > testEntry PASSED

kafka.log.OffsetIndexTest > testSanityLastOffsetEqualToBaseOffset STARTED

kafka.log.OffsetIndexTest > testSanityLastOffsetEqualToBaseOffset PASSED

kafka.log.OffsetIndexTest > forceUnmapTest STARTED

kafka.log.OffsetIndexTest > forceUnmapTest PASSED

kafka.log.OffsetIndexTest > testFetchUpperBoundOffset STARTED

kafka.log.OffsetIndexTest > testFetchUpperBoundOffset PASSED

kafka.log.OffsetIndexTest > randomLookupTest STARTED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testEntryOverflow STARTED

kafka.log.OffsetIndexTest > testEntryOverflow PASSED

kafka.log.OffsetIndexTest > testReopen STARTED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder STARTED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate STARTED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogManagerTest > testFileReferencesAfterAsyncDelete STARTED

kafka.log.LogManagerTest > testFileReferencesAfterAsyncDelete PASSED

kafka.log.LogManagerTest > testCreateLogWithLogDirFallback STARTED

kafka.log.LogManagerTest > testCreateLogWithLogDirFallback PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize STARTED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
STARTED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testCreateLogWithInvalidLogDir STARTED

kafka.log.LogManagerTest > testCreateLogWithInvalidLogDir PASSED

kafka.log.LogManagerTest > testTopicConfigChangeUpdatesLogConfig STARTED

kafka.log.LogManagerTest > testTopicConfigChangeUpdatesLogConfig PASSED

kafka.log.LogManagerTest > testGetNonExistentLog STARTED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testConfigChangeGetsCleanedUp STARTED

kafka.log.LogManagerTest > testConfigChangeGetsCleanedUp PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails STARTED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment STARTED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments STARTED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints STARTED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testBrokerConfigChangeDeliveredToAllLogs STARTED

kafka.log.LogManagerTest > testBrokerConfigChangeDeliveredToAllLogs PASSED

kafka.log.LogManagerTest > testCheckpointForOnlyAffectedLogs STARTED

kafka.log.LogManagerTest > testCheckpointForOnlyAffectedLogs PASSED

kafka.log.LogManagerTest > testTimeBasedFlush STARTED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog STARTED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testDoesntCleanLogsWithCompactPolicy STARTED

kafka.log.LogManagerTest > testDoesntCleanLogsWithCompactPolicy PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash STARTED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogManagerTest > testCreateAndDeleteOverlyLongTopic STARTED

kafka.log.LogManagerTest > testCreateAndDeleteOverlyLongTopic PASSED

kafka.log.LogManagerTest > testDoesntCleanLogsWithCompactDeletePolicy STARTED

kafka.log.LogManagerTest > testDoesntCleanLogsWithCompactDeletePolicy PASSED

kafka.log.LogManagerTest > testConfigChangesWithNoLogGettingInitialized STARTED

kafka.log.LogManagerTest > testConfigChangesWithNoLogGettingInitialized PASSED

kafka.log.TransactionIndexTest > testTruncate STARTED

kafka.log.TransactionIndexTest > testTruncate PASSED


Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #20

2020-10-19 Thread Apache Jenkins Server
See 


Changes:

[John Roesler] KAFKA-10605: Deprecate old PAPI registration methods (#9448)

[Jun Rao] MINOR: Update jdk and maven names in Jenkinsfile (#9453)


--
[...truncated 1.98 MB...]

(test output elided: kafka.api.GroupEndToEndAuthorizationTest, kafka.api.UserClientIdQuotaTest, kafka.api.SslAdminIntegrationTest, kafka.api.SslEndToEndAuthorizationTest, kafka.api.ApiUtilsTest, and kafka.metrics.KafkaTimerTest; all completed tests PASSED, log truncated mid-run)


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #151

2020-10-19 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update jdk and maven names in Jenkinsfile (#9453)

[github] KAFKA-9274: Add timeout handling for state restore and StandbyTasks 
(#9368)

[github] KAFKA-10455: Ensure that probing rebalances always occur (#9383)

[github] MINOR: Fixed comment to refer to UpdateMetadataPartitionState rather 
than UpdateMetadataTopicState. (#9447)

[github] KAFKA-10605: Deprecate old PAPI registration methods (#9448)


--
[...truncated 1.85 MB...]

(test output elided: kafka.log.LogValidatorTest; all completed tests PASSED, log truncated mid-run)

Jenkins build is back to normal : Kafka » kafka-2.7-jdk8 #19

2020-10-19 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #156

2020-10-19 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Kafka » kafka-2.6-jdk8 #37

2020-10-19 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-10605) KIP-478: deprecate the replaced Processor API members

2020-10-19 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-10605.
--
Resolution: Fixed

> KIP-478: deprecate the replaced Processor API members
> -
>
> Key: KAFKA-10605
> URL: https://issues.apache.org/jira/browse/KAFKA-10605
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.7.0
>
>
> This is a minor task, but we shouldn't do the release without it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #150

2020-10-19 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-10619) Producer will enable EOS by default

2020-10-19 Thread Cheng Tan (Jira)
Cheng Tan created KAFKA-10619:
-

 Summary: Producer will enable EOS by default
 Key: KAFKA-10619
 URL: https://issues.apache.org/jira/browse/KAFKA-10619
 Project: Kafka
  Issue Type: Improvement
Reporter: Cheng Tan
Assignee: Cheng Tan


This is follow-up work for KIP-185.

In the producer config:
 1. the default value of `acks` will change to `all`
 2. `enable.idempotence` will change to `true`

[An analysis of the impact of max.in.flight.requests.per.connection and acks on 
Producer 
performance|https://cwiki.apache.org/confluence/display/KAFKA/An+analysis+of+the+impact+of+max.in.flight.requests.per.connection+and+acks+on+Producer+performance]
 indicates that changing `acks` from `1` to `all` will not significantly 
increase latency or decrease throughput.
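As a minimal, self-contained sketch of what the proposed defaults amount to, the snippet below builds the producer properties explicitly using plain `java.util.Properties`. The config key strings (`acks`, `enable.idempotence`) are the standard producer keys; the class and method names here are illustrative, not part of any Kafka API.

```java
import java.util.Properties;

public class ProposedProducerDefaults {

    // Builds producer properties that opt into the defaults proposed in
    // KAFKA-10619: acks=all and enable.idempotence=true. Applications that
    // want today's behavior after the change would set these explicitly
    // to "1" and "false" instead.
    static Properties producerProps(String bootstrapServers) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", bootstrapServers);
        // Proposed new default: wait for all in-sync replicas to acknowledge.
        props.setProperty("acks", "all");
        // Proposed new default: idempotent writes (no duplicates on retry).
        props.setProperty("enable.idempotence", "true");
        return props;
    }

    public static void main(String[] args) {
        Properties p = producerProps("localhost:9092");
        System.out.println("acks=" + p.getProperty("acks"));
        System.out.println("enable.idempotence=" + p.getProperty("enable.idempotence"));
    }
}
```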



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-2.5-jdk8 #18

2020-10-19 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-10332: Update MM2 refreshTopicPartitions() logic (#9343)


--
[...truncated 2.84 MB...]

(test output elided: org.apache.kafka.streams.test.ConsumerRecordFactoryTest, org.apache.kafka.streams.internals.KeyValueStoreFacadeTest, and org.apache.kafka.streams.internals.WindowStoreFacadeTest; all completed tests PASSED, log truncated mid-run)

Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #183

2020-10-19 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-10455) Probing rebalances are not guaranteed to be triggered by non-leader members

2020-10-19 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-10455.
--
Resolution: Fixed

> Probing rebalances are not guaranteed to be triggered by non-leader members
> ---
>
> Key: KAFKA-10455
> URL: https://issues.apache.org/jira/browse/KAFKA-10455
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.6.0
>Reporter: A. Sophie Blee-Goldman
>Assignee: Leah Thomas
>Priority: Blocker
> Fix For: 2.7.0, 2.6.1
>
>
> Apparently, if a consumer rejoins the group with the same subscription 
> userdata that it previously sent, it will not trigger a rebalance. The one 
> exception here is that the group leader will always trigger a rebalance when 
> it rejoins the group.
> This has implications for KIP-441, where we rely on asking an arbitrary 
> thread to enforce the followup probing rebalances. Technically we do ask a 
> thread living on the same instance as the leader, so the odds that the leader 
> will be chosen aren't completely abysmal, but for any multithreaded 
> application they are still at best only 50%.
> Of course in general the userdata will have changed within a span of 10 
> minutes, so the actual likelihood of hitting this is much lower –  it can 
> only happen if the member's task offset sums remained unchanged. 
> Realistically, this probably requires that the member only have 
> fully-restored active tasks (encoded with the constant sentinel -2) and that 
> no tasks be added or removed.
>  
> One solution would be to make sure the leader is responsible for the probing 
> rebalance. To do this, we would need to somehow expose the memberId of the 
> thread's main consumer to the partition assignor. I'm actually not sure if 
> that's currently possible to figure out or not. If not, we could just assign 
> the probing rebalance to every thread on the leader's instance. This 
> shouldn't result in multiple followup rebalances as the rebalance schedule 
> will be updated/reset on the first followup rebalance.
> Another solution would be to make sure the userdata is always different. We 
> could encode an extra bit that flip-flops, but then we'd have to persist the 
> latest value somewhere/somehow. Alternatively we could just encode the next 
> probing rebalance time in the subscription userdata, since that is guaranteed 
> to always be different from the previous rebalance. This might get tricky 
> though, and certainly wastes space in the subscription userdata. Also, this 
> would only solve the problem for KIP-441 probing rebalances, meaning we'd 
> have to individually ensure the userdata has changed for every type of 
> followup rebalance (see related issue below). So the first proposal, 
> requiring the leader trigger the rebalance, would be preferable.
> Note that, imho, we should just allow anyone to trigger a rebalance by 
> rejoining the group. But this would presumably require a broker-side change 
> and thus we would still need a workaround for KIP-441 to work with brokers.
>  
> Related issue:
> This also means the Streams workaround for 
> [KAFKA-9821|https://issues.apache.org/jira/browse/KAFKA-9821] is 
> not airtight, as we encode the followup rebalance in the member who is 
> supposed to _receive_ a revoked partition, rather than the member who is 
> actually revoking said partition. While the member doing the revoking will be 
> guaranteed to have different userdata, the member receiving the partition may 
> not. Making it the responsibility of the leader to trigger _any_ type of 
> followup rebalance would solve this issue as well.
> Note that other types of followup rebalance (version probing, static 
> membership with host info change) are guaranteed to have a change in the 
> subscription userdata, and will not hit this bug.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-671: Add method to Shutdown entire Streams Application

2020-10-19 Thread John Roesler
Thanks, Walker.

That change looks good to me.

-John

On Mon, 2020-10-19 at 12:06 -0700, Walker Carlson wrote:
> Hello all,
> 
> Taking into account the feedback on that last change, I have removed some
> of the changes: we will no longer have a separate handler for the global
> thread. To keep the handlers aligned, there will also be no option to
> remove just a stream thread.
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-671%3A+Introduce+Kafka+Streams+Specific+Uncaught+Exception+Handler
> 
> If you have any concerns please let me know,
> Walker
> 
> On Wed, Oct 14, 2020 at 12:51 PM John Roesler  wrote:
> 
> > Thanks, Sophie,
> > 
> > That makes sense. Should we add a whole new interface and a
> > separate kind of listener just because global threads don't
> > support restarts _yet_, though?
> > 
> > It seems like that will just widen the API surface area, and
> > in a few more months, there will be no more difference
> > between the two handlers. But people will forever afterward
> > have to register two different handlers to do the same
> > thing.
> > 
> > Two alternatives we could consider:
> > 1. Just don't add the "restart" option until it's possible
> > for all threads. This KIP is already accepted with a
> > "restart" option that we know won't be added until KIP-663
> > is done. Maybe we just wait for KIP-406 as well. But the
> > _rest_ of this KIP can be implemented in the mean time.
> > 2. Just log an error and kill the thread anyway if the
> > handler for the global thread opts to "retry".
> > 
> > In general, it seems like the problem at hand is better
> > solved by allowing/disallowing that one option, versus
> > adding a whole new interface.
> > 
> > Thanks,
> > -John
> > 
> > On Wed, 2020-10-14 at 11:48 -0700, Sophie Blee-Goldman
> > wrote:
> > > I don't think the proposal was to *never* add the "replace" functionality
> > > for
> > > the global thread, but we didn't want to tie up this KIP with anything
> > more.
> > > As I understand it, the goal of Walker's proposal was to set us up for
> > > success if/when we want to add new functionality for the global thread,
> > > without necessarily committing to it at this time.
> > > 
> > > Restarting the global thread will take a bit more work since you need to
> > > pause any further work that relies on global state until it's back up.
> > > That's
> > > starting to sound more in the purview of KIP-406 whose current goal
> > > is to effectively restart the global thread on a specific type of
> > exception
> > > (OffsetOutOfRange). If we want to consider expanding that to allow users
> > > to choose to restart the thread, then KIP-406 seems like the more
> > > appropriate place to engage in that discussion.
> > > 
> > > On Wed, Oct 14, 2020 at 7:13 AM John Roesler 
> > wrote:
> > > > Hello Walker,
> > > > 
> > > > Sorry for the late reply, but I didn’t follow the reasoning for the
> > > > separate handler. You said that the global thread doesn’t have
> > “replace”,
> > > > but as of today, none of the threads have “replace”. Why not add that
> > > > ability when we add it for the other threads?
> > > > 
> > > > The nature of an uncaught exception handler is that there is an
> > exception
> > > > that will kill the thread. In that case, it seems like replacement is a
> > > > desirable option.
> > > > 
> > > > What have I missed?
> > > > 
> > > > Thanks,
> > > > John
> > > > 
> > > > On Tue, Oct 13, 2020, at 15:49, Walker Carlson wrote:
> > > > > Those are good points Sophie and Matthias. I specified the defaults
> > > > > in the KIP and standardized the names of the handler to make them a
> > > > > bit more readable.
> > > > > 
> > > > > Thanks for the suggestions,
> > > > > Walker
> > > > > 
> > > > > On Tue, Oct 13, 2020 at 12:24 PM Sophie Blee-Goldman <
> > > > sop...@confluent.io>
> > > > > wrote:
> > > > > 
> > > > > > Super nit: can we standardize the method & enum names?
> > > > > > 
> > > > > > Right now we have these enums:
> > > > > > StreamsUncaughtExceptionHandlerResponse
> > > > > > StreamsUncaughtExceptionHandlerResponseGlobalThread
> > > > > > 
> > > > > > and these callbacks:
> > > > > > handleUncaughtException()
> > > > > > handleExceptionInGlobalThread()
> > > > > > 
> > > > > > The method names have different syntax, which is a bit clunky. I
> > don't
> > > > have
> > > > > > any
> > > > > > strong opinions on what grammar they should follow, just that it
> > > > should be
> > > > > > the
> > > > > > same for each. I also think that we should specify "StreamThread"
> > > > somewhere
> > > > > > in the name of the StreamThread-specific callback, now that we
> > have a
> > > > > > second
> > > > > > callback that specifies it's for the GlobalThread. Something like
> > > > > > "*handleStreamThreadException()*" and
> > "*handleGlobalThreadException*"
> > > > > > The enums are ok, although I think we should include "StreamThread"
> > > > > > somewhere
> > > > > > like with the callbacks. And 

Re: [VOTE] KIP-671: Add method to Shutdown entire Streams Application

2020-10-19 Thread Walker Carlson
Hello all,

Taking into account the feedback on that last change, I have removed some of
the changes: we will no longer have a separate handler for the global
thread. To keep the handlers aligned, there will also be no option to
remove just a stream thread.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-671%3A+Introduce+Kafka+Streams+Specific+Uncaught+Exception+Handler

If you have any concerns please let me know,
Walker

On Wed, Oct 14, 2020 at 12:51 PM John Roesler  wrote:

> Thanks, Sophie,
>
> That makes sense. Should we add a whole new interface and a
> separate kind of listener just because global threads don't
> support restarts _yet_, though?
>
> It seems like that will just widen the API surface area, and
> in a few more months, there will be no more difference
> between the two handlers. But people will forever afterward
> have to register two different handlers to do the same
> thing.
>
> Two alternatives we could consider:
> 1. Just don't add the "restart" option until it's possible
> for all threads. This KIP is already accepted with a
> "restart" option that we know won't be added until KIP-663
> is done. Maybe we just wait for KIP-406 as well. But the
> _rest_ of this KIP can be implemented in the mean time.
> 2. Just log an error and kill the thread anyway if the
> handler for the global thread opts to "retry".
>
> In general, it seems like the problem at hand is better
> solved by allowing/disallowing that one option, versus
> adding a whole new interface.
>
> Thanks,
> -John
>
> On Wed, 2020-10-14 at 11:48 -0700, Sophie Blee-Goldman
> wrote:
> > I don't think the proposal was to *never* add the "replace" functionality
> > for
> > the global thread, but we didn't want to tie up this KIP with anything
> more.
> > As I understand it, the goal of Walker's proposal was to set us up for
> > success if/when we want to add new functionality for the global thread,
> > without necessarily committing to it at this time.
> >
> > Restarting the global thread will take a bit more work since you need to
> > pause any further work that relies on global state until it's back up.
> > That's
> > starting to sound more in the purview of KIP-406 whose current goal
> > is to effectively restart the global thread on a specific type of
> exception
> > (OffsetOutOfRange). If we want to consider expanding that to allow users
> > to choose to restart the thread, then KIP-406 seems like the more
> > appropriate place to engage in that discussion.
> >
> > On Wed, Oct 14, 2020 at 7:13 AM John Roesler 
> wrote:
> >
> > > Hello Walker,
> > >
> > > Sorry for the late reply, but I didn’t follow the reasoning for the
> > > separate handler. You said that the global thread doesn’t have
> “replace”,
> > > but as of today, none of the threads have “replace”. Why not add that
> > > ability when we add it for the other threads?
> > >
> > > The nature of an uncaught exception handler is that there is an
> exception
> > > that will kill the thread. In that case, it seems like replacement is a
> > > desirable option.
> > >
> > > What have I missed?
> > >
> > > Thanks,
> > > John
> > >
> > > On Tue, Oct 13, 2020, at 15:49, Walker Carlson wrote:
> > > > Those are good points Sophie and Matthias. I specified the defaults
> > > > in the KIP and standardized the names of the handler to make them a
> > > > bit more readable.
> > > >
> > > > Thanks for the suggestions,
> > > > Walker
> > > >
> > > > On Tue, Oct 13, 2020 at 12:24 PM Sophie Blee-Goldman <
> > > sop...@confluent.io>
> > > > wrote:
> > > >
> > > > > Super nit: can we standardize the method & enum names?
> > > > >
> > > > > Right now we have these enums:
> > > > > StreamsUncaughtExceptionHandlerResponse
> > > > > StreamsUncaughtExceptionHandlerResponseGlobalThread
> > > > >
> > > > > and these callbacks:
> > > > > handleUncaughtException()
> > > > > handleExceptionInGlobalThread()
> > > > >
> > > > > The method names have different syntax, which is a bit clunky. I
> don't
> > > have
> > > > > any
> > > > > strong opinions on what grammar they should follow, just that it
> > > should be
> > > > > the
> > > > > same for each. I also think that we should specify "StreamThread"
> > > somewhere
> > > > > in the name of the StreamThread-specific callback, now that we
> have a
> > > > > second
> > > > > callback that specifies it's for the GlobalThread. Something like
> > > > > "*handleStreamThreadException()*" and
> "*handleGlobalThreadException*"
> > > > >
> > > > > The enums are ok, although I think we should include "StreamThread"
> > > > > somewhere
> > > > > like with the callbacks. And we can probably shorten them a bit.
> For
> > > > > example
> > > > > "*StreamThreadExceptionResponse*" and
> "*GlobalThreadExceptionResponse*"
> > > > >
> > > > >
> > > > >
> > > > > On Tue, Oct 13, 2020 at 11:48 AM Matthias J. Sax  >
> > > wrote:
> > > > > > Thanks Walker.
> > > > > >
> > > > > > Overall, LGTM. However, I am wondering if we should have default
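The handler shape being debated in the two messages above can be sketched without the Kafka libraries at all. The enum name below follows Sophie's suggested "StreamThreadExceptionResponse"; the constant names and the single-callback interface are stand-ins defined locally to illustrate the design (one unified handler, with "replace thread" as one of the possible responses), not the real KIP-671 API.

```java
public class UncaughtHandlerSketch {

    // Stand-in for the response enum discussed in the thread; the actual
    // KIP-671 constant names are an assumption here.
    enum StreamThreadExceptionResponse {
        REPLACE_THREAD,
        SHUTDOWN_CLIENT,
        SHUTDOWN_APPLICATION
    }

    // Stand-in for the handler interface: a single callback for all
    // threads, matching the thread's conclusion that a separate
    // global-thread handler is not needed.
    interface StreamsUncaughtExceptionHandler {
        StreamThreadExceptionResponse handle(Throwable exception);
    }

    // Example policy: shut the whole application down on unrecoverable
    // Errors, replace the dying thread for anything else.
    static final StreamsUncaughtExceptionHandler HANDLER = exception -> {
        if (exception instanceof Error) {
            return StreamThreadExceptionResponse.SHUTDOWN_APPLICATION;
        }
        return StreamThreadExceptionResponse.REPLACE_THREAD;
    };

    public static void main(String[] args) {
        System.out.println(HANDLER.handle(new OutOfMemoryError()));        // prints SHUTDOWN_APPLICATION
        System.out.println(HANDLER.handle(new RuntimeException("oops"))); // prints REPLACE_THREAD
    }
}
```

With a single handler like this, John's concern about widening the API surface goes away: the global thread would register the same callback, and a response it cannot honor yet (e.g. REPLACE_THREAD) can be rejected or downgraded at registration time rather than through a second interface.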

Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Gwen Shapira
Congratulations, Sophie!

On Mon, Oct 19, 2020 at 9:41 AM Matthias J. Sax  wrote:
>
> Hi all,
>
> I am excited to announce that A. Sophie Blee-Goldman has accepted her
> invitation to become an Apache Kafka committer.
>
> Sophie has been actively contributing to Kafka since Feb 2019 and has
> accumulated 140 commits. She authored 4 KIPs:
>
>  - KIP-453: Add close() method to RocksDBConfigSetter
>  - KIP-445: In-memory Session Store
>  - KIP-428: Add in-memory window store
>  - KIP-613: Add end-to-end latency metrics to Streams
>
> and helped to implement two critical KIPs, 429 (incremental rebalancing)
> and 441 (smooth auto-scaling; not just implementation but also design).
>
> In addition, she participates in basically every Kafka Streams related
> KIP discussion, reviewed 142 PRs, and is active on the user mailing list.
>
> Thanks for all the contributions, Sophie!
>
>
> Please join me to congratulate her!
>  -Matthias
>


-- 
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #18

2020-10-19 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-10332: Update MM2 refreshTopicPartitions() logic (#9343)


--
[...truncated 3.42 MB...]

(test output elided: org.apache.kafka.streams.test.OutputVerifierTest; all completed tests PASSED, log truncated mid-run)
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #155

2020-10-19 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10332: Update MM2 refreshTopicPartitions() logic (#9343)

[github] KAFKA-10599: Implement basic CLI tool for feature versioning system 
(#9409)


--
[...truncated 3.44 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@392320d9, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@140a5785, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@140a5785, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@15f75bb6, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@15f75bb6, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4f4a32d1, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4f4a32d1, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@6fcfe609, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@6fcfe609, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 

Re: [ANNOUNCE] New committer: Chia-Ping Tsai

2020-10-19 Thread Mickael Maison
Congrats Chia-Ping!

On Mon, Oct 19, 2020 at 8:29 PM Ismael Juma  wrote:
>
> Congratulations Chia-Ping!
>
> Ismael
>
> On Mon, Oct 19, 2020 at 10:25 AM Guozhang Wang  wrote:
>
> > Hello all,
> >
> > I'm happy to announce that Chia-Ping Tsai has accepted his invitation to
> > become an Apache Kafka committer.
> >
> > Chia-Ping has been contributing to Kafka since March 2018 and has made 74
> > commits:
> >
> > https://github.com/apache/kafka/commits?author=chia7712
> >
> > He's also authored several major improvements, participated in the KIP
> > discussion and PR reviews as well. His major feature development includes:
> >
> > * KAFKA-9654: Epoch based ReplicaAlterLogDirsThread creation.
> > * KAFKA-8334: Spiky offsetCommit latency due to lock contention.
> > * KIP-331: Add default implementation to close() and configure() for serde
> > * KIP-367: Introduce close(Duration) to Producer and AdminClients
> > * KIP-338: Support to exclude the internal topics in kafka-topics.sh
> > command
> >
> > In addition, Chia-Ping has demonstrated his great diligence fixing test
> > failures, his impressive engineering attitude and taste in fixing tricky
> > bugs while keeping simple designs.
> >
> > Please join me to congratulate Chia-Ping for all the contributions!
> >
> >
> > -- Guozhang
> >


Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Ismael Juma
Congratulations Sophie!

Ismael

On Mon, Oct 19, 2020 at 9:41 AM Matthias J. Sax  wrote:

> Hi all,
>
> I am excited to announce that A. Sophie Blee-Goldman has accepted her
> invitation to become an Apache Kafka committer.
>
> Sophie has been actively contributing to Kafka since Feb 2019 and has
> accumulated 140 commits. She took the lead on authoring 4 KIPs:
>
>  - KIP-453: Add close() method to RocksDBConfigSetter
>  - KIP-445: In-memory Session Store
>  - KIP-428: Add in-memory window store
>  - KIP-613: Add end-to-end latency metrics to Streams
>
> and helped to implement two critical KIPs, 429 (incremental rebalancing)
> and 441 (smooth auto-scaling; not just implementation but also design).
>
> In addition, she participates in basically every Kafka Streams related
> KIP discussion, reviewed 142 PRs, and is active on the user mailing list.
>
> Thanks for all the contributions, Sophie!
>
>
> Please join me to congratulate her!
>  -Matthias
>
>


Re: [ANNOUNCE] New committer: Chia-Ping Tsai

2020-10-19 Thread Ismael Juma
Congratulations Chia-Ping!

Ismael

On Mon, Oct 19, 2020 at 10:25 AM Guozhang Wang  wrote:

> Hello all,
>
> I'm happy to announce that Chia-Ping Tsai has accepted his invitation to
> become an Apache Kafka committer.
>
> Chia-Ping has been contributing to Kafka since March 2018 and has made 74
> commits:
>
> https://github.com/apache/kafka/commits?author=chia7712
>
> He's also authored several major improvements, participated in the KIP
> discussion and PR reviews as well. His major feature development includes:
>
> * KAFKA-9654: Epoch based ReplicaAlterLogDirsThread creation.
> * KAFKA-8334: Spiky offsetCommit latency due to lock contention.
> * KIP-331: Add default implementation to close() and configure() for serde
> * KIP-367: Introduce close(Duration) to Producer and AdminClients
> * KIP-338: Support to exclude the internal topics in kafka-topics.sh
> command
>
> In addition, Chia-Ping has demonstrated his great diligence fixing test
> failures, his impressive engineering attitude and taste in fixing tricky
> bugs while keeping simple designs.
>
> Please join me to congratulate Chia-Ping for all the contributions!
>
>
> -- Guozhang
>


Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Mickael Maison
Congratulations Sophie!

On Mon, Oct 19, 2020 at 8:00 PM James Cheng  wrote:
>
> Congratulations, Sophie!
>
> -James
>
> > On Oct 19, 2020, at 9:40 AM, Matthias J. Sax  wrote:
> >
> > Hi all,
> >
> > I am excited to announce that A. Sophie Blee-Goldman has accepted her
> > invitation to become an Apache Kafka committer.
> >
> > Sophie has been actively contributing to Kafka since Feb 2019 and has
> > accumulated 140 commits. She took the lead on authoring 4 KIPs:
> >
> > - KIP-453: Add close() method to RocksDBConfigSetter
> > - KIP-445: In-memory Session Store
> > - KIP-428: Add in-memory window store
> > - KIP-613: Add end-to-end latency metrics to Streams
> >
> > and helped to implement two critical KIPs, 429 (incremental rebalancing)
> > and 441 (smooth auto-scaling; not just implementation but also design).
> >
> > In addition, she participates in basically every Kafka Streams related
> > KIP discussion, reviewed 142 PRs, and is active on the user mailing list.
> >
> > Thanks for all the contributions, Sophie!
> >
> >
> > Please join me to congratulate her!
> > -Matthias
> >
>


Re: [ANNOUNCE] New committer: David Jacot

2020-10-19 Thread James Cheng
Congratulations, David!

-James

> On Oct 16, 2020, at 9:01 AM, Gwen Shapira  wrote:
> 
> The PMC for Apache Kafka has invited David Jacot as a committer, and
> we are excited to say that he accepted!
> 
> David Jacot has been contributing to Apache Kafka since July 2015 (!)
> and has been very active since August 2019. He contributed several
> notable KIPs:
> 
> KIP-511: Collect and Expose Client Name and Version in Brokers
> KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> KIP-570: Add leader epoch in StopReplicaRequest
> KIP-599: Throttle Create Topic, Create Partition and Delete Topic Operations
> KIP-496: Add an API for the deletion of consumer offsets
> 
> In addition, David Jacot reviewed many community contributions and
> showed great technical and architectural taste. Great reviews are hard
> and often thankless work - but this is what makes Kafka a great
> product and helps us grow our community.
> 
> Thanks for all the contributions, David! Looking forward to more
> collaboration in the Apache Kafka community.
> 
> -- 
> Gwen Shapira



Re: [ANNOUNCE] New committer: Chia-Ping Tsai

2020-10-19 Thread James Cheng
Congratulations Chia-Ping!

-James

> On Oct 19, 2020, at 10:24 AM, Guozhang Wang  wrote:
> 
> Hello all,
> 
> I'm happy to announce that Chia-Ping Tsai has accepted his invitation to
> become an Apache Kafka committer.
> 
> Chia-Ping has been contributing to Kafka since March 2018 and has made 74
> commits:
> 
> https://github.com/apache/kafka/commits?author=chia7712
> 
> He's also authored several major improvements, participated in the KIP
> discussion and PR reviews as well. His major feature development includes:
> 
> * KAFKA-9654: Epoch based ReplicaAlterLogDirsThread creation.
> * KAFKA-8334: Spiky offsetCommit latency due to lock contention.
> * KIP-331: Add default implementation to close() and configure() for serde
> * KIP-367: Introduce close(Duration) to Producer and AdminClients
> * KIP-338: Support to exclude the internal topics in kafka-topics.sh command
> 
> In addition, Chia-Ping has demonstrated his great diligence fixing test
> failures, his impressive engineering attitude and taste in fixing tricky
> bugs while keeping simple designs.
> 
> Please join me to congratulate Chia-Ping for all the contributions!
> 
> 
> -- Guozhang



Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread James Cheng
Congratulations, Sophie!

-James

> On Oct 19, 2020, at 9:40 AM, Matthias J. Sax  wrote:
> 
> Hi all,
> 
> I am excited to announce that A. Sophie Blee-Goldman has accepted her
> invitation to become an Apache Kafka committer.
> 
> Sophie has been actively contributing to Kafka since Feb 2019 and has
> accumulated 140 commits. She took the lead on authoring 4 KIPs:
> 
> - KIP-453: Add close() method to RocksDBConfigSetter
> - KIP-445: In-memory Session Store
> - KIP-428: Add in-memory window store
> - KIP-613: Add end-to-end latency metrics to Streams
> 
> and helped to implement two critical KIPs, 429 (incremental rebalancing)
> and 441 (smooth auto-scaling; not just implementation but also design).
> 
> In addition, she participates in basically every Kafka Streams related
> KIP discussion, reviewed 142 PRs, and is active on the user mailing list.
> 
> Thanks for all the contributions, Sophie!
> 
> 
> Please join me to congratulate her!
> -Matthias
> 



Re: [ANNOUNCE] New committer: Chia-Ping Tsai

2020-10-19 Thread John Roesler
Congratulations, Chia-Ping!
-John

On Mon, 2020-10-19 at 10:51 -0700, Kowshik Prakasam wrote:
> Congrats!
> 
> 
> Cheers,
> Kowshik
> 
> 
> On Mon, Oct 19, 2020 at 10:40 AM Bruno Cadonna  wrote:
> 
> > Congrats!
> > 
> > Best,
> > Bruno
> > 
> > On 19.10.20 19:39, Sophie Blee-Goldman wrote:
> > > Congrats!
> > > 
> > > On Mon, Oct 19, 2020 at 10:32 AM Bill Bejeck  wrote:
> > > 
> > > > Congratulations Chia-Ping!
> > > > 
> > > > -Bill
> > > > 
> > > > On Mon, Oct 19, 2020 at 1:26 PM Matthias J. Sax 
> > wrote:
> > > > > Congrats Chia-Ping!
> > > > > 
> > > > > On 10/19/20 10:24 AM, Guozhang Wang wrote:
> > > > > > Hello all,
> > > > > > 
> > > > > > I'm happy to announce that Chia-Ping Tsai has accepted his 
> > > > > > invitation
> > > > to
> > > > > > become an Apache Kafka committer.
> > > > > > 
> > > > > > Chia-Ping has been contributing to Kafka since March 2018 and has 
> > > > > > made
> > > > 74
> > > > > > commits:
> > > > > > 
> > > > > > https://github.com/apache/kafka/commits?author=chia7712
> > > > > > 
> > > > > > He's also authored several major improvements, participated in the 
> > > > > > KIP
> > > > > > discussion and PR reviews as well. His major feature development
> > > > > includes:
> > > > > > * KAFKA-9654: Epoch based ReplicaAlterLogDirsThread creation.
> > > > > > * KAFKA-8334: Spiky offsetCommit latency due to lock contention.
> > > > > > * KIP-331: Add default implementation to close() and configure() for
> > > > > serde
> > > > > > * KIP-367: Introduce close(Duration) to Producer and AdminClients
> > > > > > * KIP-338: Support to exclude the internal topics in kafka-topics.sh
> > > > > command
> > > > > > In addition, Chia-Ping has demonstrated his great diligence fixing
> > test
> > > > > > failures, his impressive engineering attitude and taste in fixing
> > > > tricky
> > > > > > bugs while keeping simple designs.
> > > > > > 
> > > > > > Please join me to congratulate Chia-Ping for all the contributions!
> > > > > > 
> > > > > > 
> > > > > > -- Guozhang
> > > > > > 



Re: [ANNOUNCE] New committer: Chia-Ping Tsai

2020-10-19 Thread Deepak Raghav
Congratulations

On Mon, 19 Oct, 2020, 11:02 pm Bill Bejeck,  wrote:

> Congratulations Chia-Ping!
>
> -Bill
>
> On Mon, Oct 19, 2020 at 1:26 PM Matthias J. Sax  wrote:
>
> > Congrats Chia-Ping!
> >
> > On 10/19/20 10:24 AM, Guozhang Wang wrote:
> > > Hello all,
> > >
> > > I'm happy to announce that Chia-Ping Tsai has accepted his invitation
> to
> > > become an Apache Kafka committer.
> > >
> > > Chia-Ping has been contributing to Kafka since March 2018 and has made
> 74
> > > commits:
> > >
> > > https://github.com/apache/kafka/commits?author=chia7712
> > >
> > > He's also authored several major improvements, participated in the KIP
> > > discussion and PR reviews as well. His major feature development
> > includes:
> > >
> > > * KAFKA-9654: Epoch based ReplicaAlterLogDirsThread creation.
> > > * KAFKA-8334: Spiky offsetCommit latency due to lock contention.
> > > * KIP-331: Add default implementation to close() and configure() for
> > serde
> > > * KIP-367: Introduce close(Duration) to Producer and AdminClients
> > > * KIP-338: Support to exclude the internal topics in kafka-topics.sh
> > command
> > >
> > > In addition, Chia-Ping has demonstrated his great diligence fixing test
> > > failures, his impressive engineering attitude and taste in fixing
> tricky
> > > bugs while keeping simple designs.
> > >
> > > Please join me to congratulate Chia-Ping for all the contributions!
> > >
> > >
> > > -- Guozhang
> > >
> >
>


Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread John Roesler
Congratulations, Sophie!
-John

On Mon, 2020-10-19 at 10:52 -0700, Kowshik Prakasam wrote:
> Congrats Sophie!
> 
> 
> Cheers,
> Kowshik
> 
> 
> On Mon, Oct 19, 2020 at 10:31 AM Bill Bejeck  wrote:
> 
> > Congratulations Sophie!
> > 
> > -Bill
> > 
> > On Mon, Oct 19, 2020 at 12:49 PM Leah Thomas  wrote:
> > 
> > > Congrats Sophie!
> > > 
> > > On Mon, Oct 19, 2020 at 11:41 AM Matthias J. Sax 
> > wrote:
> > > > Hi all,
> > > > 
> > > > I am excited to announce that A. Sophie Blee-Goldman has accepted her
> > > > invitation to become an Apache Kafka committer.
> > > > 
> > > > Sophie has been actively contributing to Kafka since Feb 2019 and has
> > > > accumulated 140 commits. She took the lead on authoring 4 KIPs:
> > > > 
> > > >  - KIP-453: Add close() method to RocksDBConfigSetter
> > > >  - KIP-445: In-memory Session Store
> > > >  - KIP-428: Add in-memory window store
> > > >  - KIP-613: Add end-to-end latency metrics to Streams
> > > > 
> > > > and helped to implement two critical KIPs, 429 (incremental
> > rebalancing)
> > > > and 441 (smooth auto-scaling; not just implementation but also design).
> > > > 
> > > > In addition, she participates in basically every Kafka Streams related
> > > > KIP discussion, reviewed 142 PRs, and is active on the user mailing
> > list.
> > > > Thanks for all the contributions, Sophie!
> > > > 
> > > > 
> > > > Please join me to congratulate her!
> > > >  -Matthias
> > > > 
> > > > 



Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Kowshik Prakasam
Congrats Sophie!


Cheers,
Kowshik


On Mon, Oct 19, 2020 at 10:31 AM Bill Bejeck  wrote:

> Congratulations Sophie!
>
> -Bill
>
> On Mon, Oct 19, 2020 at 12:49 PM Leah Thomas  wrote:
>
> > Congrats Sophie!
> >
> > On Mon, Oct 19, 2020 at 11:41 AM Matthias J. Sax 
> wrote:
> >
> > > Hi all,
> > >
> > > I am excited to announce that A. Sophie Blee-Goldman has accepted her
> > > invitation to become an Apache Kafka committer.
> > >
> > > Sophie has been actively contributing to Kafka since Feb 2019 and has
> > > accumulated 140 commits. She took the lead on authoring 4 KIPs:
> > >
> > >  - KIP-453: Add close() method to RocksDBConfigSetter
> > >  - KIP-445: In-memory Session Store
> > >  - KIP-428: Add in-memory window store
> > >  - KIP-613: Add end-to-end latency metrics to Streams
> > >
> > > and helped to implement two critical KIPs, 429 (incremental
> rebalancing)
> > > and 441 (smooth auto-scaling; not just implementation but also design).
> > >
> > > In addition, she participates in basically every Kafka Streams related
> > > KIP discussion, reviewed 142 PRs, and is active on the user mailing
> list.
> > >
> > > Thanks for all the contributions, Sophie!
> > >
> > >
> > > Please join me to congratulate her!
> > >  -Matthias
> > >
> > >
> >
>


Re: [ANNOUNCE] New committer: Chia-Ping Tsai

2020-10-19 Thread Kowshik Prakasam
Congrats!


Cheers,
Kowshik


On Mon, Oct 19, 2020 at 10:40 AM Bruno Cadonna  wrote:

> Congrats!
>
> Best,
> Bruno
>
> On 19.10.20 19:39, Sophie Blee-Goldman wrote:
> > Congrats!
> >
> > On Mon, Oct 19, 2020 at 10:32 AM Bill Bejeck  wrote:
> >
> >> Congratulations Chia-Ping!
> >>
> >> -Bill
> >>
> >> On Mon, Oct 19, 2020 at 1:26 PM Matthias J. Sax 
> wrote:
> >>
> >>> Congrats Chia-Ping!
> >>>
> >>> On 10/19/20 10:24 AM, Guozhang Wang wrote:
>  Hello all,
> 
>  I'm happy to announce that Chia-Ping Tsai has accepted his invitation
> >> to
>  become an Apache Kafka committer.
> 
>  Chia-Ping has been contributing to Kafka since March 2018 and has made
> >> 74
>  commits:
> 
>  https://github.com/apache/kafka/commits?author=chia7712
> 
>  He's also authored several major improvements, participated in the KIP
>  discussion and PR reviews as well. His major feature development
> >>> includes:
> 
>  * KAFKA-9654: Epoch based ReplicaAlterLogDirsThread creation.
>  * KAFKA-8334: Spiky offsetCommit latency due to lock contention.
>  * KIP-331: Add default implementation to close() and configure() for
> >>> serde
>  * KIP-367: Introduce close(Duration) to Producer and AdminClients
>  * KIP-338: Support to exclude the internal topics in kafka-topics.sh
> >>> command
> 
>  In addition, Chia-Ping has demonstrated his great diligence fixing
> test
>  failures, his impressive engineering attitude and taste in fixing
> >> tricky
>  bugs while keeping simple designs.
> 
>  Please join me to congratulate Chia-Ping for all the contributions!
> 
> 
>  -- Guozhang
> 
> >>>
> >>
> >
>


Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Jorge Esteban Quilcate Otoya
Congratulations Sophie!! So well deserved.

On Mon, Oct 19, 2020 at 6:31 PM Bill Bejeck  wrote:

> Congratulations Sophie!
>
> -Bill
>
> On Mon, Oct 19, 2020 at 12:49 PM Leah Thomas  wrote:
>
> > Congrats Sophie!
> >
> > On Mon, Oct 19, 2020 at 11:41 AM Matthias J. Sax 
> wrote:
> >
> > > Hi all,
> > >
> > > I am excited to announce that A. Sophie Blee-Goldman has accepted her
> > > invitation to become an Apache Kafka committer.
> > >
> > > Sophie has been actively contributing to Kafka since Feb 2019 and has
> > > accumulated 140 commits. She took the lead on authoring 4 KIPs:
> > >
> > >  - KIP-453: Add close() method to RocksDBConfigSetter
> > >  - KIP-445: In-memory Session Store
> > >  - KIP-428: Add in-memory window store
> > >  - KIP-613: Add end-to-end latency metrics to Streams
> > >
> > > and helped to implement two critical KIPs, 429 (incremental
> rebalancing)
> > > and 441 (smooth auto-scaling; not just implementation but also design).
> > >
> > > In addition, she participates in basically every Kafka Streams related
> > > KIP discussion, reviewed 142 PRs, and is active on the user mailing
> list.
> > >
> > > Thanks for all the contributions, Sophie!
> > >
> > >
> > > Please join me to congratulate her!
> > >  -Matthias
> > >
> > >
> >
>


Re: [ANNOUNCE] New committer: Chia-Ping Tsai

2020-10-19 Thread Bruno Cadonna

Congrats!

Best,
Bruno

On 19.10.20 19:39, Sophie Blee-Goldman wrote:
> Congrats!
>
> On Mon, Oct 19, 2020 at 10:32 AM Bill Bejeck  wrote:
>
>> Congratulations Chia-Ping!
>>
>> -Bill
>>
>> On Mon, Oct 19, 2020 at 1:26 PM Matthias J. Sax  wrote:
>>
>>> Congrats Chia-Ping!
>>>
>>> On 10/19/20 10:24 AM, Guozhang Wang wrote:
>>>> Hello all,
>>>>
>>>> I'm happy to announce that Chia-Ping Tsai has accepted his invitation to
>>>> become an Apache Kafka committer.
>>>>
>>>> Chia-Ping has been contributing to Kafka since March 2018 and has made 74
>>>> commits:
>>>>
>>>> https://github.com/apache/kafka/commits?author=chia7712
>>>>
>>>> He's also authored several major improvements, participated in the KIP
>>>> discussion and PR reviews as well. His major feature development includes:
>>>>
>>>> * KAFKA-9654: Epoch based ReplicaAlterLogDirsThread creation.
>>>> * KAFKA-8334: Spiky offsetCommit latency due to lock contention.
>>>> * KIP-331: Add default implementation to close() and configure() for serde
>>>> * KIP-367: Introduce close(Duration) to Producer and AdminClients
>>>> * KIP-338: Support to exclude the internal topics in kafka-topics.sh command
>>>>
>>>> In addition, Chia-Ping has demonstrated his great diligence fixing test
>>>> failures, his impressive engineering attitude and taste in fixing tricky
>>>> bugs while keeping simple designs.
>>>>
>>>> Please join me to congratulate Chia-Ping for all the contributions!
>>>>
>>>>
>>>> -- Guozhang


Re: [ANNOUNCE] New committer: Chia-Ping Tsai

2020-10-19 Thread Sophie Blee-Goldman
Congrats!

On Mon, Oct 19, 2020 at 10:32 AM Bill Bejeck  wrote:

> Congratulations Chia-Ping!
>
> -Bill
>
> On Mon, Oct 19, 2020 at 1:26 PM Matthias J. Sax  wrote:
>
> > Congrats Chia-Ping!
> >
> > On 10/19/20 10:24 AM, Guozhang Wang wrote:
> > > Hello all,
> > >
> > > I'm happy to announce that Chia-Ping Tsai has accepted his invitation
> to
> > > become an Apache Kafka committer.
> > >
> > > Chia-Ping has been contributing to Kafka since March 2018 and has made
> 74
> > > commits:
> > >
> > > https://github.com/apache/kafka/commits?author=chia7712
> > >
> > > He's also authored several major improvements, participated in the KIP
> > > discussion and PR reviews as well. His major feature development
> > includes:
> > >
> > > * KAFKA-9654: Epoch based ReplicaAlterLogDirsThread creation.
> > > * KAFKA-8334: Spiky offsetCommit latency due to lock contention.
> > > * KIP-331: Add default implementation to close() and configure() for
> > serde
> > > * KIP-367: Introduce close(Duration) to Producer and AdminClients
> > > * KIP-338: Support to exclude the internal topics in kafka-topics.sh
> > command
> > >
> > > In addition, Chia-Ping has demonstrated his great diligence fixing test
> > > failures, his impressive engineering attitude and taste in fixing
> tricky
> > > bugs while keeping simple designs.
> > >
> > > Please join me to congratulate Chia-Ping for all the contributions!
> > >
> > >
> > > -- Guozhang
> > >
> >
>


Re: [ANNOUNCE] New committer: Chia-Ping Tsai

2020-10-19 Thread Bill Bejeck
Congratulations Chia-Ping!

-Bill

On Mon, Oct 19, 2020 at 1:26 PM Matthias J. Sax  wrote:

> Congrats Chia-Ping!
>
> On 10/19/20 10:24 AM, Guozhang Wang wrote:
> > Hello all,
> >
> > I'm happy to announce that Chia-Ping Tsai has accepted his invitation to
> > become an Apache Kafka committer.
> >
> > Chia-Ping has been contributing to Kafka since March 2018 and has made 74
> > commits:
> >
> > https://github.com/apache/kafka/commits?author=chia7712
> >
> > He's also authored several major improvements, participated in the KIP
> > discussion and PR reviews as well. His major feature development
> includes:
> >
> > * KAFKA-9654: Epoch based ReplicaAlterLogDirsThread creation.
> > * KAFKA-8334: Spiky offsetCommit latency due to lock contention.
> > * KIP-331: Add default implementation to close() and configure() for
> serde
> > * KIP-367: Introduce close(Duration) to Producer and AdminClients
> > * KIP-338: Support to exclude the internal topics in kafka-topics.sh
> command
> >
> > In addition, Chia-Ping has demonstrated his great diligence fixing test
> > failures, his impressive engineering attitude and taste in fixing tricky
> > bugs while keeping simple designs.
> >
> > Please join me to congratulate Chia-Ping for all the contributions!
> >
> >
> > -- Guozhang
> >
>


Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Bill Bejeck
Congratulations Sophie!

-Bill

On Mon, Oct 19, 2020 at 12:49 PM Leah Thomas  wrote:

> Congrats Sophie!
>
> On Mon, Oct 19, 2020 at 11:41 AM Matthias J. Sax  wrote:
>
> > Hi all,
> >
> > I am excited to announce that A. Sophie Blee-Goldman has accepted her
> > invitation to become an Apache Kafka committer.
> >
> > Sophie has been actively contributing to Kafka since Feb 2019 and has
> > accumulated 140 commits. She took the lead on authoring 4 KIPs:
> >
> >  - KIP-453: Add close() method to RocksDBConfigSetter
> >  - KIP-445: In-memory Session Store
> >  - KIP-428: Add in-memory window store
> >  - KIP-613: Add end-to-end latency metrics to Streams
> >
> > and helped to implement two critical KIPs, 429 (incremental rebalancing)
> > and 441 (smooth auto-scaling; not just implementation but also design).
> >
> > In addition, she participates in basically every Kafka Streams related
> > KIP discussion, reviewed 142 PRs, and is active on the user mailing list.
> >
> > Thanks for all the contributions, Sophie!
> >
> >
> > Please join me to congratulate her!
> >  -Matthias
> >
> >
>


[jira] [Resolved] (KAFKA-10332) MirrorMaker2 fails to detect topic if remote topic is created first

2020-10-19 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10332.
---
Fix Version/s: 2.6.1, 2.5.2, 2.7.0
      Reviewer: Randall Hauch
    Resolution: Fixed

Merged to the `trunk` branch (for future 2.8 release), and cherry-picked to the 
`2.7` for inclusion in the upcoming 2.7.0, the `2.6` branch for inclusion in 
the next 2.6.1 if/when it's released, and the `2.5` branch for the next 2.5.2 
if/when it's released.

> MirrorMaker2 fails to detect topic if remote topic is created first
> ---
>
> Key: KAFKA-10332
> URL: https://issues.apache.org/jira/browse/KAFKA-10332
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.6.0
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 2.7.0, 2.5.2, 2.6.1
>
>
> Setup:
> - 2 clusters: source and target
> - Mirroring data from source to target
> - create a topic called source.mytopic on the target cluster
> - create a topic called mytopic on the source cluster
> At this point, MM2 does not start mirroring the topic.
> This also happens if you delete and recreate a topic that is being mirrored.
> The issue is in 
> [refreshTopicPartitions()|https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceConnector.java#L211-L232]
>  which basically does a diff between the 2 clusters.
> When creating the topic on the source cluster last, it makes the partition 
> list of both clusters match, hence not triggering a reconfiguration
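The diff-based detection described above can be illustrated with a minimal sketch (function and variable names are hypothetical, not MM2's actual code):

```python
def compute_missing_topics(source_topics, target_topics, prefix="source."):
    """Return source topics that lack a mirrored counterpart on the target.

    A naive diff like this exhibits the bug described above: if the remote
    topic (e.g. "source.mytopic") is created on the target *before*
    "mytopic" exists on the source, the later diff sees no difference
    between the two clusters and never triggers a reconfiguration.
    """
    expected = {prefix + t for t in source_topics}
    return sorted(expected - set(target_topics))

# Remote topic created first on the target, then the source topic appears:
target = {"source.mytopic"}
source = {"mytopic"}
print(compute_missing_topics(source, target))  # [] -- nothing detected
```

The fix in the JIRA is essentially to stop relying on this symmetric diff alone, so that a pre-existing remote topic still triggers mirroring.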



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] New committer: Chia-Ping Tsai

2020-10-19 Thread Matthias J. Sax
Congrats Chia-Ping!

On 10/19/20 10:24 AM, Guozhang Wang wrote:
> Hello all,
> 
> I'm happy to announce that Chia-Ping Tsai has accepted his invitation to
> become an Apache Kafka committer.
> 
> Chia-Ping has been contributing to Kafka since March 2018 and has made 74
> commits:
> 
> https://github.com/apache/kafka/commits?author=chia7712
> 
> He's also authored several major improvements, participated in the KIP
> discussion and PR reviews as well. His major feature development includes:
> 
> * KAFKA-9654: Epoch based ReplicaAlterLogDirsThread creation.
> * KAFKA-8334: Spiky offsetCommit latency due to lock contention.
> * KIP-331: Add default implementation to close() and configure() for serde
> * KIP-367: Introduce close(Duration) to Producer and AdminClients
> * KIP-338: Support to exclude the internal topics in kafka-topics.sh command
> 
> In addition, Chia-Ping has demonstrated his great diligence fixing test
> failures, his impressive engineering attitude and taste in fixing tricky
> bugs while keeping simple designs.
> 
> Please join me to congratulate Chia-Ping for all the contributions!
> 
> 
> -- Guozhang
> 


[ANNOUNCE] New committer: Chia-Ping Tsai

2020-10-19 Thread Guozhang Wang
Hello all,

I'm happy to announce that Chia-Ping Tsai has accepted his invitation to
become an Apache Kafka committer.

Chia-Ping has been contributing to Kafka since March 2018 and has made 74
commits:

https://github.com/apache/kafka/commits?author=chia7712

He has also authored several major improvements and participated in KIP
discussions and PR reviews. His major feature development includes:

* KAFKA-9654: Epoch based ReplicaAlterLogDirsThread creation.
* KAFKA-8334: Spiky offsetCommit latency due to lock contention.
* KIP-331: Add default implementation to close() and configure() for serde
* KIP-367: Introduce close(Duration) to Producer and AdminClients
* KIP-338: Support to exclude the internal topics in kafka-topics.sh command

In addition, Chia-Ping has demonstrated great diligence in fixing test
failures, as well as an impressive engineering attitude and taste for
fixing tricky bugs while keeping designs simple.

Please join me to congratulate Chia-Ping for all the contributions!


-- Guozhang


Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Guozhang Wang
Congratulations Sophie! Very well deserved.

Guozhang

On Mon, Oct 19, 2020 at 9:57 AM Tom Bentley  wrote:

> Congratulations Sophie!
>
> On Mon, Oct 19, 2020 at 5:55 PM Walker Carlson 
> wrote:
>
> > Congratulations Sophie!
> >
> > On Mon, Oct 19, 2020 at 9:43 AM Navinder Brar
> >  wrote:
> >
> > > That's great news. Congrats Sophie! Well deserved.
> > >
> > > Regards,
> > > Navinder
> > > On Monday, 19 October, 2020, 10:12:16 pm IST, Bruno Cadonna <
> > > br...@confluent.io> wrote:
> > >
> > >  Congrats Sophie! Very well deserved!
> > >
> > > Bruno
> > >
> > > On 19.10.20 18:40, Matthias J. Sax wrote:
> > > > Hi all,
> > > >
> > > > I am excited to announce that A. Sophie Blee-Goldman has accepted her
> > > > invitation to become an Apache Kafka committer.
> > > >
> > > > Sophie is actively contributing to Kafka since Feb 2019 and has
> > > > accumulated 140 commits. She authored 4 KIPs in the lead
> > > >
> > > >  - KIP-453: Add close() method to RocksDBConfigSetter
> > > >  - KIP-445: In-memory Session Store
> > > >  - KIP-428: Add in-memory window store
> > > >  - KIP-613: Add end-to-end latency metrics to Streams
> > > >
> > > > and helped to implement two critical KIPs, 429 (incremental
> > rebalancing)
> > > > and 441 (smooth auto-scaling; not just implementation but also
> design).
> > > >
> > > > In addition, she participates in basically every Kafka Streams
> related
> > > > KIP discussion, reviewed 142 PRs, and is active on the user mailing
> > list.
> > > >
> > > > Thanks for all the contributions, Sophie!
> > > >
> > > >
> > > > Please join me to congratulate her!
> > > >  -Matthias
> > > >
> > >
> >
>


-- 
-- Guozhang


[GitHub] [kafka-site] tom1299 closed pull request #290: MINOR: Add missing to to testing

2020-10-19 Thread GitBox


tom1299 closed pull request #290:
URL: https://github.com/apache/kafka-site/pull/290


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] mjsax merged pull request #305: Add the best talks from Kafka Summit 2020

2020-10-19 Thread GitBox


mjsax merged pull request #305:
URL: https://github.com/apache/kafka-site/pull/305


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Tom Bentley
Congratulations Sophie!

On Mon, Oct 19, 2020 at 5:55 PM Walker Carlson 
wrote:

> Congratulations Sophie!
>
> On Mon, Oct 19, 2020 at 9:43 AM Navinder Brar
>  wrote:
>
> > That's great news. Congrats Sophie! Well deserved.
> >
> > Regards,
> > Navinder
> > On Monday, 19 October, 2020, 10:12:16 pm IST, Bruno Cadonna <
> > br...@confluent.io> wrote:
> >
> >  Congrats Sophie! Very well deserved!
> >
> > Bruno
> >
> > On 19.10.20 18:40, Matthias J. Sax wrote:
> > > Hi all,
> > >
> > > I am excited to announce that A. Sophie Blee-Goldman has accepted her
> > > invitation to become an Apache Kafka committer.
> > >
> > > Sophie is actively contributing to Kafka since Feb 2019 and has
> > > accumulated 140 commits. She authored 4 KIPs in the lead
> > >
> > >  - KIP-453: Add close() method to RocksDBConfigSetter
> > >  - KIP-445: In-memory Session Store
> > >  - KIP-428: Add in-memory window store
> > >  - KIP-613: Add end-to-end latency metrics to Streams
> > >
> > > and helped to implement two critical KIPs, 429 (incremental
> rebalancing)
> > > and 441 (smooth auto-scaling; not just implementation but also design).
> > >
> > > In addition, she participates in basically every Kafka Streams related
> > > KIP discussion, reviewed 142 PRs, and is active on the user mailing
> list.
> > >
> > > Thanks for all the contributions, Sophie!
> > >
> > >
> > > Please join me to congratulate her!
> > >  -Matthias
> > >
> >
>


Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Leah Thomas
Congrats Sophie!

On Mon, Oct 19, 2020 at 11:41 AM Matthias J. Sax  wrote:

> Hi all,
>
> I am excited to announce that A. Sophie Blee-Goldman has accepted her
> invitation to become an Apache Kafka committer.
>
> Sophie is actively contributing to Kafka since Feb 2019 and has
> accumulated 140 commits. She authored 4 KIPs in the lead
>
>  - KIP-453: Add close() method to RocksDBConfigSetter
>  - KIP-445: In-memory Session Store
>  - KIP-428: Add in-memory window store
>  - KIP-613: Add end-to-end latency metrics to Streams
>
> and helped to implement two critical KIPs, 429 (incremental rebalancing)
> and 441 (smooth auto-scaling; not just implementation but also design).
>
> In addition, she participates in basically every Kafka Streams related
> KIP discussion, reviewed 142 PRs, and is active on the user mailing list.
>
> Thanks for all the contributions, Sophie!
>
>
> Please join me to congratulate her!
>  -Matthias
>
>


Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Walker Carlson
Congratulations Sophie!

On Mon, Oct 19, 2020 at 9:43 AM Navinder Brar
 wrote:

> That's great news. Congrats Sophie! Well deserved.
>
> Regards,
> Navinder
> On Monday, 19 October, 2020, 10:12:16 pm IST, Bruno Cadonna <
> br...@confluent.io> wrote:
>
>  Congrats Sophie! Very well deserved!
>
> Bruno
>
> On 19.10.20 18:40, Matthias J. Sax wrote:
> > Hi all,
> >
> > I am excited to announce that A. Sophie Blee-Goldman has accepted her
> > invitation to become an Apache Kafka committer.
> >
> > Sophie is actively contributing to Kafka since Feb 2019 and has
> > accumulated 140 commits. She authored 4 KIPs in the lead
> >
> >  - KIP-453: Add close() method to RocksDBConfigSetter
> >  - KIP-445: In-memory Session Store
> >  - KIP-428: Add in-memory window store
> >  - KIP-613: Add end-to-end latency metrics to Streams
> >
> > and helped to implement two critical KIPs, 429 (incremental rebalancing)
> > and 441 (smooth auto-scaling; not just implementation but also design).
> >
> > In addition, she participates in basically every Kafka Streams related
> > KIP discussion, reviewed 142 PRs, and is active on the user mailing list.
> >
> > Thanks for all the contributions, Sophie!
> >
> >
> > Please join me to congratulate her!
> >  -Matthias
> >
>


Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Navinder Brar
That's great news. Congrats Sophie! Well deserved.

Regards,
Navinder
On Monday, 19 October, 2020, 10:12:16 pm IST, Bruno Cadonna 
 wrote:  
 
 Congrats Sophie! Very well deserved!

Bruno

On 19.10.20 18:40, Matthias J. Sax wrote:
> Hi all,
> 
> I am excited to announce that A. Sophie Blee-Goldman has accepted her
> invitation to become an Apache Kafka committer.
> 
> Sophie is actively contributing to Kafka since Feb 2019 and has
> accumulated 140 commits. She authored 4 KIPs in the lead
> 
>  - KIP-453: Add close() method to RocksDBConfigSetter
>  - KIP-445: In-memory Session Store
>  - KIP-428: Add in-memory window store
>  - KIP-613: Add end-to-end latency metrics to Streams
> 
> and helped to implement two critical KIPs, 429 (incremental rebalancing)
> and 441 (smooth auto-scaling; not just implementation but also design).
> 
> In addition, she participates in basically every Kafka Streams related
> KIP discussion, reviewed 142 PRs, and is active on the user mailing list.
> 
> Thanks for all the contributions, Sophie!
> 
> 
> Please join me to congratulate her!
>  -Matthias
> 
  

Re: [ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Bruno Cadonna

Congrats Sophie! Very well deserved!

Bruno

On 19.10.20 18:40, Matthias J. Sax wrote:

Hi all,

I am excited to announce that A. Sophie Blee-Goldman has accepted her
invitation to become an Apache Kafka committer.

Sophie is actively contributing to Kafka since Feb 2019 and has
accumulated 140 commits. She authored 4 KIPs in the lead

  - KIP-453: Add close() method to RocksDBConfigSetter
  - KIP-445: In-memory Session Store
  - KIP-428: Add in-memory window store
  - KIP-613: Add end-to-end latency metrics to Streams

and helped to implement two critical KIPs, 429 (incremental rebalancing)
and 441 (smooth auto-scaling; not just implementation but also design).

In addition, she participates in basically every Kafka Streams related
KIP discussion, reviewed 142 PRs, and is active on the user mailing list.

Thanks for all the contributions, Sophie!


Please join me to congratulate her!
  -Matthias



[ANNOUNCE] New committer: A. Sophie Blee-Goldman

2020-10-19 Thread Matthias J. Sax
Hi all,

I am excited to announce that A. Sophie Blee-Goldman has accepted her
invitation to become an Apache Kafka committer.

Sophie has been actively contributing to Kafka since Feb 2019 and has
accumulated 140 commits. She took the lead on authoring 4 KIPs:

 - KIP-453: Add close() method to RocksDBConfigSetter
 - KIP-445: In-memory Session Store
 - KIP-428: Add in-memory window store
 - KIP-613: Add end-to-end latency metrics to Streams

and helped to implement two critical KIPs, 429 (incremental rebalancing)
and 441 (smooth auto-scaling; not just implementation but also design).

In addition, she participates in basically every Kafka Streams-related
KIP discussion, has reviewed 142 PRs, and is active on the user mailing list.

Thanks for all the contributions, Sophie!


Please join me to congratulate her!
 -Matthias



[jira] [Created] (KAFKA-10618) Add UUID class, use in protocols

2020-10-19 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-10618:
--

 Summary: Add UUID class, use in protocols
 Key: KAFKA-10618
 URL: https://issues.apache.org/jira/browse/KAFKA-10618
 Project: Kafka
  Issue Type: Sub-task
Reporter: Justine Olshan
Assignee: Justine Olshan


Before implementing topic IDs, a public UUID class must be created and used in 
protocols



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-10-19 Thread Ron Dagostino
Hi Colin.  Thanks for the hard work on this KIP.

I have some questions about what happens to a broker when it becomes
fenced (e.g. because it can't send a heartbeat request to keep its
lease).  The KIP says "When a broker is fenced, it cannot process any
client requests.  This prevents brokers which are not receiving
metadata updates or that are not receiving and processing them fast
enough from causing issues to clients." And in the description of the
FENCED(4) state it likewise says "While in this state, the broker does
not respond to client requests."  It makes sense that a fenced broker
should not accept producer requests -- I assume any such requests
would result in NotLeaderOrFollowerException.  But what about KIP-392
(fetch from follower) consumer requests?  It is conceivable that these
could continue.  Related to that, would a fenced broker continue to
fetch data for partitions where it thinks it is a follower?  Even if
it rejects consumer requests it might still continue to fetch as a
follower.  Might it be helpful to clarify both decisions here?
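As a thought experiment, the policy question above can be phrased as a tiny state check; this is a hedged sketch with hypothetical names, not KIP-631's actual design:

```python
from enum import Enum, auto

class BrokerState(Enum):
    ACTIVE = auto()
    FENCED = auto()

def handle_client_request(state: BrokerState, request: str) -> str:
    """Illustrative policy: a fenced broker rejects direct client traffic.

    Whether follower fetching (and KIP-392 follower reads) should also stop
    while fenced is exactly the open question raised above; this sketch
    arbitrarily lets "follower_fetch" continue.
    """
    if state is BrokerState.FENCED and request in {"produce", "consume"}:
        return "NOT_LEADER_OR_FOLLOWER"
    return "OK"

print(handle_client_request(BrokerState.FENCED, "produce"))        # NOT_LEADER_OR_FOLLOWER
print(handle_client_request(BrokerState.FENCED, "follower_fetch")) # OK
print(handle_client_request(BrokerState.ACTIVE, "consume"))        # OK
```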

Also, for the ScramUserRecord, since we settled on the term
"UserScramCredential" in KIP-554, might it be better named as
"UserScramCredsentialRecord"?  (Also, the name currently has a
copy-paste error and is "DelegationTokenRecord").

Ron

On Tue, Oct 13, 2020 at 9:31 PM Jun Rao  wrote:
>
> Hi, Colin,
>
> Thanks for the reply. A few more comments below.
>
> 80.1 controller.listener.names only defines the name of the listener. The
> actual listener including host/port/security_protocol is typically defined
> in advertised_listners. Does that mean advertised_listners is a required
> config now?
>
> 83.1 broker state machine: It seems that we should transition from FENCED
> => INITIAL since only INITIAL generates new broker epoch?
>
> 83.5. It's true that the controller node doesn't serve metadata requests.
> However, there are admin requests such as topic creation/deletion are sent
> to the controller directly. So, it seems that the client needs to know
> the controller host/port?
>
> 85. "I was hoping that we could avoid responding to requests when the
> broker was fenced." This issue is that if we don't send a response, the
> client won't know the reason and can't act properly.
>
> 88. CurMetadataOffset: I was thinking that we may want to
> use CurMetadataOffset to compute the MetadataLag. Since HWM is exclusive,
> it's more convenient if CurMetadataOffset is also exclusive.
>
> 90. It would be useful to add a rejected section on why separate controller
> and broker id is preferred over just broker id. For example, the following
> are some potential reasons. (a) We can guard duplicated brokerID, but it's
> hard to guard against duplicated controllerId. (b) brokerID can be auto
> assigned in the future, but controllerId is hard to be generated
> automatically.
>
> Thanks,
>
> Jun
>
> On Mon, Oct 12, 2020 at 11:14 AM Colin McCabe  wrote:
>
> > On Tue, Oct 6, 2020, at 16:09, Jun Rao wrote:
> > > Hi, Colin,
> > >
> > > Thanks for the reply. Made another pass of the KIP. A few more comments
> > > below.
> > >
> >
> > Hi Jun,
> >
> > Thanks for the review.
> >
> > > 55. We discussed earlier why the current behavior where we favor the
> > > current broker registration is better. Have you given this more thought?
> > >
> >
> > Yes, I think we should favor the current broker registration, as you
> > suggested earlier.
> >
> > > 80. Config related.
> > > 80.1 Currently, each broker only has the following 3 required configs. It
> > > will be useful to document the required configs post KIP-500 (in both the
> > > dedicated and shared controller mode).
> > > broker.id
> > > log.dirs
> > > zookeeper.connect
> >
> > For the broker, these configs will be required:
> >
> > broker.id
> > log.dirs
> > process.roles
> > controller.listener.names
> > controller.connect
> >
> > For the controller, these configs will be required:
> >
> > controller.id
> > log.dirs
> > process.roles
> > controller.listener.names
> > controller.connect
> >
> > For broker+controller, it will be the union of these two, which
> > essentially means we need both broker.id and controller.id, but all
> > others are the same as standalone.
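Put together, a standalone broker's required settings from the list above might look like the following sketch (all values illustrative, not taken from the KIP):

```properties
# Role selection: broker, controller, or both.
process.roles=broker
broker.id=1
log.dirs=/var/lib/kafka/data
# Name of the listener used to reach the controller quorum.
controller.listener.names=CONTROLLER
# controller.connect replaces quorum.voters from KIP-595 (hosts are made up).
controller.connect=1@controller1:9093,2@controller2:9093,3@controller3:9093
```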
> >
> > > 80.2 It would be useful to document all deprecated configs post KIP-500.
> > > For example, all zookeeper.* are obviously deprecated. But there could be
> > > others. For example, since we don't plan to support auto broker id
> > > generation, it seems broker.id.generation.enable is deprecated too.
> > > 80.3 Could we make it clear that controller.connect replaces
> > quorum.voters
> > > in KIP-595?
> >
> > OK.  I added a comment about this in the table.
> >
> > > 80.4 Could we document that broker.id is now optional?
> >
> > OK.  I have added a line for broker.id.
> >
> > > 80.5 The KIP suggests that controller.id is optional on the controller
> > > node. I am concerned that this can cause a bit of confusion in 2 aspects.
> > > First, in the 

Contributor permission

2020-10-19 Thread Shadi Kajevand
Hi, I read in this ticket that I can ask for contributor permission though
this email address.
https://issues.apache.org/jira/browse/KAFKA-1704

My jira id is: Kajevand

Regards

Shadi Kajevand


About KAFKA-4759: ACL authorizer subnet support (PR #9387)

2020-10-19 Thread Rafa García
Hi,

  First of all, sorry if I'm doing it wrong by contacting you directly.

  Almost two weeks ago, I implemented support for subnets (IPv4 and
IPv6) in the ACL authorizer. I commented on the JIRA ticket to notify the
reporter, but his activity was limited to that one issue (2 years ago).
Later, I mentioned the top committers on GitHub but got no answer; they're
really busy reviewing other issues. I tried to reach out over IRC, but there
was no response there either.
I want to start on other things, but first I want to get this closed.
Who could review these changes?

  Another question: is the build broken on GitHub?

Thanks!
Rafa

-- 
Sent from my ironing board
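For context, subnet-based ACL matching of the kind described in the email above (IPv4 and IPv6 CIDR ranges) can be sketched with Python's standard library; this is only an illustration of the idea, not the actual code from PR #9387:

```python
import ipaddress

def host_matches_acl(host: str, acl_pattern: str) -> bool:
    """Return True if `host` falls inside `acl_pattern`.

    The pattern may be a single address ("192.168.1.5") or a CIDR
    subnet ("192.168.1.0/24", "2001:db8::/32").
    """
    addr = ipaddress.ip_address(host)
    if "/" in acl_pattern:
        # strict=False tolerates host bits set in the pattern.
        return addr in ipaddress.ip_network(acl_pattern, strict=False)
    return addr == ipaddress.ip_address(acl_pattern)

print(host_matches_acl("192.168.1.42", "192.168.1.0/24"))  # True
print(host_matches_acl("2001:db8::1", "2001:db8::/32"))    # True
print(host_matches_acl("10.0.0.1", "192.168.1.0/24"))      # False
```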


Re: [VOTE] KIP-516: Topic Identifiers

2020-10-19 Thread Justine Olshan
Thanks everyone for the votes. KIP-516 has been accepted.

Binding: Jun, Rajini, David
Non-binding: Lucas, Satish, Tom

Justine

On Sat, Oct 17, 2020 at 3:22 AM Tom Bentley  wrote:

> +1 non-binding. Thanks!
>
> On Sat, Oct 17, 2020 at 7:55 AM David Jacot  wrote:
>
> > Hi Justine,
> >
> > Thanks for the KIP! This is a great and long awaited improvement.
> >
> > +1 (binding)
> >
> > Best,
> > David
> >
> > Le ven. 16 oct. 2020 à 17:36, Rajini Sivaram  a
> > écrit :
> >
> > > Hi Justine,
> > >
> > > +1 (binding)
> > >
> > > Thanks for all the work you put into this KIP!
> > >
> > > btw, there is a typo in the DeleteTopics Request/Response schema in the
> > > KIP, it says Metadata request.
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > >
> > > On Fri, Oct 16, 2020 at 4:06 PM Satish Duggana <
> satish.dugg...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi Justine,
> > > > Thanks for the KIP,  +1 (non-binding)
> > > >
> > > > On Thu, Oct 15, 2020 at 10:48 PM Lucas Bradstreet <
> lu...@confluent.io>
> > > > wrote:
> > > > >
> > > > > Hi Justine,
> > > > >
> > > > > +1 (non-binding). Thanks for all your hard work on this KIP!
> > > > >
> > > > > Lucas
> > > > >
> > > > > On Wed, Oct 14, 2020 at 8:59 AM Jun Rao  wrote:
> > > > >
> > > > > > Hi, Justine,
> > > > > >
> > > > > > Thanks for the updated KIP. +1 from me.
> > > > > >
> > > > > > Jun
> > > > > >
> > > > > > On Tue, Oct 13, 2020 at 2:38 PM Jun Rao 
> wrote:
> > > > > >
> > > > > > > Hi, Justine,
> > > > > > >
> > > > > > > Thanks for starting the vote. Just a few minor comments.
> > > > > > >
> > > > > > > 1. It seems that we should remove the topic field from the
> > > > > > > StopReplicaResponse below?
> > > > > > > StopReplica Response (Version: 4) => error_code [topics]
> > > > > > >   error_code => INT16
> > > > > > > topics => topic topic_id* [partitions]
> > > > > > >
> > > > > > > 2. "After controller election, upon receiving the result,
> assign
> > > the
> > > > > > > metadata topic its unique topic ID". Will the UUID for the
> > metadata
> > > > topic
> > > > > > > be written to the metadata topic itself?
> > > > > > >
> > > > > > > 3. The vote request is designed to support multiple topics,
> each
> > of
> > > > them
> > > > > > > may require a different sentinel ID. Should we reserve more
> than
> > > one
> > > > > > > sentinel ID for future usage?
> > > > > > >
> > > > > > > 4. UUID.randomUUID(): Could we clarify whether this method
> > returns
> > > > any
> > > > > > > sentinel ID? Also, how do we expect the user to use it?
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > > Jun
> > > > > > >
> > > > > > > On Mon, Oct 12, 2020 at 9:54 AM Justine Olshan <
> > > jols...@confluent.io
> > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > >> Hi all,
> > > > > > >>
> > > > > > >> After further discussion and changes to this KIP, I think we
> are
> > > > ready
> > > > > > to
> > > > > > >> restart this vote.
> > > > > > >>
> > > > > > >> Again, here is the KIP:
> > > > > > >>
> > > > > > >>
> > > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-516%3A+Topic+Identifiers
> > > > > > >>
> > > > > > >> The discussion thread is here:
> > > > > > >>
> > > > > > >>
> > > > > >
> > > >
> > >
> >
> https://lists.apache.org/thread.html/7efa8cd169cadc7dc9cf86a7c0dbbab1836ddb5024d310fcebacf80c@%3Cdev.kafka.apache.org%3E
> > > > > > >>
> > > > > > >> Please take a look and vote if you have a chance.
> > > > > > >>
> > > > > > >> Thanks,
> > > > > > >> Justine
> > > > > > >>
> > > > > > >> On Tue, Sep 22, 2020 at 8:52 AM Justine Olshan <
> > > > jols...@confluent.io>
> > > > > > >> wrote:
> > > > > > >>
> > > > > > >> > Hi all,
> > > > > > >> >
> > > > > > >> > I'd like to call a vote on KIP-516: Topic Identifiers. Here
> is
> > > the
> > > > > > KIP:
> > > > > > >> >
> > > > > > >> >
> > > > > > >>
> > > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-516%3A+Topic+Identifiers
> > > > > > >> >
> > > > > > >> > The discussion thread is here:
> > > > > > >> >
> > > > > > >> >
> > > > > > >>
> > > > > >
> > > >
> > >
> >
> https://lists.apache.org/thread.html/7efa8cd169cadc7dc9cf86a7c0dbbab1836ddb5024d310fcebacf80c@%3Cdev.kafka.apache.org%3E
> > > > > > >> >
> > > > > > >> > Please take a look and vote if you have a chance.
> > > > > > >> >
> > > > > > >> > Thank you,
> > > > > > >> > Justine
> > > > > > >> >
> > > > > > >>
> > > > > > >
> > > > > >
> > > >
> > >
> >
>
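Jun's points 3 and 4 in the thread above (reserved sentinel IDs, and whether UUID.randomUUID() can ever return one) amount to asking for a generator that never emits a reserved value. A hedged Python sketch, with hypothetical names rather than the KIP's actual Java API:

```python
import uuid

# Hypothetical reserved sentinel IDs; the all-zero UUID is the classic choice.
RESERVED_IDS = {uuid.UUID(int=0), uuid.UUID(int=1)}

def random_topic_id() -> uuid.UUID:
    """Generate a random topic ID, retrying until it avoids the sentinels.

    In practice uuid4() sets version/variant bits, so fixed low-valued
    sentinels like these can never collide; the retry loop simply makes
    the "never return a sentinel" contract explicit.
    """
    while True:
        candidate = uuid.uuid4()
        if candidate not in RESERVED_IDS:
            return candidate

tid = random_topic_id()
assert tid not in RESERVED_IDS
```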


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #182

2020-10-19 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10583: Add documentation on the thread-safety of 
KafkaAdminClient (#9397)


--
[...truncated 6.86 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow: repeated STARTED/PASSED entries for many 
TimestampedWindowStoreBuilder and WindowStoreBuilder parameter combinations 
(timestamped, caching, logging); every completed run PASSED before the log 
was truncated.

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #154

2020-10-19 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10583: Add documentation on the thread-safety of 
KafkaAdminClient (#9397)


--
[...truncated 6.86 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow: repeated STARTED/PASSED entries for many 
WindowStoreBuilder parameter combinations (timestamped = false; caching and 
logging varied); every completed run PASSED before the log was truncated.
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@c933046, 
timestamped = false, caching = true, logging = true] 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #149

2020-10-19 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10583: Add documentation on the thread-safety of 
KafkaAdminClient (#9397)


--
[...truncated 3.40 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED


[GitHub] [kafka-site] miguno commented on pull request #305: Add the best talks from Kafka Summit 2020

2020-10-19 Thread GitBox


miguno commented on pull request #305:
URL: https://github.com/apache/kafka-site/pull/305#issuecomment-711980559


   HTML cleanup example.
   
   After cleanup:
   ```html
   <li>
     <a href="https://www.confluent.io/kafka-summit-san-francisco-2019/eventing-things-a-netflix-original/">Eventing
     Things – A Netflix Original!</a>
     (<a href="https://kafka-summit.org/sessions/eventing-things-netflix-original/">abstract</a>),
     Nitin Sharma (Netflix), SFO 2019
   </li>
   ```
   
   Before cleanup:
   ```html
   <li>
     <a href="https://www.confluent.io/kafka-summit-san-francisco-2019/eventing-things-a-netflix-original/"
     >Eventing Things – A Netflix Original!</a>(<a href="https://kafka-summit.org/sessions/eventing-things-netflix-original/"
     >abstract</a>), Nitin Sharma (Netflix), SFO 2019
   </li>
   ```
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] miguno commented on pull request #305: Add the best talks from Kafka Summit 2020

2020-10-19 Thread GitBox


miguno commented on pull request #305:
URL: https://github.com/apache/kafka-site/pull/305#issuecomment-711975561


   I tested the PR locally with `vnu` (HTML validation) and via 
https://cwiki.apache.org/confluence/display/KAFKA/Setup+Kafka+Website+on+Local+Apache+Server.







[GitHub] [kafka-site] miguno opened a new pull request #305: Add the best talks from Kafka Summit 2020

2020-10-19 Thread GitBox


miguno opened a new pull request #305:
URL: https://github.com/apache/kafka-site/pull/305


   This PR adds the thirty best Kafka Summit 2020 talks as rated by the 
community (summit attendees).
   
   To make maintenance of the Videos page easier, I also refactored (read: 
cleaned up) the HTML of the original page, which was very messy to deal with. I 
also added a table of contents as the page has now a length of a few screens.







Re: [ANNOUNCE] New committer: David Jacot

2020-10-19 Thread Bruno Cadonna

David,

This is great news! Congrats!

Best,
Bruno

On 16.10.20 18:01, Gwen Shapira wrote:

The PMC for Apache Kafka has invited David Jacot as a committer, and
we are excited to say that he accepted!

David Jacot has been contributing to Apache Kafka since July 2015 (!)
and has been very active since August 2019. He contributed several
notable KIPs:

KIP-511: Collect and Expose Client Name and Version in Brokers
KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
KIP-570: Add leader epoch in StopReplicaRequest
KIP-599: Throttle Create Topic, Create Partition and Delete Topic Operations
KIP-496: Add an API for the deletion of consumer offsets

In addition, David Jacot reviewed many community contributions and
showed great technical and architectural taste. Great reviews are hard
and often thankless work - but this is what makes Kafka a great
product and helps us grow our community.

Thanks for all the contributions, David! Looking forward to more
collaboration in the Apache Kafka community.



[jira] [Resolved] (KAFKA-10583) Thread-safety of AdminClient is not documented

2020-10-19 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-10583.

Resolution: Fixed

> Thread-safety of AdminClient is not documented
> --
>
> Key: KAFKA-10583
> URL: https://issues.apache.org/jira/browse/KAFKA-10583
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 2.3.0, 2.4.0, 2.5.0, 2.6.0
>Reporter: Adem Efe Gencer
>Assignee: Adem Efe Gencer
>Priority: Trivial
> Fix For: 2.8.0
>
>
> Other than a Stack Overflow comment (see 
> [https://stackoverflow.com/a/61738065]) by Colin Patrick McCabe and a 
> proposed design note on 
> [KIP-117|https://cwiki.apache.org/confluence/display/KAFKA/KIP-117%3A+Add+a+public+AdminClient+API+for+Kafka+admin+operations]
>  wiki, there is no source that verifies the thread-safety of KafkaAdminClient.
> Please update JavaDocs of KafkaAdminClient class and/or Admin interface to 
> clarify its thread-safety.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
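The ticket above documents, rather than changes, a property of KafkaAdminClient: a single instance is safe to share across threads. As a minimal, hedged sketch of that usage pattern, here is an example with a hypothetical `StubAdminClient` standing in for the real client (the stub is invented purely for illustration, so the snippet runs without a broker or the Kafka jars on the classpath):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class SharedAdminClientSketch {

    // Hypothetical stand-in for KafkaAdminClient. Like the real client,
    // it is safe to call from many threads at once.
    static class StubAdminClient implements AutoCloseable {
        private final ConcurrentLinkedQueue<String> issued = new ConcurrentLinkedQueue<>();

        String createTopic(String name) {
            issued.add(name);  // thread-safe queue, no external locking needed
            return name;
        }

        int operationCount() {
            return issued.size();
        }

        @Override
        public void close() { /* release resources once, after all threads finish */ }
    }

    public static void main(String[] args) throws Exception {
        try (StubAdminClient admin = new StubAdminClient()) {
            ExecutorService pool = Executors.newFixedThreadPool(4);
            // One client instance, shared by every task in the pool.
            List<Future<String>> results = IntStream.range(0, 100)
                    .mapToObj(i -> {
                        Callable<String> task = () -> admin.createTopic("topic-" + i);
                        return pool.submit(task);
                    })
                    .collect(Collectors.toList());
            for (Future<String> f : results) {
                f.get();  // surface any failure from the worker threads
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.SECONDS);
            System.out.println(admin.operationCount());
        }
    }
}
```

The intended takeaway matches the ticket: construct one client, hand the same reference to all threads, and close it once at the end, rather than building a client per thread.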


[jira] [Resolved] (KAFKA-10499) 4 Unit Tests are breaking after addition of a new A record to "apache.org"

2020-10-19 Thread Dongjin Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjin Lee resolved KAFKA-10499.
-
Resolution: Fixed

Resolved with commit {{1443f24}} (see: 
https://github.com/apache/kafka/pull/9294)

> 4 Unit Tests are breaking after addition of a new A record to "apache.org"
> --
>
> Key: KAFKA-10499
> URL: https://issues.apache.org/jira/browse/KAFKA-10499
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 2.6.0
>Reporter: Prateek Agarwal
>Priority: Major
>
> {{apache.org}} previously resolved to only 2 A records: 95.216.24.32 and 
> 40.79.78.1
>  
> With addition of a new A record 95.216.26.30, 4 unit tests have started 
> failing, which expect the count of DNS resolution to be 2, but instead it is 
> now 3.
>  
> {code:java}
> org.apache.kafka.clients.ClusterConnectionStatesTest > 
> testMultipleIPsWithUseAll FAILED
> java.lang.AssertionError: expected:<2> but was:<3>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:645)
> at org.junit.Assert.assertEquals(Assert.java:631)
> at 
> org.apache.kafka.clients.ClusterConnectionStatesTest.testMultipleIPsWithUseAll(ClusterConnectionStatesTest.java:241)
> org.apache.kafka.clients.ClusterConnectionStatesTest > testHostResolveChange 
> FAILED
> java.lang.AssertionError: expected:<2> but was:<3>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:645)
> at org.junit.Assert.assertEquals(Assert.java:631)
> at 
> org.apache.kafka.clients.ClusterConnectionStatesTest.testHostResolveChange(ClusterConnectionStatesTest.java:256)
> org.apache.kafka.clients.ClusterConnectionStatesTest > 
> testMultipleIPsWithDefault FAILED
> java.lang.AssertionError: expected:<2> but was:<3>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:645)
> at org.junit.Assert.assertEquals(Assert.java:631)
> at 
> org.apache.kafka.clients.ClusterConnectionStatesTest.testMultipleIPsWithDefault(ClusterConnectionStatesTest.java:231)
> org.apache.kafka.clients.ClientUtilsTest > testResolveDnsLookupAllIps FAILED
> java.lang.AssertionError: expected:<2> but was:<3>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:645)
> at org.junit.Assert.assertEquals(Assert.java:631)
> at 
> org.apache.kafka.clients.ClientUtilsTest.testResolveDnsLookupAllIps(ClientUtilsTest.java:87)
>  {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
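The failure mode above (tests hardcoding the number of A records for a name the project does not control) can be shown without touching real DNS. Below is a hedged sketch with an invented `Resolver` interface in place of actual lookups; it is not the actual test code or the fix from PR #9294, only an illustration of why deriving the expected count from the same source of truth is more robust than a literal `2`.

```java
import java.util.Arrays;
import java.util.List;

public class DnsCountSketch {

    // Invented for illustration: maps a hostname to its A records.
    interface Resolver {
        List<String> resolve(String host);
    }

    // The style of check the failing tests performed, parameterized by resolver.
    static int resolvedCount(Resolver resolver, String host) {
        return resolver.resolve(host).size();
    }

    public static void main(String[] args) {
        // Before: apache.org resolved to two A records.
        Resolver before = host -> Arrays.asList("95.216.24.32", "40.79.78.1");
        // After: a third A record (95.216.26.30) was added out-of-band.
        Resolver after = host -> Arrays.asList("95.216.24.32", "40.79.78.1", "95.216.26.30");

        // Brittle: asserting exactly 2 breaks the moment DNS changes.
        // Robust: derive the expectation from the resolver's own answer.
        List<String> expected = after.resolve("apache.org");
        if (resolvedCount(after, "apache.org") != expected.size()) {
            throw new AssertionError("count should track the resolver's answer");
        }

        System.out.println(resolvedCount(before, "apache.org") + " -> "
                + resolvedCount(after, "apache.org"));
    }
}
```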


Re: [ANNOUNCE] New committer: David Jacot

2020-10-19 Thread Navinder Brar
Many congratulations, David.

Best Regards,
Navinder

On Monday, 19 October 2020, 12:53:43 pm IST, Dongjin Lee wrote:

Congratulations, David!

Best,
Dongjin

On Mon, Oct 19, 2020 at 12:20 PM Hu Xi  wrote:

> Congrats, David! Well deserved!
>
>
> 
> From: Vahid Hashemian 
> Sent: October 19, 2020 11:17
> To: dev 
> Subject: Re: [ANNOUNCE] New committer: David Jacot
>
> Congrats David!
>
> --Vahid
>
> On Sun, Oct 18, 2020 at 4:23 PM Satish Duggana 
> wrote:
>
> > Congratulations David!
> >
> > On Sat, Oct 17, 2020 at 10:46 AM Boyang Chen  >
> > wrote:
> > >
> > > Congrats David, well deserved!
> > >
> > > On Fri, Oct 16, 2020 at 6:45 PM John Roesler 
> > wrote:
> > >
> > > > Congratulations, David!
> > > > -John
> > > >
> > > > On Fri, Oct 16, 2020, at 20:15, Konstantine Karantasis wrote:
> > > > > Congrats, David!
> > > > >
> > > > > Konstantine
> > > > >
> > > > >
> > > > > On Fri, Oct 16, 2020 at 3:36 PM Ismael Juma 
> > wrote:
> > > > >
> > > > > > Congratulations David!
> > > > > >
> > > > > > Ismael
> > > > > >
> > > > > > On Fri, Oct 16, 2020 at 9:01 AM Gwen Shapira 
> > > > wrote:
> > > > > >
> > > > > > > The PMC for Apache Kafka has invited David Jacot as a
> committer,
> > and
> > > > > > > we are excited to say that he accepted!
> > > > > > >
> > > > > > > David Jacot has been contributing to Apache Kafka since July
> > 2015 (!)
> > > > > > > and has been very active since August 2019. He contributed
> > several
> > > > > > > notable KIPs:
> > > > > > >
> > > > > > > KIP-511: Collect and Expose Client Name and Version in Brokers
> > > > > > > KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> > > > > > > KIP-570: Add leader epoch in StopReplicaRequest
> > > > > > > KIP-599: Throttle Create Topic, Create Partition and Delete
> > > > > > > Topic Operations
> > > > > > > KIP-496: Add an API for the deletion of consumer offsets
> > > > > > >
> > > > > > > In addition, David Jacot reviewed many community contributions
> > and
> > > > > > > showed great technical and architectural taste. Great reviews
> are
> > > > hard
> > > > > > > and often thankless work - but this is what makes Kafka a great
> > > > > > > product and helps us grow our community.
> > > > > > >
> > > > > > > Thanks for all the contributions, David! Looking forward to
> more
> > > > > > > collaboration in the Apache Kafka community.
> > > > > > >
> > > > > > > --
> > > > > > > Gwen Shapira
> > > > > > >
> > > > > >
> > > > >
> > > >
> >
>
>
> --
>
> Thanks!
> --Vahid
>


-- 
Dongjin Lee

A hitchhiker in the mathematical world.

github: github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin

Re: [ANNOUNCE] New committer: David Jacot

2020-10-19 Thread Dongjin Lee
Congratulations, David!

Best,
Dongjin

On Mon, Oct 19, 2020 at 12:20 PM Hu Xi  wrote:

> Congrats, David! Well deserved!


-- 
Dongjin Lee

A hitchhiker in the mathematical world.

github: github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin


Re: [ANNOUNCE] New committer: David Jacot

2020-10-19 Thread Aparnesh Gaurav
Congrats, David!

Regards,
Aparnesh



On Sun, Oct 18, 2020 at 8:20 PM Hu Xi  wrote:

> Congrats, David! Well deserved!


-- 
Regards,
Aparnesh Gaurav