Jenkins build is back to normal : Kafka » kafka-2.7-jdk8 #130

2021-03-02 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #563

2021-03-02 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #533

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[Manikumar Reddy] y


--
[...truncated 7.31 MB...]

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() STARTED

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
PASSED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() STARTED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
PASSED

LogValidatorTest > testNonCompressedV1() STARTED

LogValidatorTest > testNonCompressedV1() PASSED

LogValidatorTest > testNonCompressedV2() STARTED

LogValidatorTest > testNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV2() PASSED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() STARTED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() PASSED

LogValidatorTest > testRecompressionV1() STARTED

LogValidatorTest > testRecompressionV1() PASSED

LogValidatorTest > testRecompressionV2() STARTED

LogValidatorTest > testRecompressionV2() PASSED

ProducerStateManagerTest > testSkipEmptyTransactions() STARTED

ProducerStateManagerTest > testSkipEmptyTransactions() PASSED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() STARTED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() PASSED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
STARTED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
PASSED

ProducerStateManagerTest > testCoordinatorFencing() STARTED

ProducerStateManagerTest > testCoordinatorFencing() PASSED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() PASSED

ProducerStateManagerTest > testTruncateFullyAndStartAt() STARTED

ProducerStateManagerTest > testTruncateFullyAndStartAt() PASSED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() STARTED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() STARTED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() PASSED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() 
STARTED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() PASSED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() STARTED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() PASSED

ProducerStateManagerTest > testTakeSnapshot() STARTED

ProducerStateManagerTest > testTakeSnapshot() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() 
STARTED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() PASSED

ProducerStateManagerTest > testDeleteSnapshotsBefore() STARTED

ProducerStateManagerTest > testDeleteSnapshotsBefore() PASSED

ProducerStateManagerTest > testAppendEmptyControlBatch() STARTED

ProducerStateManagerTest > testAppendEmptyControlBatch() PASSED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() STARTED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() PASSED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
STARTED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
PASSED

ProducerStateManagerTest > testRemoveAllStraySnapshots() STARTED

ProducerStateManagerTest > testRemoveAllStraySnapshots() PASSED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() PASSED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
STARTED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
PASSED

ProducerStateManagerTest > testBasicIdMapping() STARTED


[jira] [Resolved] (KAFKA-12400) Upgrade jetty to fix CVE-2020-27223

2021-03-02 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-12400.
---
Resolution: Fixed

Issue resolved by pull request 10245
[https://github.com/apache/kafka/pull/10245]

> Upgrade jetty to fix CVE-2020-27223
> ---
>
> Key: KAFKA-12400
> URL: https://issues.apache.org/jira/browse/KAFKA-12400
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
> Fix For: 2.7.1, 2.6.2, 2.8.0
>
>
> h3. CVE-2020-27223 Detail
> In Eclipse Jetty 9.4.6.v20170531 through 9.4.36.v20210114 (inclusive), 10.0.0,
> and 11.0.0, when Jetty handles a request containing multiple Accept headers
> with a large number of quality (i.e. q) parameters, the server may enter a
> denial of service (DoS) state due to high CPU usage processing those quality
> values, exhausting minutes of CPU time.
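To make the input class concrete, here is an illustrative sketch (not an exploit, and not code from Jetty or Kafka) of what an Accept header carrying a large number of quality (q) parameters looks like; the helper name `build_accept_header` is hypothetical:

```python
# Illustrative sketch only: constructs an Accept header value with many
# media ranges, each carrying a quality parameter such as q=0.9. Requests
# of this shape triggered excessive CPU use in affected Jetty versions
# (CVE-2020-27223). This code only builds a string; it sends nothing.
def build_accept_header(n_params: int) -> str:
    """Return an Accept header value with n_params media ranges."""
    ranges = [f"text/plain;q=0.{i % 10}" for i in range(n_params)]
    return ", ".join(ranges)

header = build_accept_header(1000)
print(header[:60], "...")                    # preview the first media ranges
print("quality params:", header.count("q="))
```

The point is only the volume of `q` parameters the server must parse and sort per request, which is cheap to generate for a client but was costly for the affected parser.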



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #532

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12375: don't reuse thread.id until a thread has fully shut down 
(#10215)


--
[...truncated 7.31 MB...]

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() STARTED

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
PASSED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() STARTED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
PASSED

LogValidatorTest > testNonCompressedV1() STARTED

LogValidatorTest > testNonCompressedV1() PASSED

LogValidatorTest > testNonCompressedV2() STARTED

LogValidatorTest > testNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV2() PASSED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() STARTED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() PASSED

LogValidatorTest > testRecompressionV1() STARTED

LogValidatorTest > testRecompressionV1() PASSED

LogValidatorTest > testRecompressionV2() STARTED

LogValidatorTest > testRecompressionV2() PASSED

ProducerStateManagerTest > testSkipEmptyTransactions() STARTED

ProducerStateManagerTest > testSkipEmptyTransactions() PASSED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() STARTED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() PASSED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
STARTED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
PASSED

ProducerStateManagerTest > testCoordinatorFencing() STARTED

ProducerStateManagerTest > testCoordinatorFencing() PASSED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() PASSED

ProducerStateManagerTest > testTruncateFullyAndStartAt() STARTED

ProducerStateManagerTest > testTruncateFullyAndStartAt() PASSED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() STARTED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() STARTED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() PASSED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() 
STARTED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() PASSED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() STARTED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() PASSED

ProducerStateManagerTest > testTakeSnapshot() STARTED

ProducerStateManagerTest > testTakeSnapshot() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() 
STARTED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() PASSED

ProducerStateManagerTest > testDeleteSnapshotsBefore() STARTED

ProducerStateManagerTest > testDeleteSnapshotsBefore() PASSED

ProducerStateManagerTest > testAppendEmptyControlBatch() STARTED

ProducerStateManagerTest > testAppendEmptyControlBatch() PASSED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() STARTED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() PASSED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
STARTED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
PASSED

ProducerStateManagerTest > testRemoveAllStraySnapshots() STARTED

ProducerStateManagerTest > testRemoveAllStraySnapshots() PASSED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() PASSED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 
STARTED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() 

[jira] [Resolved] (KAFKA-12389) Upgrade of netty-codec due to CVE-2021-21290

2021-03-02 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-12389.
---
Fix Version/s: 2.8.0
   2.6.2
   2.7.1
   Resolution: Fixed

Issue resolved by pull request 10235
[https://github.com/apache/kafka/pull/10235]

> Upgrade of netty-codec due to CVE-2021-21290
> 
>
> Key: KAFKA-12389
> URL: https://issues.apache.org/jira/browse/KAFKA-12389
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Dominique Mongelli
>Assignee: Dongjin Lee
>Priority: Major
> Fix For: 2.7.1, 2.6.2, 2.8.0
>
>
> Our security tool flagged the following security flaw in Kafka 2.7: 
> [https://nvd.nist.gov/vuln/detail/CVE-2021-21290]
> It is a vulnerability in the jar *netty-codec-4.1.51.Final.jar*.
> Looking at the source code, the netty-codec dependency in the trunk and 2.7.0 
> branches is still vulnerable.
> Based on netty issue tracker, the vulnerability is fixed in 4.1.59.Final: 
> https://github.com/netty/netty/security/advisories/GHSA-5mcr-gq6c-3hq2
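As an illustrative sketch only (not the actual change merged in the Kafka PR, which updates the project's own dependency version definitions), a Gradle build could pin the fixed netty-codec release with a resolution constraint along these lines:

```groovy
// Hypothetical sketch: force netty-codec to the release that fixes
// CVE-2021-21290, overriding any transitive 4.1.51.Final pull-in.
configurations.all {
    resolutionStrategy {
        force 'io.netty:netty-codec:4.1.59.Final'
    }
}
```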



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Kafka logos

2021-03-02 Thread Luke Chen
The 2nd one, with the subtitle *Distributed publish-subscribe messaging
system*, is, I believe, the old one.
The home page logo, with *A Distributed Streaming Platform*, should be the
newer one.

Please correct me if I'm wrong.
Thanks.
Luke

On Wed, Mar 3, 2021 at 6:42 AM Justin Mclean wrote:

> Hi,
>
> I notice that the Apache Kafka logo on the Kafka home page [1] is
> different from what is listed here [2]. I'm creating some training material
> and was wondering which is the correct logo to use?
>
> Thanks,
> Justin
>
> 1. https://kafka.apache.org
> 2. http://apache.org/logos/?#kafka
>


Build failed in Jenkins: Kafka » kafka-2.8-jdk8 #49

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[A. Sophie Blee-Goldman] KAFKA-12375: don't reuse thread.id until a thread has 
fully shut down (#10215)


--
[...truncated 3.63 MB...]

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaInIsrNotLive() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaInIsrNotLive() PASSED

PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr() STARTED

PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr() PASSED

PartitionLeaderElectionAlgorithmsTest > testReassignPartitionLeaderElection() 
STARTED

PartitionLeaderElectionAlgorithmsTest > testReassignPartitionLeaderElection() 
PASSED

PartitionLeaderElectionAlgorithmsTest > testOfflinePartitionLeaderElection() 
STARTED

PartitionLeaderElectionAlgorithmsTest > testOfflinePartitionLeaderElection() 
PASSED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection() STARTED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection() PASSED

PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr() STARTED

PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr() PASSED

PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown() 
PASSED

PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled() 
PASSED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive() 
PASSED

PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled() 
PASSED

TopicDeletionManagerTest > testBrokerFailureAfterDeletionStarted() STARTED

TopicDeletionManagerTest > testBrokerFailureAfterDeletionStarted() PASSED

TopicDeletionManagerTest > testInitialization() STARTED

TopicDeletionManagerTest > testInitialization() PASSED

TopicDeletionManagerTest > testBasicDeletion() STARTED

TopicDeletionManagerTest > testBasicDeletion() PASSED

TopicDeletionManagerTest > testDeletionWithBrokerOffline() STARTED

TopicDeletionManagerTest > testDeletionWithBrokerOffline() PASSED

ControllerFailoverTest > testHandleIllegalStateException() STARTED

ControllerFailoverTest > testHandleIllegalStateException() PASSED

ZkNodeChangeNotificationListenerTest > testProcessNotification() STARTED

ZkNodeChangeNotificationListenerTest > testProcessNotification() PASSED

ZkNodeChangeNotificationListenerTest > testSwallowsProcessorException() STARTED

ZkNodeChangeNotificationListenerTest > testSwallowsProcessorException() PASSED

KafkaTest > testZookeeperKeyStorePassword() STARTED

KafkaTest > testZookeeperKeyStorePassword() PASSED

KafkaTest > testConnectionsMaxReauthMsExplicit() STARTED

KafkaTest > testConnectionsMaxReauthMsExplicit() PASSED

KafkaTest > testKafkaSslPasswordsWithSymbols() STARTED

KafkaTest > testKafkaSslPasswordsWithSymbols() PASSED

KafkaTest > testZkSslProtocol() STARTED

KafkaTest > testZkSslProtocol() PASSED

KafkaTest > testZkSslCipherSuites() STARTED

KafkaTest > testZkSslCipherSuites() PASSED

KafkaTest > testZkSslKeyStoreType() STARTED

KafkaTest > testZkSslKeyStoreType() PASSED

KafkaTest > testZkSslOcspEnable() STARTED

KafkaTest > testZkSslOcspEnable() PASSED

KafkaTest > testConnectionsMaxReauthMsDefault() STARTED

KafkaTest > testConnectionsMaxReauthMsDefault() PASSED

KafkaTest > testZkSslTrustStoreLocation() STARTED

KafkaTest > testZkSslTrustStoreLocation() PASSED

KafkaTest > testZkSslEnabledProtocols() STARTED

KafkaTest > testZkSslEnabledProtocols() PASSED

KafkaTest > testKafkaSslPasswords() STARTED

KafkaTest > testKafkaSslPasswords() PASSED

KafkaTest > testGetKafkaConfigFromArgs() STARTED

KafkaTest > testGetKafkaConfigFromArgs() PASSED

KafkaTest > testZkSslClientEnable() STARTED

KafkaTest > testZkSslClientEnable() PASSED

KafkaTest > testZookeeperTrustStorePassword() STARTED

KafkaTest > testZookeeperTrustStorePassword() PASSED

KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd() STARTED

KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd() PASSED

KafkaTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #562

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12375: don't reuse thread.id until a thread has fully shut down 
(#10215)


--
[...truncated 3.65 MB...]

ControllerChannelManagerTest > testStopReplicaGroupsByBroker() PASSED

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() PASSED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
STARTED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
PASSED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
PASSED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaRequestSent() STARTED

ControllerChannelManagerTest > testStopReplicaRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() PASSED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() STARTED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() PASSED

FeatureZNodeTest > testDecodeFailOnInvalidFeatures() STARTED

FeatureZNodeTest > testDecodeFailOnInvalidFeatures() PASSED

FeatureZNodeTest > testEncodeDecode() STARTED

FeatureZNodeTest > testEncodeDecode() PASSED

FeatureZNodeTest > testDecodeSuccess() STARTED

FeatureZNodeTest > testDecodeSuccess() PASSED

FeatureZNodeTest > testDecodeFailOnInvalidVersionAndStatus() STARTED

FeatureZNodeTest > testDecodeFailOnInvalidVersionAndStatus() PASSED

ExtendedAclStoreTest > shouldHaveCorrectPaths() STARTED

ExtendedAclStoreTest > shouldHaveCorrectPaths() PASSED

ExtendedAclStoreTest > shouldRoundTripChangeNode() STARTED

ExtendedAclStoreTest > shouldRoundTripChangeNode() PASSED

ExtendedAclStoreTest > shouldThrowFromEncodeOnLiteral() STARTED

ExtendedAclStoreTest > shouldThrowFromEncodeOnLiteral() PASSED

ExtendedAclStoreTest > shouldThrowIfConstructedWithLiteral() STARTED

ExtendedAclStoreTest > shouldThrowIfConstructedWithLiteral() PASSED

ExtendedAclStoreTest > shouldWriteChangesToTheWritePath() STARTED

ExtendedAclStoreTest > shouldWriteChangesToTheWritePath() PASSED

ExtendedAclStoreTest > shouldHaveCorrectPatternType() STARTED

ExtendedAclStoreTest > shouldHaveCorrectPatternType() PASSED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@ba339d4, value = [B@3f683709), properties=Map(print.value -> false), 
expected= STARTED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@ba339d4, value = [B@3f683709), properties=Map(print.value -> false), 
expected= PASSED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, headers = 
RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 49]), 
RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), key = 
[B@52c27d63, value = [B@74a565d7), properties=Map(print.key -> true, 
print.value -> false), expected=someKey
 STARTED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, headers = 
RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #588

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12375: don't reuse thread.id until a thread has fully shut down 
(#10215)


--
[...truncated 3.68 MB...]

AclAuthorizerTest > testAuthorizeByResourceTypeWildcardResourceDenyDominate() 
STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeWildcardResourceDenyDominate() 
PASSED

AclAuthorizerTest > testEmptyAclThrowsException() STARTED

AclAuthorizerTest > testEmptyAclThrowsException() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeNoAclFoundOverride() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeNoAclFoundOverride() PASSED

AclAuthorizerTest > testSuperUserWithCustomPrincipalHasAccess() STARTED

AclAuthorizerTest > testSuperUserWithCustomPrincipalHasAccess() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllOperationAce() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllOperationAce() PASSED

AclAuthorizerTest > testAllowAccessWithCustomPrincipal() STARTED

AclAuthorizerTest > testAllowAccessWithCustomPrincipal() PASSED

AclAuthorizerTest > testDeleteAclOnWildcardResource() STARTED

AclAuthorizerTest > testDeleteAclOnWildcardResource() PASSED

AclAuthorizerTest > testAuthorizerZkConfigFromKafkaConfig() STARTED

AclAuthorizerTest > testAuthorizerZkConfigFromKafkaConfig() PASSED

AclAuthorizerTest > testChangeListenerTiming() STARTED

AclAuthorizerTest > testChangeListenerTiming() PASSED

AclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 STARTED

AclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 PASSED

AclAuthorizerTest > testAuthorzeByResourceTypeSuperUserHasAccess() STARTED

AclAuthorizerTest > testAuthorzeByResourceTypeSuperUserHasAccess() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypePrefixedResourceDenyDominate() 
STARTED

AclAuthorizerTest > testAuthorizeByResourceTypePrefixedResourceDenyDominate() 
PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeMultipleAddAndRemove() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeMultipleAddAndRemove() PASSED

AclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() STARTED

AclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() PASSED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnPrefixedResource() 
STARTED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnPrefixedResource() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeDenyTakesPrecedence() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeDenyTakesPrecedence() PASSED

AclAuthorizerTest > testHighConcurrencyModificationOfResourceAcls() STARTED

AclAuthorizerTest > testHighConcurrencyModificationOfResourceAcls() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllPrincipalAce() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllPrincipalAce() PASSED

AclAuthorizerTest > testAuthorizeWithEmptyResourceName() STARTED

AclAuthorizerTest > testAuthorizeWithEmptyResourceName() PASSED

AclAuthorizerTest > testAuthorizeThrowsOnNonLiteralResource() STARTED

AclAuthorizerTest > testAuthorizeThrowsOnNonLiteralResource() PASSED

AclAuthorizerTest > testDeleteAllAclOnPrefixedResource() STARTED

AclAuthorizerTest > testDeleteAllAclOnPrefixedResource() PASSED

AclAuthorizerTest > testAddAclsOnLiteralResource() STARTED

AclAuthorizerTest > testAddAclsOnLiteralResource() PASSED

AclAuthorizerTest > testGetAclsPrincipal() STARTED

AclAuthorizerTest > testGetAclsPrincipal() PASSED

AclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet() STARTED

AclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet() PASSED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnWildcardResource() 
STARTED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnWildcardResource() PASSED

AclAuthorizerTest > testLoadCache() STARTED

AclAuthorizerTest > testLoadCache() PASSED

AuthorizerInterfaceDefaultTest > testAuthorizeByResourceTypeWithAllHostAce() 
STARTED

AuthorizerInterfaceDefaultTest > testAuthorizeByResourceTypeWithAllHostAce() 
PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeIsolationUnrelatedDenyWontDominateAllow() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeIsolationUnrelatedDenyWontDominateAllow() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWildcardResourceDenyDominate() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWildcardResourceDenyDominate() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWithAllOperationAce() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWithAllOperationAce() PASSED


Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #531

2021-03-02 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #561

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12369; Implement `ListTransactions` API (#10206)

[github] KAFKA-12177: apply log start offset retention before time and size 
based retention (#10216)


--
[...truncated 7.33 MB...]

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() PASSED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
STARTED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
PASSED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
PASSED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaRequestSent() STARTED

ControllerChannelManagerTest > testStopReplicaRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() PASSED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() STARTED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() PASSED

FeatureZNodeTest > testDecodeFailOnInvalidFeatures() STARTED

FeatureZNodeTest > testDecodeFailOnInvalidFeatures() PASSED

FeatureZNodeTest > testEncodeDecode() STARTED

FeatureZNodeTest > testEncodeDecode() PASSED

FeatureZNodeTest > testDecodeSuccess() STARTED

FeatureZNodeTest > testDecodeSuccess() PASSED

FeatureZNodeTest > testDecodeFailOnInvalidVersionAndStatus() STARTED

FeatureZNodeTest > testDecodeFailOnInvalidVersionAndStatus() PASSED

ExtendedAclStoreTest > shouldHaveCorrectPaths() STARTED

ExtendedAclStoreTest > shouldHaveCorrectPaths() PASSED

ExtendedAclStoreTest > shouldRoundTripChangeNode() STARTED

ExtendedAclStoreTest > shouldRoundTripChangeNode() PASSED

ExtendedAclStoreTest > shouldThrowFromEncodeOnLiteral() STARTED

ExtendedAclStoreTest > shouldThrowFromEncodeOnLiteral() PASSED

ExtendedAclStoreTest > shouldThrowIfConstructedWithLiteral() STARTED

ExtendedAclStoreTest > shouldThrowIfConstructedWithLiteral() PASSED

ExtendedAclStoreTest > shouldWriteChangesToTheWritePath() STARTED

ExtendedAclStoreTest > shouldWriteChangesToTheWritePath() PASSED

ExtendedAclStoreTest > shouldHaveCorrectPatternType() STARTED

ExtendedAclStoreTest > shouldHaveCorrectPatternType() PASSED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@368cff9e, value = [B@742c8944), properties=Map(print.value -> false), 
expected= STARTED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@368cff9e, value = [B@742c8944), properties=Map(print.value -> false), 
expected= PASSED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, headers = 
RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 49]), 
RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), key = 
[B@60b1944a, value = [B@40f698ae), properties=Map(print.key -> true, 
print.value -> false), expected=someKey
 STARTED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, headers = 
RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 49]), 
RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), key = 
[B@60b1944a, 

Build failed in Jenkins: Kafka » kafka-2.8-jdk8 #48

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[Rajini Sivaram] KAFKA-12254: Ensure MM2 creates topics with source topic 
configs (#10217)


--
[...truncated 3.62 MB...]
LogManagerTest > testCheckpointForOnlyAffectedLogs() PASSED

LogManagerTest > testTimeBasedFlush() STARTED

LogManagerTest > testTimeBasedFlush() PASSED

LogManagerTest > testCreateLog() STARTED

LogManagerTest > testCreateLog() PASSED

LogManagerTest > testDoesntCleanLogsWithCompactPolicy() STARTED

LogManagerTest > testDoesntCleanLogsWithCompactPolicy() PASSED

LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash() STARTED

LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash() PASSED

LogManagerTest > testCreateAndDeleteOverlyLongTopic() STARTED

LogManagerTest > testCreateAndDeleteOverlyLongTopic() PASSED

LogManagerTest > testDoesntCleanLogsWithCompactDeletePolicy() STARTED

LogManagerTest > testDoesntCleanLogsWithCompactDeletePolicy() PASSED

LogManagerTest > testConfigChangesWithNoLogGettingInitialized() STARTED

LogManagerTest > testConfigChangesWithNoLogGettingInitialized() PASSED

LogCleanerParameterizedIntegrationTest > [1] codec=NONE STARTED

LogCleanerParameterizedIntegrationTest > [1] codec=NONE PASSED

LogCleanerParameterizedIntegrationTest > [2] codec=GZIP STARTED

LogCleanerParameterizedIntegrationTest > [2] codec=GZIP PASSED

LogCleanerParameterizedIntegrationTest > [3] codec=SNAPPY STARTED

LogCleanerParameterizedIntegrationTest > [3] codec=SNAPPY PASSED

LogCleanerParameterizedIntegrationTest > [4] codec=LZ4 STARTED

LogCleanerParameterizedIntegrationTest > [4] codec=LZ4 PASSED

LogCleanerParameterizedIntegrationTest > [5] codec=ZSTD STARTED

LogCleanerParameterizedIntegrationTest > [5] codec=ZSTD PASSED

LogCleanerParameterizedIntegrationTest > [1] codec=NONE STARTED

LogCleanerParameterizedIntegrationTest > [1] codec=NONE PASSED

LogCleanerParameterizedIntegrationTest > [2] codec=GZIP STARTED

LogCleanerParameterizedIntegrationTest > [2] codec=GZIP PASSED

LogCleanerParameterizedIntegrationTest > [3] codec=SNAPPY STARTED

LogCleanerParameterizedIntegrationTest > [3] codec=SNAPPY PASSED

LogCleanerParameterizedIntegrationTest > [4] codec=LZ4 STARTED

LogCleanerParameterizedIntegrationTest > [4] codec=LZ4 PASSED

LogCleanerParameterizedIntegrationTest > [1] codec=NONE STARTED

LogCleanerParameterizedIntegrationTest > [1] codec=NONE PASSED

LogCleanerParameterizedIntegrationTest > [2] codec=GZIP STARTED

LogCleanerParameterizedIntegrationTest > [2] codec=GZIP PASSED

LogCleanerParameterizedIntegrationTest > [3] codec=SNAPPY STARTED

LogCleanerParameterizedIntegrationTest > [3] codec=SNAPPY PASSED

LogCleanerParameterizedIntegrationTest > [4] codec=LZ4 STARTED

LogCleanerParameterizedIntegrationTest > [4] codec=LZ4 PASSED

LogCleanerParameterizedIntegrationTest > [5] codec=ZSTD STARTED

LogCleanerParameterizedIntegrationTest > [5] codec=ZSTD PASSED

LogCleanerParameterizedIntegrationTest > [1] codec=NONE STARTED

LogCleanerParameterizedIntegrationTest > [1] codec=NONE PASSED

LogCleanerParameterizedIntegrationTest > [2] codec=GZIP STARTED

LogCleanerParameterizedIntegrationTest > [2] codec=GZIP PASSED

LogCleanerParameterizedIntegrationTest > [3] codec=SNAPPY STARTED

LogCleanerParameterizedIntegrationTest > [3] codec=SNAPPY PASSED

LogCleanerParameterizedIntegrationTest > [4] codec=LZ4 STARTED

LogCleanerParameterizedIntegrationTest > [4] codec=LZ4 PASSED

LogCleanerParameterizedIntegrationTest > [5] codec=ZSTD STARTED

LogCleanerParameterizedIntegrationTest > [5] codec=ZSTD PASSED

LogCleanerParameterizedIntegrationTest > [1] codec=NONE STARTED

LogCleanerParameterizedIntegrationTest > [1] codec=NONE PASSED

LogCleanerParameterizedIntegrationTest > [2] codec=GZIP STARTED

LogCleanerParameterizedIntegrationTest > [2] codec=GZIP PASSED

LogCleanerParameterizedIntegrationTest > [3] codec=SNAPPY STARTED

LogCleanerParameterizedIntegrationTest > [3] codec=SNAPPY PASSED

LogCleanerParameterizedIntegrationTest > [4] codec=LZ4 STARTED

LogCleanerParameterizedIntegrationTest > [4] codec=LZ4 PASSED

LogCleanerLagIntegrationTest > [1] codec=NONE STARTED

LogCleanerLagIntegrationTest > [1] codec=NONE PASSED

LogCleanerLagIntegrationTest > [2] codec=GZIP STARTED

LogCleanerLagIntegrationTest > [2] codec=GZIP PASSED

LogCleanerLagIntegrationTest > [3] codec=SNAPPY STARTED

LogCleanerLagIntegrationTest > [3] codec=SNAPPY PASSED

LogCleanerLagIntegrationTest > [4] codec=LZ4 STARTED

LogCleanerLagIntegrationTest > [4] codec=LZ4 PASSED

LogCleanerLagIntegrationTest > [5] codec=ZSTD STARTED

LogCleanerLagIntegrationTest > [5] codec=ZSTD PASSED

LogConfigTest > testGetConfigValue() STARTED

LogConfigTest > testGetConfigValue() PASSED

LogConfigTest > testToRst() STARTED

LogConfigTest > testToRst() PASSED

LogConfigTest > ensureNoStaticInitializationOrderDependency() STARTED

LogConfigTest > 

[jira] [Reopened] (KAFKA-10251) Flaky Test kafka.api.TransactionsBounceTest.testWithGroupMetadata

2021-03-02 Thread A. Sophie Blee-Goldman (Jira)


 [ https://issues.apache.org/jira/browse/KAFKA-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

A. Sophie Blee-Goldman reopened KAFKA-10251:


Looks like it's still failing: I saw at least one test failure on a build that 
was kicked off after this PR was merged. 
[https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-10215/5/testReport/junit/kafka.api/TransactionsBounceTest/Build___JDK_8___testWithGroupId__/?cloudbees-analytics-link=scm-reporting%2Ftests%2Ffailed]

 

Reopening the ticket for further investigation.

> Flaky Test kafka.api.TransactionsBounceTest.testWithGroupMetadata
> -
>
> Key: KAFKA-10251
> URL: https://issues.apache.org/jira/browse/KAFKA-10251
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: A. Sophie Blee-Goldman
>Assignee: Luke Chen
>Priority: Major
>
> h3. Stacktrace
> org.scalatest.exceptions.TestFailedException: Consumed 0 records before 
> timeout instead of the expected 200 records at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530) at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529) 
> at 
> org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1389) 
> at org.scalatest.Assertions.fail(Assertions.scala:1091) at 
> org.scalatest.Assertions.fail$(Assertions.scala:1087) at 
> org.scalatest.Assertions$.fail(Assertions.scala:1389) at 
> kafka.utils.TestUtils$.pollUntilAtLeastNumRecords(TestUtils.scala:842) at 
> kafka.api.TransactionsBounceTest.testWithGroupMetadata(TransactionsBounceTest.scala:109)
>  
>  
> The logs are pretty much just this on repeat:
> {code:java}
> [2020-07-08 23:41:04,034] ERROR Error when sending message to topic 
> output-topic with key: 9955, value: 9955 with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback:52) 
> org.apache.kafka.common.KafkaException: Failing batch since transaction was 
> aborted at 
> org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:423)
>  at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:313) 
> at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240) at 
> java.lang.Thread.run(Thread.java:748) [2020-07-08 23:41:04,034] ERROR Error 
> when sending message to topic output-topic with key: 9959, value: 9959 with 
> error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback:52) 
> org.apache.kafka.common.KafkaException: Failing batch since transaction was 
> aborted at 
> org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:423)
>  at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:313) 
> at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240) at 
> java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
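The timeout failure in the stack trace above comes from a poll-until-enough-records loop (`TestUtils.pollUntilAtLeastNumRecords`). As a rough illustration of that pattern only, not Kafka's actual test-utility code, here is a minimal Java sketch; the `Supplier` stands in for a real consumer so the example is self-contained:

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class PollUntil {
    // Hedged sketch: repeatedly poll a source until at least `minRecords`
    // have been collected or the deadline passes, then fail with a message
    // shaped like the one in the stack trace above. Names are illustrative.
    static <T> List<T> pollUntilAtLeast(Supplier<List<T>> poll, int minRecords, Duration timeout) {
        long deadline = System.nanoTime() + timeout.toNanos();
        List<T> collected = new ArrayList<>();
        while (collected.size() < minRecords) {
            collected.addAll(poll.get());
            if (collected.size() >= minRecords)
                break;
            if (System.nanoTime() >= deadline)
                throw new AssertionError("Consumed " + collected.size()
                        + " records before timeout instead of the expected " + minRecords + " records");
        }
        return collected;
    }

    public static void main(String[] args) {
        // Stand-in "consumer" that yields 50 records per poll.
        Supplier<List<Integer>> fakePoll = () -> {
            List<Integer> batch = new ArrayList<>();
            for (int i = 0; i < 50; i++)
                batch.add(i);
            return batch;
        };
        List<Integer> records = pollUntilAtLeast(fakePoll, 200, Duration.ofSeconds(5));
        System.out.println(records.size()); // 200
    }
}
```

A fix like KAFKA-10251's amounts to raising the `timeout` argument so slow CI machines can accumulate the expected record count before the deadline fires.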


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #587

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10251: increase timeout for consuming records (#10228)

[github] KAFKA-10357: Extract setup of changelog from Streams partition 
assignor (#10163)

[github] KAFKA-12369; Implement `ListTransactions` API (#10206)

[github] KAFKA-12177: apply log start offset retention before time and size 
based retention (#10216)


--
[...truncated 3.68 MB...]
CommandLineUtilsTest > testParseArgs() STARTED

CommandLineUtilsTest > testParseArgs() PASSED

CommandLineUtilsTest > testParseArgsWithMultipleDelimiters() STARTED

CommandLineUtilsTest > testParseArgsWithMultipleDelimiters() PASSED

CommandLineUtilsTest > testMaybeMergeOptionsDefaultValueIfNotExist() STARTED

CommandLineUtilsTest > testMaybeMergeOptionsDefaultValueIfNotExist() PASSED

CommandLineUtilsTest > testParseEmptyArgWithNoDelimiter() STARTED

CommandLineUtilsTest > testParseEmptyArgWithNoDelimiter() PASSED

CommandLineUtilsTest > testMaybeMergeOptionsDefaultOverwriteExisting() STARTED

CommandLineUtilsTest > testMaybeMergeOptionsDefaultOverwriteExisting() PASSED

CommandLineUtilsTest > testParseEmptyArgAsValid() STARTED

CommandLineUtilsTest > testParseEmptyArgAsValid() PASSED

CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting() STARTED

CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting() PASSED

JsonTest > testParseToWithInvalidJson() STARTED

JsonTest > testParseToWithInvalidJson() PASSED

JsonTest > testParseTo() STARTED

JsonTest > testParseTo() PASSED

JsonTest > testJsonParse() STARTED

JsonTest > testJsonParse() PASSED

JsonTest > testEncodeAsBytes() STARTED

JsonTest > testEncodeAsBytes() PASSED

JsonTest > testEncodeAsString() STARTED

JsonTest > testEncodeAsString() PASSED

ReplicationUtilsTest > testUpdateLeaderAndIsr() STARTED

ReplicationUtilsTest > testUpdateLeaderAndIsr() PASSED

PasswordEncoderTest > testEncoderConfigChange() STARTED

PasswordEncoderTest > testEncoderConfigChange() PASSED

PasswordEncoderTest > testEncodeDecodeAlgorithms() STARTED

PasswordEncoderTest > testEncodeDecodeAlgorithms() PASSED

PasswordEncoderTest > testEncodeDecode() STARTED

PasswordEncoderTest > testEncodeDecode() PASSED

ToolsUtilsTest > testIntegerMetric() STARTED

ToolsUtilsTest > testIntegerMetric() PASSED

ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart() STARTED

ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart() PASSED

JsonValueTest > testJsonObjectIterator() STARTED

JsonValueTest > testJsonObjectIterator() PASSED

JsonValueTest > testDecodeLong() STARTED

JsonValueTest > testDecodeLong() PASSED

JsonValueTest > testAsJsonObject() STARTED

JsonValueTest > testAsJsonObject() PASSED

JsonValueTest > testDecodeDouble() STARTED

JsonValueTest > testDecodeDouble() PASSED

JsonValueTest > testDecodeOption() STARTED

JsonValueTest > testDecodeOption() PASSED

JsonValueTest > testDecodeString() STARTED

JsonValueTest > testDecodeString() PASSED

JsonValueTest > testJsonValueToString() STARTED

JsonValueTest > testJsonValueToString() PASSED

JsonValueTest > testAsJsonObjectOption() STARTED

JsonValueTest > testAsJsonObjectOption() PASSED

JsonValueTest > testAsJsonArrayOption() STARTED

JsonValueTest > testAsJsonArrayOption() PASSED

JsonValueTest > testAsJsonArray() STARTED

JsonValueTest > testAsJsonArray() PASSED

JsonValueTest > testJsonValueHashCode() STARTED

JsonValueTest > testJsonValueHashCode() PASSED

JsonValueTest > testDecodeInt() STARTED

JsonValueTest > testDecodeInt() PASSED

JsonValueTest > testDecodeMap() STARTED

JsonValueTest > testDecodeMap() PASSED

JsonValueTest > testDecodeSeq() STARTED

JsonValueTest > testDecodeSeq() PASSED

JsonValueTest > testJsonObjectGet() STARTED

JsonValueTest > testJsonObjectGet() PASSED

JsonValueTest > testJsonValueEquals() STARTED

JsonValueTest > testJsonValueEquals() PASSED

JsonValueTest > testJsonArrayIterator() STARTED

JsonValueTest > testJsonArrayIterator() PASSED

JsonValueTest > testJsonObjectApply() STARTED

JsonValueTest > testJsonObjectApply() PASSED

JsonValueTest > testDecodeBoolean() STARTED

JsonValueTest > testDecodeBoolean() PASSED

TopicFilterTest > testIncludeLists() STARTED

TopicFilterTest > testIncludeLists() PASSED

QuotaUtilsTest > testBoundedThrottleTimeObservedRateAboveQuotaAboveLimit() 
STARTED

QuotaUtilsTest > testBoundedThrottleTimeObservedRateAboveQuotaAboveLimit() 
PASSED

QuotaUtilsTest > testThrottleTimeObservedRateBelowQuota() STARTED

QuotaUtilsTest > testThrottleTimeObservedRateBelowQuota() PASSED

QuotaUtilsTest > testBoundedThrottleTimeObservedRateBelowQuota() STARTED

QuotaUtilsTest > testBoundedThrottleTimeObservedRateBelowQuota() PASSED

QuotaUtilsTest > 
testBoundedThrottleTimeThrowsExceptionIfProvidedNonRateMetric() STARTED

QuotaUtilsTest > 
testBoundedThrottleTimeThrowsExceptionIfProvidedNonRateMetric() PASSED

QuotaUtilsTest > 

Build failed in Jenkins: Kafka » kafka-2.8-jdk8 #47

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[Jason Gustafson] KAFKA-12394; Return `TOPIC_AUTHORIZATION_FAILED` in delete 
topic response if no describe permission (#10223)


--
[...truncated 3.62 MB...]
PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaInIsrNotLive() PASSED

PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr() STARTED

PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr() PASSED

PartitionLeaderElectionAlgorithmsTest > testReassignPartitionLeaderElection() 
STARTED

PartitionLeaderElectionAlgorithmsTest > testReassignPartitionLeaderElection() 
PASSED

PartitionLeaderElectionAlgorithmsTest > testOfflinePartitionLeaderElection() 
STARTED

PartitionLeaderElectionAlgorithmsTest > testOfflinePartitionLeaderElection() 
PASSED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection() STARTED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection() PASSED

PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr() STARTED

PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr() PASSED

PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown() 
PASSED

PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled() 
PASSED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive() 
PASSED

PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled() 
STARTED

PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled() 
PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestWithAlreadyDefinedDeletedPartition() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestWithAlreadyDefinedDeletedPartition() PASSED

ControllerChannelManagerTest > testUpdateMetadataInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testUpdateMetadataInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > testLeaderAndIsrRequestIsNew() STARTED

ControllerChannelManagerTest > testLeaderAndIsrRequestIsNew() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicQueuedForDeletion() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicQueuedForDeletion() PASSED

ControllerChannelManagerTest > 
testLeaderAndIsrRequestSentToLiveOrShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testLeaderAndIsrRequestSentToLiveOrShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testStopReplicaInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > 
testStopReplicaSentOnlyToLiveAndShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testStopReplicaSentOnlyToLiveAndShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaGroupsByBroker() STARTED

ControllerChannelManagerTest > testStopReplicaGroupsByBroker() PASSED

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() PASSED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
STARTED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
PASSED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
PASSED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaRequestSent() STARTED

ControllerChannelManagerTest > testStopReplicaRequestSent() PASSED


Kafka logos

2021-03-02 Thread Justin Mclean
Hi,

I notice that the Apache Kafka logo on the Kafka home page [1] is different from 
what is listed here [2]. I'm creating some training material and was wondering 
which is the correct logo to use.

Thanks,
Justin

1. https://kafka.apache.org
2. http://apache.org/logos/?#kafka


Re: [DISCUSS] Apache Kafka 2.8.0 release

2021-03-02 Thread Dhruvil Shah
Thanks, John. The fix for KAFKA-12254 is now merged into 2.8.

On Tue, Mar 2, 2021 at 11:54 AM John Roesler  wrote:

> Hi Dhruvil,
>
> Thanks for this fix. I agree it would be good to get it in
> for 2.8.0, so I have added it to the fix versions in KAFKA-
> 12254.
>
> Please go ahead and cherry-pick your fix onto the 2.8
> branch.
>
> Thanks!
> -John
>
> On Mon, 2021-03-01 at 09:36 -0800, Dhruvil Shah wrote:
> > Hi John,
> >
> > I would like to bring up
> https://issues.apache.org/jira/browse/KAFKA-12254
> > as a blocker candidate for 2.8.0. While this is not a regression, the
> > issue could lead to data loss in certain cases. The fix is trivial so it
> > may be worth bringing it into 2.8.0. Let me know what you think.
> >
> > - Dhruvil
> >
> > On Mon, Feb 22, 2021 at 7:50 AM John Roesler 
> wrote:
> >
> > > Thanks for the heads-up, Chia-Ping,
> > >
> > > I agree it would be good to include that fix.
> > >
> > > Thanks,
> > > John
> > >
> > > On Mon, 2021-02-22 at 09:48 +, Chia-Ping Tsai wrote:
> > > > hi John,
> > > >
> > > > There is a PR (https://github.com/apache/kafka/pull/10024) fixing
> > > following test error.
> > > >
> > > > 14:00:28 Execution failed for task ':core:test'.
> > > > 14:00:28 > Process 'Gradle Test Executor 24' finished with non-zero
> exit
> > > value 1
> > > > 14:00:28   This problem might be caused by incorrect test process
> > > configuration.
> > > > 14:00:28   Please refer to the test execution section in the User
> Manual
> > > at
> > > >
> > > > This error obstructs us from running integration tests so I'd like to
> > > push it to 2.8 branch after it gets approved.
> > > >
> > > > Best Regards,
> > > > Chia-Ping
> > > >
> > > > On 2021/02/18 16:23:13, "John Roesler"  wrote:
> > > > > Hello again, all.
> > > > >
> > > > > This is a notice that we are now in Code Freeze for the 2.8 branch.
> > > > >
> > > > > From now until the release, only fixes for blockers should be
> merged
> > > to the release branch. Fixes for failing tests are allowed and
> encouraged.
> > > Documentation-only commits are also ok, in case you have forgotten to
> > > update the docs for some features in 2.8.0.
> > > > >
> > > > > Once we have a green build and passing system tests, I will cut the
> > > first RC.
> > > > >
> > > > > Thank you,
> > > > > John
> > > > >
> > > > > On Sun, Feb 7, 2021, at 09:59, John Roesler wrote:
> > > > > > Hello all,
> > > > > >
> > > > > > I have just cut the branch for 2.8 and sent the notification
> > > > > > email to the dev mailing list.
> > > > > >
> > > > > > As a reminder, the next checkpoint toward the 2.8.0 release
> > > > > > is Code Freeze on Feb 17th.
> > > > > >
> > > > > > To ensure a high-quality release, we should now focus our
> > > > > > efforts on stabilizing the 2.8 branch, including resolving
> > > > > > failures, writing new tests, and fixing documentation.
> > > > > >
> > > > > > Thanks as always for your contributions,
> > > > > > John
> > > > > >
> > > > > >
> > > > > > On Wed, 2021-02-03 at 14:18 -0600, John Roesler wrote:
> > > > > > > Hello again, all,
> > > > > > >
> > > > > > > This is a reminder that today is the Feature Freeze
> > > > > > > deadline. To avoid any last-minute crunch or time-zone
> > > > > > > unfairness, I'll cut the branch toward the end of the week.
> > > > > > >
> > > > > > > Please wrap up your features and transition fully into a
> > > > > > > stabilization mode. The next checkpoint is Code Freeze on
> > > > > > > Feb 17th.
> > > > > > >
> > > > > > > Thanks as always for all of your contributions,
> > > > > > > John
> > > > > > >
> > > > > > > On Wed, 2021-01-27 at 12:17 -0600, John Roesler wrote:
> > > > > > > > Hello again, all.
> > > > > > > >
> > > > > > > > This is a reminder that *today* is the KIP freeze for Apache
> > > > > > > > Kafka 2.8.0.
> > > > > > > >
> > > > > > > > The next checkpoint is the Feature Freeze on Feb 3rd.
> > > > > > > >
> > > > > > > > When considering any last-minute KIPs today, please be
> > > > > > > > mindful of the scope, since we have only one week to merge a
> > > > > > > > stable implementation of the KIP.
> > > > > > > >
> > > > > > > > For those whose KIPs have been accepted already, please work
> > > > > > > > closely with your reviewers so that your features can be
> > > > > > > > merged in a stable form before the Feb 3rd cutoff. Also,
> > > > > > > > don't forget to update the documentation as part of your
> > > > > > > > feature.
> > > > > > > >
> > > > > > > > Finally, as a gentle reminder to all contributors. There
> > > > > > > > seems to have been a recent increase in test and system test
> > > > > > > > failures. Please take some time starting now to stabilize
> > > > > > > > the codebase so we can ensure a high quality and timely
> > > > > > > > 2.8.0 release!
> > > > > > > >
> > > > > > > > Thanks to all of you for your contributions,
> > > > > > > > John
> > > > > > > >
> > > > > > > > On Sat, 2021-01-23 at 18:15 +0300, Ivan Ponomarev wrote:
> > > > > > > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #530

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10251: increase timeout for consuming records (#10228)

[github] KAFKA-10357: Extract setup of changelog from Streams partition 
assignor (#10163)


--
[...truncated 3.65 MB...]

TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData() STARTED

TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData() PASSED

TransactionsTest > testSendOffsetsToTransactionTimeout() STARTED

TransactionsTest > testSendOffsetsToTransactionTimeout() PASSED

TransactionsTest > testFailureToFenceEpoch() STARTED

TransactionsTest > testFailureToFenceEpoch() PASSED

TransactionsTest > testFencingOnSend() STARTED

TransactionsTest > testFencingOnSend() PASSED

TransactionsTest > testFencingOnCommit() STARTED

TransactionsTest > testFencingOnCommit() PASSED

TransactionsTest > testAbortTransactionTimeout() STARTED

TransactionsTest > testAbortTransactionTimeout() PASSED

TransactionsTest > testMultipleMarkersOneLeader() STARTED

TransactionsTest > testMultipleMarkersOneLeader() PASSED

TransactionsTest > testCommitTransactionTimeout() STARTED

TransactionsTest > testCommitTransactionTimeout() PASSED

SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure() STARTED

SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure() PASSED

SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure() STARTED

SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure() PASSED

SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationSuccess() STARTED

SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationSuccess() PASSED

SaslClientsWithInvalidCredentialsTest > testProducerWithAuthenticationFailure() 
STARTED

SaslClientsWithInvalidCredentialsTest > testProducerWithAuthenticationFailure() 
PASSED

SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure() STARTED

SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure() PASSED

SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure() 
STARTED

SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure() 
PASSED

SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure() STARTED

SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure() PASSED

SaslClientsWithInvalidCredentialsTest > testConsumerWithAuthenticationFailure() 
STARTED

SaslClientsWithInvalidCredentialsTest > testConsumerWithAuthenticationFailure() 
PASSED

UserClientIdQuotaTest > testProducerConsumerOverrideLowerQuota() STARTED

UserClientIdQuotaTest > testProducerConsumerOverrideLowerQuota() PASSED

UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled() STARTED

UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled() PASSED

UserClientIdQuotaTest > testThrottledProducerConsumer() STARTED

UserClientIdQuotaTest > testThrottledProducerConsumer() PASSED

UserClientIdQuotaTest > testQuotaOverrideDelete() STARTED

UserClientIdQuotaTest > testQuotaOverrideDelete() PASSED

UserClientIdQuotaTest > testThrottledRequest() STARTED

UserClientIdQuotaTest > testThrottledRequest() PASSED

ZooKeeperClientTest > testZNodeChangeHandlerForDataChange() STARTED

ZooKeeperClientTest > testZNodeChangeHandlerForDataChange() PASSED

ZooKeeperClientTest > testZooKeeperSessionStateMetric() STARTED

ZooKeeperClientTest > testZooKeeperSessionStateMetric() PASSED

ZooKeeperClientTest > testExceptionInBeforeInitializingSession() STARTED

ZooKeeperClientTest > testExceptionInBeforeInitializingSession() PASSED

ZooKeeperClientTest > testGetChildrenExistingZNode() STARTED

ZooKeeperClientTest > testGetChildrenExistingZNode() PASSED

ZooKeeperClientTest > testConnection() STARTED

ZooKeeperClientTest > testConnection() PASSED

ZooKeeperClientTest > testZNodeChangeHandlerForCreation() STARTED

ZooKeeperClientTest > testZNodeChangeHandlerForCreation() PASSED

ZooKeeperClientTest > testGetAclExistingZNode() STARTED

ZooKeeperClientTest > testGetAclExistingZNode() PASSED

ZooKeeperClientTest > testSessionExpiryDuringClose() STARTED

ZooKeeperClientTest > testSessionExpiryDuringClose() PASSED

ZooKeeperClientTest > testReinitializeAfterAuthFailure() STARTED

ZooKeeperClientTest > testReinitializeAfterAuthFailure() PASSED

ZooKeeperClientTest > testSetAclNonExistentZNode() STARTED

ZooKeeperClientTest > testSetAclNonExistentZNode() PASSED

ZooKeeperClientTest > testConnectionLossRequestTermination() STARTED

ZooKeeperClientTest > testConnectionLossRequestTermination() PASSED

ZooKeeperClientTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #560

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10251: increase timeout for consuming records (#10228)

[github] KAFKA-10357: Extract setup of changelog from Streams partition 
assignor (#10163)


--
[...truncated 3.68 MB...]
AclAuthorizerTest > testAuthorizeByResourceTypeWithAllOperationAce() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllOperationAce() PASSED

AclAuthorizerTest > testAllowAccessWithCustomPrincipal() STARTED

AclAuthorizerTest > testAllowAccessWithCustomPrincipal() PASSED

AclAuthorizerTest > testDeleteAclOnWildcardResource() STARTED

AclAuthorizerTest > testDeleteAclOnWildcardResource() PASSED

AclAuthorizerTest > testAuthorizerZkConfigFromKafkaConfig() STARTED

AclAuthorizerTest > testAuthorizerZkConfigFromKafkaConfig() PASSED

AclAuthorizerTest > testChangeListenerTiming() STARTED

AclAuthorizerTest > testChangeListenerTiming() PASSED

AclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 STARTED

AclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions()
 PASSED

AclAuthorizerTest > testAuthorzeByResourceTypeSuperUserHasAccess() STARTED

AclAuthorizerTest > testAuthorzeByResourceTypeSuperUserHasAccess() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypePrefixedResourceDenyDominate() 
STARTED

AclAuthorizerTest > testAuthorizeByResourceTypePrefixedResourceDenyDominate() 
PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeMultipleAddAndRemove() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeMultipleAddAndRemove() PASSED

AclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() STARTED

AclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow() PASSED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnPrefixedResource() 
STARTED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnPrefixedResource() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeDenyTakesPrecedence() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeDenyTakesPrecedence() PASSED

AclAuthorizerTest > testHighConcurrencyModificationOfResourceAcls() STARTED

AclAuthorizerTest > testHighConcurrencyModificationOfResourceAcls() PASSED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllPrincipalAce() STARTED

AclAuthorizerTest > testAuthorizeByResourceTypeWithAllPrincipalAce() PASSED

AclAuthorizerTest > testAuthorizeWithEmptyResourceName() STARTED

AclAuthorizerTest > testAuthorizeWithEmptyResourceName() PASSED

AclAuthorizerTest > testAuthorizeThrowsOnNonLiteralResource() STARTED

AclAuthorizerTest > testAuthorizeThrowsOnNonLiteralResource() PASSED

AclAuthorizerTest > testDeleteAllAclOnPrefixedResource() STARTED

AclAuthorizerTest > testDeleteAllAclOnPrefixedResource() PASSED

AclAuthorizerTest > testAddAclsOnLiteralResource() STARTED

AclAuthorizerTest > testAddAclsOnLiteralResource() PASSED

AclAuthorizerTest > testGetAclsPrincipal() STARTED

AclAuthorizerTest > testGetAclsPrincipal() PASSED

AclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet() STARTED

AclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet() PASSED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnWildcardResource() 
STARTED

AclAuthorizerTest > testAccessAllowedIfAllowAclExistsOnWildcardResource() PASSED

AclAuthorizerTest > testLoadCache() STARTED

AclAuthorizerTest > testLoadCache() PASSED

AuthorizerInterfaceDefaultTest > testAuthorizeByResourceTypeWithAllHostAce() 
STARTED

AuthorizerInterfaceDefaultTest > testAuthorizeByResourceTypeWithAllHostAce() 
PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeIsolationUnrelatedDenyWontDominateAllow() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeIsolationUnrelatedDenyWontDominateAllow() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWildcardResourceDenyDominate() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWildcardResourceDenyDominate() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWithAllOperationAce() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeWithAllOperationAce() PASSED

AuthorizerInterfaceDefaultTest > testAuthorzeByResourceTypeSuperUserHasAccess() 
STARTED

AuthorizerInterfaceDefaultTest > testAuthorzeByResourceTypeSuperUserHasAccess() 
PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypePrefixedResourceDenyDominate() STARTED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypePrefixedResourceDenyDominate() PASSED

AuthorizerInterfaceDefaultTest > 
testAuthorizeByResourceTypeMultipleAddAndRemove() STARTED

AuthorizerInterfaceDefaultTest > 

Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #586

2021-03-02 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-2.8-jdk8 #46

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[Guozhang Wang] KAFKA-12289: Adding test cases for prefix scan in 
InMemoryKeyValueStore (#10052)

[Guozhang Wang] KAFKA-10766: Unit test cases for RocksDBRangeIterator (#9717)


--
[...truncated 7.24 MB...]

FetchSessionTest > testLastFetchedEpoch() STARTED

FetchSessionTest > testLastFetchedEpoch() PASSED

FetchSessionTest > testFetchSessionExpiration() STARTED

FetchSessionTest > testFetchSessionExpiration() PASSED

FetchSessionTest > testZeroSizeFetchSession() STARTED

FetchSessionTest > testZeroSizeFetchSession() PASSED

FetchSessionTest > testNewSessionId() STARTED

FetchSessionTest > testNewSessionId() PASSED

PartitionLockTest > testNoLockContentionWithoutIsrUpdate() STARTED

PartitionLockTest > testNoLockContentionWithoutIsrUpdate() PASSED

PartitionLockTest > testAppendReplicaFetchWithUpdateIsr() STARTED

PartitionLockTest > testAppendReplicaFetchWithUpdateIsr() PASSED

PartitionLockTest > testAppendReplicaFetchWithSchedulerCheckForShrinkIsr() 
STARTED

PartitionLockTest > testAppendReplicaFetchWithSchedulerCheckForShrinkIsr() 
PASSED

PartitionLockTest > testGetReplicaWithUpdateAssignmentAndIsr() STARTED

PartitionLockTest > testGetReplicaWithUpdateAssignmentAndIsr() PASSED

PartitionTest > testMakeLeaderDoesNotUpdateEpochCacheForOldFormats() STARTED

PartitionTest > testMakeLeaderDoesNotUpdateEpochCacheForOldFormats() PASSED

PartitionTest > testIsrExpansion() STARTED

PartitionTest > testIsrExpansion() PASSED

PartitionTest > testReadRecordEpochValidationForLeader() STARTED

PartitionTest > testReadRecordEpochValidationForLeader() PASSED

PartitionTest > testAlterIsrUnknownTopic() STARTED

PartitionTest > testAlterIsrUnknownTopic() PASSED

PartitionTest > testIsrNotShrunkIfUpdateFails() STARTED

PartitionTest > testIsrNotShrunkIfUpdateFails() PASSED

PartitionTest > testFetchOffsetForTimestampEpochValidationForFollower() STARTED

PartitionTest > testFetchOffsetForTimestampEpochValidationForFollower() PASSED

PartitionTest > testIsrNotExpandedIfUpdateFails() STARTED

PartitionTest > testIsrNotExpandedIfUpdateFails() PASSED

PartitionTest > testLogConfigDirtyAsBrokerUpdated() STARTED

PartitionTest > testLogConfigDirtyAsBrokerUpdated() PASSED

PartitionTest > testAddAndRemoveMetrics() STARTED

PartitionTest > testAddAndRemoveMetrics() PASSED

PartitionTest > testListOffsetIsolationLevels() STARTED

PartitionTest > testListOffsetIsolationLevels() PASSED

PartitionTest > testAppendRecordsAsFollowerBelowLogStartOffset() STARTED

PartitionTest > testAppendRecordsAsFollowerBelowLogStartOffset() PASSED

PartitionTest > testFetchLatestOffsetIncludesLeaderEpoch() STARTED

PartitionTest > testFetchLatestOffsetIncludesLeaderEpoch() PASSED

PartitionTest > testUnderReplicatedPartitionsCorrectSemantics() STARTED

PartitionTest > testUnderReplicatedPartitionsCorrectSemantics() PASSED

PartitionTest > testFetchOffsetSnapshotEpochValidationForFollower() STARTED

PartitionTest > testFetchOffsetSnapshotEpochValidationForFollower() PASSED

PartitionTest > testMaybeShrinkIsr() STARTED

PartitionTest > testMaybeShrinkIsr() PASSED

PartitionTest > testLogConfigNotDirty() STARTED

PartitionTest > testLogConfigNotDirty() PASSED

PartitionTest > testMonotonicOffsetsAfterLeaderChange() STARTED

PartitionTest > testMonotonicOffsetsAfterLeaderChange() PASSED

PartitionTest > testUpdateAssignmentAndIsr() STARTED

PartitionTest > testUpdateAssignmentAndIsr() PASSED

PartitionTest > testMakeFollowerWithNoLeaderIdChange() STARTED

PartitionTest > testMakeFollowerWithNoLeaderIdChange() PASSED

PartitionTest > testAppendRecordsToFollowerWithNoReplicaThrowsException() 
STARTED

PartitionTest > testAppendRecordsToFollowerWithNoReplicaThrowsException() PASSED

PartitionTest > 
testFollowerDoesNotJoinISRUntilCaughtUpToOffsetWithinCurrentLeaderEpoch() 
STARTED

PartitionTest > 
testFollowerDoesNotJoinISRUntilCaughtUpToOffsetWithinCurrentLeaderEpoch() PASSED

PartitionTest > testSingleInFlightAlterIsr() STARTED

PartitionTest > testSingleInFlightAlterIsr() PASSED

PartitionTest > testShouldNotShrinkIsrIfFollowerCaughtUpToLogEnd() STARTED

PartitionTest > testShouldNotShrinkIsrIfFollowerCaughtUpToLogEnd() PASSED

PartitionTest > testFetchOffsetSnapshotEpochValidationForLeader() STARTED

PartitionTest > testFetchOffsetSnapshotEpochValidationForLeader() PASSED

PartitionTest > testOffsetForLeaderEpochValidationForLeader() STARTED

PartitionTest > testOffsetForLeaderEpochValidationForLeader() PASSED

PartitionTest > testAtMinIsr() STARTED

PartitionTest > testAtMinIsr() PASSED

PartitionTest > testAlterIsrInvalidVersion() STARTED

PartitionTest > testAlterIsrInvalidVersion() PASSED

PartitionTest > testOffsetForLeaderEpochValidationForFollower() STARTED

PartitionTest > testOffsetForLeaderEpochValidationForFollower() PASSED

PartitionTest > testMakeLeaderUpdatesEpochCache() STARTED


[jira] [Resolved] (KAFKA-12177) Retention is not idempotent

2021-03-02 Thread Jun Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-12177.
-
Fix Version/s: 3.0.0
 Assignee: Lucas Bradstreet
   Resolution: Fixed

merged the PR to trunk

> Retention is not idempotent
> ---
>
> Key: KAFKA-12177
> URL: https://issues.apache.org/jira/browse/KAFKA-12177
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Lucas Bradstreet
>Assignee: Lucas Bradstreet
>Priority: Minor
> Fix For: 3.0.0
>
>
> Kafka today applies retention in the following order:
>  # Time
>  # Size
>  # Log start offset
> Today it is possible for a segment with offsets less than the log start 
> offset to contain data that is not deletable due to time retention. This 
> means that it's possible for log start offset retention to unblock further 
> deletions as a result of time based retention. Note that this does require a 
> case where the max timestamp for each segment increases, decreases and then 
> increases again. Even so it would be nice to make retention idempotent by 
> applying log start offset retention first, followed by size and time. This 
> would also be potentially cheaper to perform, as neither log start offset 
> nor size retention requires the maxTimestamp for a segment to be loaded 
> from disk after a broker restart.
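The reordering proposed in the issue can be sketched as follows. This is an illustrative model, not Kafka's actual implementation: `Segment` is a simplified stand-in for a log segment, and the helper checks the cheap log-start-offset rule before the size and time rules, so a second pass over the same segments never finds newly deletable data.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    base_offset: int
    last_offset: int
    size_bytes: int
    max_timestamp_ms: int  # in Kafka this may need a disk read after restart

def deletable_prefix(segments: List[Segment], log_start_offset: int,
                     retention_bytes: int, retention_ms: int, now_ms: int) -> int:
    """Return how many leading segments can be deleted, applying
    log-start-offset retention first, then size, then time."""
    deletable = 0
    remaining_bytes = sum(s.size_bytes for s in segments)
    for seg in segments:
        below_start = seg.last_offset < log_start_offset                  # 1. log start offset
        over_size = remaining_bytes - seg.size_bytes >= retention_bytes   # 2. size
        expired = now_ms - seg.max_timestamp_ms > retention_ms            # 3. time
        if below_start or over_size or expired:
            deletable += 1
            remaining_bytes -= seg.size_bytes
        else:
            break  # retention only ever removes a prefix of the log
    return deletable
```

Because the offset and size rules never consult `max_timestamp_ms`, a broker restart does not force segment timestamps to be reloaded before those two rules can run.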



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12369) Implement ListTransactions API

2021-03-02 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-12369.
-
Fix Version/s: 3.0.0
   Resolution: Fixed

> Implement ListTransactions API
> --
>
> Key: KAFKA-12369
> URL: https://issues.apache.org/jira/browse/KAFKA-12369
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 3.0.0
>
>
> This tracks the implementation of the `ListTransactions` API documented by 
> KIP-664: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-664%3A+Provide+tooling+to+detect+and+abort+hanging+transactions.
>  This API is similar to `ListGroups` for consumer groups.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #559

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12289: Adding test cases for prefix scan in 
InMemoryKeyValueStore (#10052)

[github] KAFKA-10766: Unit test cases for RocksDBRangeIterator (#9717)

[github] MINOR: Disable transactional/idempotent system tests for Raft quorums 
(#10224)

[github] KAFKA-12394; Return `TOPIC_AUTHORIZATION_FAILED` in delete topic 
response if no describe permission (#10223)


--
[...truncated 3.68 MB...]
PlaintextConsumerTest > testFetchHonoursFetchSizeIfLargeRecordNotFirst() PASSED

PlaintextConsumerTest > testSeek() STARTED

PlaintextConsumerTest > testSeek() PASSED

PlaintextConsumerTest > testConsumingWithNullGroupId() STARTED

PlaintextConsumerTest > testConsumingWithNullGroupId() PASSED

PlaintextConsumerTest > testPositionAndCommit() STARTED

PlaintextConsumerTest > testPositionAndCommit() PASSED

PlaintextConsumerTest > testFetchRecordLargerThanMaxPartitionFetchBytes() 
STARTED

PlaintextConsumerTest > testFetchRecordLargerThanMaxPartitionFetchBytes() PASSED

PlaintextConsumerTest > testUnsubscribeTopic() STARTED

PlaintextConsumerTest > testUnsubscribeTopic() PASSED

PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose() STARTED

PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose() PASSED

PlaintextConsumerTest > testFetchRecordLargerThanFetchMaxBytes() STARTED

PlaintextConsumerTest > testFetchRecordLargerThanFetchMaxBytes() PASSED

PlaintextConsumerTest > testMultiConsumerDefaultAssignment() STARTED

PlaintextConsumerTest > testMultiConsumerDefaultAssignment() PASSED

PlaintextConsumerTest > testAutoCommitOnClose() STARTED

PlaintextConsumerTest > testAutoCommitOnClose() PASSED

PlaintextConsumerTest > testListTopics() STARTED

PlaintextConsumerTest > testListTopics() PASSED

PlaintextConsumerTest > testExpandingTopicSubscriptions() STARTED

PlaintextConsumerTest > testExpandingTopicSubscriptions() PASSED

PlaintextConsumerTest > testInterceptors() STARTED

PlaintextConsumerTest > testInterceptors() PASSED

PlaintextConsumerTest > testConsumingWithEmptyGroupId() STARTED

PlaintextConsumerTest > testConsumingWithEmptyGroupId() PASSED

PlaintextConsumerTest > testPatternUnsubscription() STARTED

PlaintextConsumerTest > testPatternUnsubscription() PASSED

PlaintextConsumerTest > testGroupConsumption() STARTED

PlaintextConsumerTest > testGroupConsumption() PASSED

PlaintextConsumerTest > testPartitionsFor() STARTED

PlaintextConsumerTest > testPartitionsFor() PASSED

PlaintextConsumerTest > testAutoCommitOnRebalance() STARTED

PlaintextConsumerTest > testAutoCommitOnRebalance() PASSED

PlaintextConsumerTest > testInterceptorsWithWrongKeyValue() STARTED

PlaintextConsumerTest > testInterceptorsWithWrongKeyValue() PASSED

PlaintextConsumerTest > testPerPartitionLeadWithMaxPollRecords() STARTED

PlaintextConsumerTest > testPerPartitionLeadWithMaxPollRecords() PASSED

PlaintextConsumerTest > testHeaders() STARTED

PlaintextConsumerTest > testHeaders() PASSED

PlaintextConsumerTest > testMaxPollIntervalMsDelayInAssignment() STARTED

PlaintextConsumerTest > testMaxPollIntervalMsDelayInAssignment() PASSED

PlaintextConsumerTest > testHeadersSerializerDeserializer() STARTED

PlaintextConsumerTest > testHeadersSerializerDeserializer() PASSED

PlaintextConsumerTest > testDeprecatedPollBlocksForAssignment() STARTED

PlaintextConsumerTest > testDeprecatedPollBlocksForAssignment() PASSED

PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment() STARTED

PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment() PASSED

PlaintextConsumerTest > testPartitionPauseAndResume() STARTED

PlaintextConsumerTest > testPartitionPauseAndResume() PASSED

PlaintextConsumerTest > testQuotaMetricsNotCreatedIfNoQuotasConfigured() STARTED

PlaintextConsumerTest > testQuotaMetricsNotCreatedIfNoQuotasConfigured() PASSED

PlaintextConsumerTest > testPerPartitionLagMetricsCleanUpWithSubscribe() STARTED

PlaintextConsumerTest > testPerPartitionLagMetricsCleanUpWithSubscribe() PASSED

PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime() STARTED

PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime() PASSED

PlaintextConsumerTest > testPerPartitionLagMetricsWhenReadCommitted() STARTED

PlaintextConsumerTest > testPerPartitionLagMetricsWhenReadCommitted() PASSED

PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup() STARTED

PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup() PASSED

PlaintextConsumerTest > testMaxPollRecords() STARTED

PlaintextConsumerTest > testMaxPollRecords() PASSED

PlaintextConsumerTest > testAutoOffsetReset() STARTED

PlaintextConsumerTest > testAutoOffsetReset() PASSED

PlaintextConsumerTest > testPerPartitionLagWithMaxPollRecords() STARTED

PlaintextConsumerTest > testPerPartitionLagWithMaxPollRecords() PASSED

PlaintextConsumerTest > testFetchInvalidOffset() STARTED

PlaintextConsumerTest > 

Re: [VOTE] KIP-715: Expose Committed offset in streams

2021-03-02 Thread Walker Carlson
Thank you everyone for voting. This KIP has passed with:

+4 binding votes (Boyang, Sophie, Guozhang and Matthias)
+2 non-binding (Leah and myself)

best,
Walker

On Tue, Mar 2, 2021 at 11:18 AM Matthias J. Sax  wrote:

> +1 (binding)
>
> On 3/1/21 4:23 PM, Guozhang Wang wrote:
> > Thanks Walker for the updated KIP, +1 (binding)
> >
> >
> > Guozhang
> >
> > On Mon, Mar 1, 2021 at 3:47 PM Sophie Blee-Goldman 
> > wrote:
> >
> >> Thanks for the KIP! +1 (binding)
> >>
> >> Sophie
> >>
> >> On Mon, Mar 1, 2021 at 10:04 AM Leah Thomas 
> wrote:
> >>
> >>> Hey Walker,
> >>>
> >>> Thanks for leading this discussion. +1 from me, non-binding
> >>>
> >>> Leah
> >>>
> >>> On Mon, Mar 1, 2021 at 12:37 AM Boyang Chen <
> reluctanthero...@gmail.com>
> >>> wrote:
> >>>
>  Thanks Walker for the proposal, +1 (binding) from me.
> 
>  On Fri, Feb 26, 2021 at 12:42 PM Walker Carlson <
> wcarl...@confluent.io
> >>>
>  wrote:
> 
> > Hello all,
> >
> > I would like to bring KIP-715 to a vote. Here is the KIP:
> > https://cwiki.apache.org/confluence/x/aRRRCg.
> >
> > Walker
> >
> 
> >>>
> >>
> >
> >
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #529

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12289: Adding test cases for prefix scan in 
InMemoryKeyValueStore (#10052)

[github] KAFKA-10766: Unit test cases for RocksDBRangeIterator (#9717)

[github] MINOR: Disable transactional/idempotent system tests for Raft quorums 
(#10224)

[github] KAFKA-12394; Return `TOPIC_AUTHORIZATION_FAILED` in delete topic 
response if no describe permission (#10223)


--
[...truncated 3.65 MB...]

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() STARTED

LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed() 
PASSED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() STARTED

LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed() PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
PASSED

LogValidatorTest > testNonCompressedV1() STARTED

LogValidatorTest > testNonCompressedV1() PASSED

LogValidatorTest > testNonCompressedV2() STARTED

LogValidatorTest > testNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV2() PASSED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() STARTED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() PASSED

LogValidatorTest > testRecompressionV1() STARTED

LogValidatorTest > testRecompressionV1() PASSED

LogValidatorTest > testRecompressionV2() STARTED

LogValidatorTest > testRecompressionV2() PASSED

ProducerStateManagerTest > testSkipEmptyTransactions() STARTED

ProducerStateManagerTest > testSkipEmptyTransactions() PASSED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() STARTED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() PASSED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
STARTED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
PASSED

ProducerStateManagerTest > testCoordinatorFencing() STARTED

ProducerStateManagerTest > testCoordinatorFencing() PASSED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() PASSED

ProducerStateManagerTest > testTruncateFullyAndStartAt() STARTED

ProducerStateManagerTest > testTruncateFullyAndStartAt() PASSED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() STARTED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() STARTED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() PASSED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() 
STARTED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() PASSED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() STARTED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() PASSED

ProducerStateManagerTest > testTakeSnapshot() STARTED

ProducerStateManagerTest > testTakeSnapshot() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() 
STARTED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() PASSED

ProducerStateManagerTest > testDeleteSnapshotsBefore() STARTED

ProducerStateManagerTest > testDeleteSnapshotsBefore() PASSED

ProducerStateManagerTest > testAppendEmptyControlBatch() STARTED

ProducerStateManagerTest > testAppendEmptyControlBatch() PASSED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() STARTED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() PASSED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
STARTED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
PASSED

ProducerStateManagerTest > testRemoveAllStraySnapshots() STARTED

ProducerStateManagerTest > testRemoveAllStraySnapshots() PASSED


[jira] [Created] (KAFKA-12403) Broker handling of delete topic events

2021-03-02 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12403:
---

 Summary: Broker handling of delete topic events
 Key: KAFKA-12403
 URL: https://issues.apache.org/jira/browse/KAFKA-12403
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson


This issue tracks completion of metadata listener support for the topic 
deletion event. When a topic is deleted, the broker needs to stop replicas, 
delete log data, and remove cached topic configurations.
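The three steps above can be sketched as a listener callback. All class and method names here are assumptions for illustration, not the broker's actual interfaces:

```python
# Illustrative sketch of the delete-topic handling described above; the
# collaborator names (replica_manager, log_manager, config_cache) are assumed.
class TopicDeleteListener:
    def __init__(self, replica_manager, log_manager, config_cache):
        self.replica_manager = replica_manager
        self.log_manager = log_manager
        self.config_cache = config_cache
        self.handled = []

    def on_topic_delete(self, topic: str) -> None:
        # 1. Stop replicas for the topic's partitions.
        self.replica_manager.stop_replicas(topic)
        # 2. Delete the on-disk log data.
        self.log_manager.delete_logs(topic)
        # 3. Drop any cached topic-level configuration.
        self.config_cache.remove(topic)
        self.handled.append(topic)
```

The ordering matters: replicas must stop fetching before their logs are deleted, and the cached configuration is dropped last so in-flight operations still see a consistent view.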



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.8.0 release

2021-03-02 Thread John Roesler
Hi Dhruvil,

Thanks for this fix. I agree it would be good to get it in
for 2.8.0, so I have added it to the fix versions in KAFKA-
12254.

Please go ahead and cherry-pick your fix onto the 2.8
branch.

Thanks!
-John

On Mon, 2021-03-01 at 09:36 -0800, Dhruvil Shah wrote:
> Hi John,
> 
> I would like to bring up https://issues.apache.org/jira/browse/KAFKA-12254
> as a blocker candidate for 2.8.0. While this is not a regression, the
> issue could lead to data loss in certain cases. The fix is trivial so it
> may be worth bringing it into 2.8.0. Let me know what you think.
> 
> - Dhruvil
> 
> On Mon, Feb 22, 2021 at 7:50 AM John Roesler  wrote:
> 
> > Thanks for the heads-up, Chia-Ping,
> > 
> > I agree it would be good to include that fix.
> > 
> > Thanks,
> > John
> > 
> > On Mon, 2021-02-22 at 09:48 +, Chia-Ping Tsai wrote:
> > > hi John,
> > > 
> > > There is a PR (https://github.com/apache/kafka/pull/10024) fixing
> > following test error.
> > > 
> > > 14:00:28 Execution failed for task ':core:test'.
> > > 14:00:28 > Process 'Gradle Test Executor 24' finished with non-zero exit
> > value 1
> > > 14:00:28   This problem might be caused by incorrect test process
> > configuration.
> > > 14:00:28   Please refer to the test execution section in the User Manual
> > at
> > > 
> > > This error prevents us from running integration tests, so I'd like to
> > push it to the 2.8 branch after it gets approved.
> > > 
> > > Best Regards,
> > > Chia-Ping
> > > 
> > > On 2021/02/18 16:23:13, "John Roesler"  wrote:
> > > > Hello again, all.
> > > > 
> > > > This is a notice that we are now in Code Freeze for the 2.8 branch.
> > > > 
> > > > From now until the release, only fixes for blockers should be merged
> > to the release branch. Fixes for failing tests are allowed and encouraged.
> > Documentation-only commits are also ok, in case you have forgotten to
> > update the docs for some features in 2.8.0.
> > > > 
> > > > Once we have a green build and passing system tests, I will cut the
> > first RC.
> > > > 
> > > > Thank you,
> > > > John
> > > > 
> > > > On Sun, Feb 7, 2021, at 09:59, John Roesler wrote:
> > > > > Hello all,
> > > > > 
> > > > > I have just cut the branch for 2.8 and sent the notification
> > > > > email to the dev mailing list.
> > > > > 
> > > > > As a reminder, the next checkpoint toward the 2.8.0 release
> > > > > is Code Freeze on Feb 17th.
> > > > > 
> > > > > To ensure a high-quality release, we should now focus our
> > > > > efforts on stabilizing the 2.8 branch, including resolving
> > > > > failures, writing new tests, and fixing documentation.
> > > > > 
> > > > > Thanks as always for your contributions,
> > > > > John
> > > > > 
> > > > > 
> > > > > On Wed, 2021-02-03 at 14:18 -0600, John Roesler wrote:
> > > > > > Hello again, all,
> > > > > > 
> > > > > > This is a reminder that today is the Feature Freeze
> > > > > > deadline. To avoid any last-minute crunch or time-zone
> > > > > > unfairness, I'll cut the branch toward the end of the week.
> > > > > > 
> > > > > > Please wrap up your features and transition fully into a
> > > > > > stabilization mode. The next checkpoint is Code Freeze on
> > > > > > Feb 17th.
> > > > > > 
> > > > > > Thanks as always for all of your contributions,
> > > > > > John
> > > > > > 
> > > > > > On Wed, 2021-01-27 at 12:17 -0600, John Roesler wrote:
> > > > > > > Hello again, all.
> > > > > > > 
> > > > > > > This is a reminder that *today* is the KIP freeze for Apache
> > > > > > > Kafka 2.8.0.
> > > > > > > 
> > > > > > > The next checkpoint is the Feature Freeze on Feb 3rd.
> > > > > > > 
> > > > > > > When considering any last-minute KIPs today, please be
> > > > > > > mindful of the scope, since we have only one week to merge a
> > > > > > > stable implementation of the KIP.
> > > > > > > 
> > > > > > > For those whose KIPs have been accepted already, please work
> > > > > > > closely with your reviewers so that your features can be
> > > > > > > merged in a stable form in before the Feb 3rd cutoff. Also,
> > > > > > > don't forget to update the documentation as part of your
> > > > > > > feature.
> > > > > > > 
> > > > > > > Finally, as a gentle reminder to all contributors. There
> > > > > > > seems to have been a recent increase in test and system test
> > > > > > > failures. Please take some time starting now to stabilize
> > > > > > > the codebase so we can ensure a high quality and timely
> > > > > > > 2.8.0 release!
> > > > > > > 
> > > > > > > Thanks to all of you for your contributions,
> > > > > > > John
> > > > > > > 
> > > > > > > On Sat, 2021-01-23 at 18:15 +0300, Ivan Ponomarev wrote:
> > > > > > > > Hi John,
> > > > > > > > 
> > > > > > > > KIP-418 is already implemented and reviewed, but I don't see
> > it in the
> > > > > > > > release plan. Can it be added?
> > > > > > > > 
> > > > > > > > 
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-418%3A+A+method-chaining+way+to+branch+KStream
> > > > > > > > 
> > > > > > > > 

Re: [VOTE] KIP-715: Expose Committed offset in streams

2021-03-02 Thread Matthias J. Sax
+1 (binding)

On 3/1/21 4:23 PM, Guozhang Wang wrote:
> Thanks Walker for the updated KIP, +1 (binding)
> 
> 
> Guozhang
> 
> On Mon, Mar 1, 2021 at 3:47 PM Sophie Blee-Goldman 
> wrote:
> 
>> Thanks for the KIP! +1 (binding)
>>
>> Sophie
>>
>> On Mon, Mar 1, 2021 at 10:04 AM Leah Thomas  wrote:
>>
>>> Hey Walker,
>>>
>>> Thanks for leading this discussion. +1 from me, non-binding
>>>
>>> Leah
>>>
>>> On Mon, Mar 1, 2021 at 12:37 AM Boyang Chen 
>>> wrote:
>>>
 Thanks Walker for the proposal, +1 (binding) from me.

 On Fri, Feb 26, 2021 at 12:42 PM Walker Carlson >>
 wrote:

> Hello all,
>
> I would like to bring KIP-715 to a vote. Here is the KIP:
> https://cwiki.apache.org/confluence/x/aRRRCg.
>
> Walker
>

>>>
>>
> 
> 


Re: [DISCUSS] KIP-715: Expose Committed offset in streams

2021-03-02 Thread Matthias J. Sax
I think calling it endOffset is still fine.

We should keep it "simple" for users and not introduce too many concepts.


-Matthias
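For illustration, a health check built on the offsets discussed in this thread would compute lag as endOffset minus committedOffset per task and partition. The dict shape below is an assumption for the sketch; the real Streams `TaskMetadata` API may differ:

```python
# Sketch of a lag-based health check using the offsets proposed in KIP-715.
# Each task is modeled as a plain dict; field names are assumptions.
def task_lags(thread_metadata):
    """Compute endOffset - committedOffset per (task, partition)."""
    lags = {}
    for task in thread_metadata:
        for partition, end in task["end_offsets"].items():
            committed = task["committed_offsets"].get(partition, 0)
            lags[(task["task_id"], partition)] = end - committed
    return lags
```

Since a task may briefly be missing from the metadata during a rebalance, such a check should tolerate absent tasks across consecutive calls rather than treat a gap as unhealthy.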

On 3/2/21 8:14 AM, Walker Carlson wrote:
> Okay, we can document that if the state is rebalancing, a Task could be
> between instances and so not show up for one localThreadMetadata call, but
> this should not cause a problem for repeated calls.
> 
> Bruno, to your questions. The endOffset is like the consumer's
> highWatermark and does not require a remote call. It seems its name is
> confusing and I should change the name from endOffset to HighWatermark to
> match the consumer.
> 
> walker
> 
> On Tue, Mar 2, 2021 at 4:43 AM Bruno Cadonna  wrote:
> 
>> Hi Walker,
>>
>> Thank you for the KIP!
>>
>> I somehow agree that we should document that some tasks may be missing.
>>
>> I have one question/comment. As far as I understand, your KIP adds two
>> methods that return data that is actually hosted on the brokers, namely
>> committedOffsets() and endOffsets(). Thus, we need a remote call to
>> fetch the data and consequently the cost of calling
>> localThreadMetaData() might increase substantially. I understand, that
>> for committedOffsets(), we could avoid the remote call by maintaining
>> the committedOffsets() locally, but can we also avoid the remote call
>> for endOffsets()? Should we allow users to pass a parameter to
>> localThreadMetaData() that skips the metadata that needs remote calls to
>> keep the costs for use cases that do not need the end offsets low?
>>
>> Best,
>> Bruno
>>
>> On 02.03.21 02:18, Matthias J. Sax wrote:
 but the user should
 not rely on all tasks being returned at any given time to begin with
>> since
 it's possible we are in between revoking and re-assigning a partition.
>>>
>>> Exactly. That is what I meant: the "hand off" phase of partitions during
>>> a rebalance. During this phase, some tasks are "missing" if you
>>> aggregate the information globally. My point was (even if it might be
>>> obvious to us) that it seems to be worth pointing out in the KIPs and in
>>> the docs.
>>>
>>> I meant "partial information" from a global POV (not partial for a
>>> single local instance).
>>>
 Also I mention that they return the highest value they had seen
 so far for any tasks they have assigned to them.
>>>
>>> For the shutdown case maybe, but after a task is closed its metadata
>>> should not be returned any longer IMHO.
>>>
>>>
>>> -Matthias
>>>
>>> On 3/1/21 4:46 PM, Walker Carlson wrote:
 I updated to use Optional, good idea Mathias.

 For the localThreadMetadata, it could already be called running a
 rebalance. Also I mention that they return the highest value they had
>> seen
 so far for any tasks they have assigned to them. I thought it would be
 useful to see the TaskMetadata while the Threads were shutting down. I
 think that there shouldn't really be partial information. If you think
>> this
 should be clarified better let me know.

 walker

 On Mon, Mar 1, 2021 at 3:45 PM Sophie Blee-Goldman >>
 wrote:

> Can you clarify your second question Matthias? If this is queried
>> during
> a cooperative rebalance, it should return the tasks as usual. If the
>> user
> is
> using eager rebalancing then this will not return any tasks, but the
>> user
> should
> not rely on all tasks being returned at any given time to begin with
>> since
> it's
> possible we are in between revoking and re-assigning a partition.
>
> What does "partial information" mean?
>
> (btw I agree that an Optional makes sense for
>> timeCurrentIdlingStarted())
>
> On Mon, Mar 1, 2021 at 11:46 AM Matthias J. Sax 
>> wrote:
>
>> Thanks the updating the KIP Walker.
>>
>> About, `timeCurrentIdlingStarted()`: should we return an `Optional`
>> instead of `-1` if a task is not idling.
>>
>>
>> As we allow to call `localThreadMetadata()` any time, could it be that
>> we report partial information during a rebalance? If yes, this should
>> be
>> pointed out, because if one want to implement a health check this
>> needs
>> to be taken into account.
>>
>> -Matthias
>>
>>
>> On 2/27/21 11:32 AM, Walker Carlson wrote:
>>> Sure thing Boyang,
>>>
>>> 1) it is in proposed changes. I expanded on it a bit more now.
>>> 2) done
>>> 3) and done :)
>>>
>>> thanks for the suggestions,
>>> walker
>>>
>>> On Fri, Feb 26, 2021 at 3:10 PM Boyang Chen <
> reluctanthero...@gmail.com>
>>> wrote:
>>>
 Thanks Walker. Some minor comments:

 1. Could you add a reference to the localThreadMetadata method in the KIP?
 2. Could you make the code block a Java template, such that
 TaskMetadata.java could be the template title? Also it would be good to
 add some meta comments about the newly added functions.

[jira] [Created] (KAFKA-12402) client_sasl_mechanism should be an explicit list instead of a .csv string

2021-03-02 Thread Ron Dagostino (Jira)
Ron Dagostino created KAFKA-12402:
-

 Summary: client_sasl_mechanism should be an explicit list instead 
of a .csv string
 Key: KAFKA-12402
 URL: https://issues.apache.org/jira/browse/KAFKA-12402
 Project: Kafka
  Issue Type: Improvement
  Components: system tests
Reporter: Ron Dagostino


The SecurityConfig and the KafkaService system test classes both accept a 
client_sasl_mechanism parameter.  This is typically a single value (e.g. 
PLAIN), but DelegationTokenTest sets self.kafka.client_sasl_mechanism = 
'GSSAPI,SCRAM-SHA-256'.  If we need to support a list of mechanisms then the 
parameter should be an explicit list instead of a .csv string.
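For illustration, a small helper along these lines (hypothetical name, not part of the actual system-test code) could keep the parameter backward compatible while making the explicit-list form the norm:

```python
def normalize_sasl_mechanisms(value):
    """Accept either an explicit list or a legacy .csv string and
    return a list of mechanism names, e.g. ['GSSAPI', 'SCRAM-SHA-256']."""
    if isinstance(value, str):
        # Legacy form: 'GSSAPI,SCRAM-SHA-256'
        return [m.strip() for m in value.split(',') if m.strip()]
    return list(value)
```

Both `'GSSAPI,SCRAM-SHA-256'` and `['GSSAPI', 'SCRAM-SHA-256']` would then normalize to the same list, so existing tests keep working while new ones can pass lists.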



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12394) Consider topic id existence and authorization errors

2021-03-02 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-12394.
-
Fix Version/s: 2.8.0
   Resolution: Fixed

> Consider topic id existence and authorization errors
> 
>
> Key: KAFKA-12394
> URL: https://issues.apache.org/jira/browse/KAFKA-12394
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Priority: Major
> Fix For: 2.8.0
>
>
> We have historically had logic in the api layer to avoid leaking the 
> existence or non-existence of topics to clients which are not authorized to 
> describe them. The way we have done this is to always authorize the topic 
> name first before checking existence.
> Topic ids make this more difficult because the resource (ie the topic name) 
> has to be derived. This means we have to check existence of the topic first. 
> If the topic does not exist, then our hands are tied and we have to return 
> UNKNOWN_TOPIC_ID. If the topic does exist, then we need to check if the 
> client is authorized to describe it. The question, then, is what we should 
> do if the client is not authorized.
> The current behavior is to return UNKNOWN_TOPIC_ID. The downside is that this 
> is misleading and forces the client to retry even though they are doomed to 
> hit the same error. However, the client should generally handle this by 
> requesting Metadata using the topic name that they are interested in, which 
> would give them a chance to see the topic authorization error. Basically the 
> fact that you need describe permission in the first place to discover the 
> topic id makes this an unlikely scenario.
> There is an argument to be made for TOPIC_AUTHORIZATION_FAILED as well. 
> Basically we could take the stance that we do not care about leaking the 
> existence of topic IDs since they do not reveal anything about the underlying 
> topic. Additionally, there is little likelihood of a user discovering a valid 
> UUID by accident or even through brute force. The benefit of this is that 
> users get a clear error for cases where a topic Id may have been discovered 
> through some external means. For example, an administrator finds a topic ID 
> in the logging and attempts to delete it using the new `deleteTopicsWithIds` 
> Admin API.
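The trade-off described above can be sketched as a simple decision function (an illustrative model only, not the broker's actual api-layer code; the error names mirror the protocol error codes):

```python
UNKNOWN_TOPIC_ID = "UNKNOWN_TOPIC_ID"

def resolve_topic_by_id(topic_id, id_to_name, authorized_topics):
    """Resolve a topic id to its name.

    With topic ids the resource (the topic name) must be derived first,
    so existence is necessarily checked before authorization.
    """
    name = id_to_name.get(topic_id)
    if name is None:
        # Unknown id: there is no name to authorize against.
        return UNKNOWN_TOPIC_ID
    if name not in authorized_topics:
        # Current behavior: hide existence by returning UNKNOWN_TOPIC_ID.
        # The alternative discussed is TOPIC_AUTHORIZATION_FAILED.
        return UNKNOWN_TOPIC_ID
    return name
```

The second branch is the one under debate: swapping its return value for an authorization error would give a clearer signal at the cost of confirming that the id exists.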





[jira] [Created] (KAFKA-12401) Flaky Test FeatureCommandTest#testUpgradeAllFeaturesSuccess

2021-03-02 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-12401:
---

 Summary: Flaky Test 
FeatureCommandTest#testUpgradeAllFeaturesSuccess
 Key: KAFKA-12401
 URL: https://issues.apache.org/jira/browse/KAFKA-12401
 Project: Kafka
  Issue Type: Test
  Components: admin, unit tests
Reporter: Matthias J. Sax


{quote}kafka.admin.UpdateFeaturesException: 2 feature updates failed! at 
kafka.admin.FeatureApis.maybeApplyFeatureUpdates(FeatureCommand.scala:289) at 
kafka.admin.FeatureApis.upgradeAllFeatures(FeatureCommand.scala:191) at 
kafka.admin.FeatureCommandTest.$anonfun$testUpgradeAllFeaturesSuccess$3(FeatureCommandTest.scala:134){quote}
STDOUT
{quote}[Add] Feature: feature_1 ExistingFinalizedMaxVersion: - 
NewFinalizedMaxVersion: 3 Result: OK [Add] Feature: feature_2 
ExistingFinalizedMaxVersion: - NewFinalizedMaxVersion: 5 Result: OK [Add] 
Feature: feature_1 ExistingFinalizedMaxVersion: - NewFinalizedMaxVersion: 3 
Result: OK [Add] Feature: feature_2 ExistingFinalizedMaxVersion: - 
NewFinalizedMaxVersion: 5 Result: OK{quote}





Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #528

2021-03-02 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #585

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12268: Implement task idling semantics via currentLag API 
(#10137)


--
[...truncated 3.67 MB...]

SocketServerTest > remoteCloseSendFailure() STARTED

SocketServerTest > remoteCloseSendFailure() PASSED

SocketServerTest > testBrokerSendAfterChannelClosedUpdatesRequestMetrics() 
STARTED

SocketServerTest > testBrokerSendAfterChannelClosedUpdatesRequestMetrics() 
PASSED

SocketServerTest > 
testControlPlaneTakePrecedenceOverInterBrokerListenerAsPrivilegedListener() 
STARTED

SocketServerTest > 
testControlPlaneTakePrecedenceOverInterBrokerListenerAsPrivilegedListener() 
PASSED

SocketServerTest > testNoOpAction() STARTED

SocketServerTest > testNoOpAction() PASSED

SocketServerTest > simpleRequest() STARTED

SocketServerTest > simpleRequest() PASSED

SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress() STARTED

SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress() PASSED

SocketServerTest > testIdleConnection() STARTED

SocketServerTest > testIdleConnection() PASSED

SocketServerTest > remoteCloseWithoutBufferedReceives() STARTED

SocketServerTest > remoteCloseWithoutBufferedReceives() PASSED

SocketServerTest > remoteCloseWithCompleteAndIncompleteBufferedReceives() 
STARTED

SocketServerTest > remoteCloseWithCompleteAndIncompleteBufferedReceives() PASSED

SocketServerTest > testZeroMaxConnectionsPerIp() STARTED

SocketServerTest > testZeroMaxConnectionsPerIp() PASSED

SocketServerTest > testClientInformationWithLatestApiVersionsRequest() STARTED

SocketServerTest > testClientInformationWithLatestApiVersionsRequest() PASSED

SocketServerTest > testMetricCollectionAfterShutdown() STARTED

SocketServerTest > testMetricCollectionAfterShutdown() PASSED

SocketServerTest > testSessionPrincipal() STARTED

SocketServerTest > testSessionPrincipal() PASSED

SocketServerTest > configureNewConnectionException() STARTED

SocketServerTest > configureNewConnectionException() PASSED

SocketServerTest > testSaslReauthenticationFailureWithKip152SaslAuthenticate() 
STARTED

SocketServerTest > testSaslReauthenticationFailureWithKip152SaslAuthenticate() 
PASSED

SocketServerTest > testMaxConnectionsPerIpOverrides() STARTED

SocketServerTest > testMaxConnectionsPerIpOverrides() PASSED

SocketServerTest > processNewResponseException() STARTED

SocketServerTest > processNewResponseException() PASSED

SocketServerTest > remoteCloseWithIncompleteBufferedReceive() STARTED

SocketServerTest > remoteCloseWithIncompleteBufferedReceive() PASSED

SocketServerTest > testStagedListenerShutdownWhenConnectionQueueIsFull() STARTED

SocketServerTest > testStagedListenerShutdownWhenConnectionQueueIsFull() PASSED

SocketServerTest > testStagedListenerStartup() STARTED

SocketServerTest > testStagedListenerStartup() PASSED

SocketServerTest > testConnectionRateLimit() STARTED

SocketServerTest > testConnectionRateLimit() PASSED

SocketServerTest > testConnectionRatePerIp() STARTED

SocketServerTest > testConnectionRatePerIp() PASSED

SocketServerTest > processCompletedSendException() STARTED

SocketServerTest > processCompletedSendException() PASSED

SocketServerTest > processDisconnectedException() STARTED

SocketServerTest > processDisconnectedException() PASSED

SocketServerTest > closingChannelWithBufferedReceives() STARTED

SocketServerTest > closingChannelWithBufferedReceives() PASSED

SocketServerTest > sendCancelledKeyException() STARTED

SocketServerTest > sendCancelledKeyException() PASSED

SocketServerTest > processCompletedReceiveException() STARTED

SocketServerTest > processCompletedReceiveException() PASSED

SocketServerTest > testControlPlaneAsPrivilegedListener() STARTED

SocketServerTest > testControlPlaneAsPrivilegedListener() PASSED

SocketServerTest > closingChannelSendFailure() STARTED

SocketServerTest > closingChannelSendFailure() PASSED

SocketServerTest > idleExpiryWithBufferedReceives() STARTED

SocketServerTest > idleExpiryWithBufferedReceives() PASSED

SocketServerTest > testSocketsCloseOnShutdown() STARTED

SocketServerTest > testSocketsCloseOnShutdown() PASSED

SocketServerTest > 
testNoOpActionResponseWithThrottledChannelWhereThrottlingAlreadyDone() STARTED

SocketServerTest > 
testNoOpActionResponseWithThrottledChannelWhereThrottlingAlreadyDone() PASSED

SocketServerTest > pollException() STARTED

SocketServerTest > pollException() PASSED

SocketServerTest > closingChannelWithBufferedReceivesFailedSend() STARTED

SocketServerTest > closingChannelWithBufferedReceivesFailedSend() PASSED

SocketServerTest > remoteCloseWithBufferedReceives() STARTED

SocketServerTest > remoteCloseWithBufferedReceives() PASSED

SocketServerTest > testThrottledSocketsClosedOnShutdown() STARTED

SocketServerTest > testThrottledSocketsClosedOnShutdown() PASSED

SocketServerTest > 

Re: [DISCUSS] KIP-715: Expose Committed offset in streams

2021-03-02 Thread Walker Carlson
Okay, we can document that if the state is rebalancing, a task could be
between instances and so not show up for one localThreadMetadata call. But
this should not cause a problem for repeated calls.
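For example, a health check built on repeated polls could look roughly like this (a sketch only; the task ids and snapshot shape are hypothetical, standing in for what localThreadMetadata would report):

```python
def missing_tasks(polls, expected_tasks):
    """Given several localThreadMetadata-style snapshots of task ids,
    report the tasks that appeared in none of them.

    A task absent from a single poll (e.g. mid hand-off during a
    rebalance) is tolerated; only tasks missing from every poll are
    flagged.
    """
    observed = set()
    for snapshot in polls:
        observed.update(snapshot)
    return sorted(expected_tasks - observed)

# '0_1' was mid hand-off during the first poll but reappeared:
polls = [{'0_0'}, {'0_0', '0_1'}]
assert missing_tasks(polls, {'0_0', '0_1'}) == []
```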

Bruno, to your questions: the endOffset is like the consumer's
highWatermark and does not require a remote call. It seems its name is
confusing, and I should change it from endOffset to highWatermark to match
the consumer.

walker

On Tue, Mar 2, 2021 at 4:43 AM Bruno Cadonna  wrote:

> Hi Walker,
>
> Thank you for the KIP!
>
> I somehow agree that we should document that some tasks may be missing.
>
> I have one question/comment. As far as I understand, your KIP adds two
> methods that return data that is actually hosted on the brokers, namely
> committedOffsets() and endOffsets(). Thus, we need a remote call to
> fetch the data and consequently the cost of calling
> localThreadMetaData() might increase substantially. I understand, that
> for committedOffsets(), we could avoid the remote call by maintaining
> the committedOffsets() locally, but can we also avoid the remote call
> for endOffsets()? Should we allow users to pass a parameter to
> localThreadMetaData() that skips the metadata that needs remote calls to
> keep the costs for use cases that do not need the end offsets low?
>
> Best,
> Bruno
>
> On 02.03.21 02:18, Matthias J. Sax wrote:
> >> but the user should
> >> not rely on all tasks being returned at any given time to begin with
> >> since it's possible we are in between revoking and re-assigning a
> >> partition.
> >
> > Exactly. That is what I meant: the "hand off" phase of partitions during
> > a rebalance. During this phase, some tasks are "missing" if you
> > aggregate the information globally. My point was (even if it might be
> > obvious to us) that it seems to be worth pointing out in the KIPs and in
> > the docs.
> >
> > I meant "partial information" from a global POV (not partial for a
> > single local instance).
> >
> >> Also I mention that they return the highest value they had seen
> >> so far for any tasks they have assigned to them.
> >
> > For the shutdown case maybe, but after a task is closed its metadata
> > should not be returned any longer IMHO.
> >
> >
> > -Matthias
> >
> > On 3/1/21 4:46 PM, Walker Carlson wrote:
> >> I updated to use Optional, good idea Matthias.
> >>
> >> For the localThreadMetadata, it could already be called while running
> >> a rebalance. Also I mention that they return the highest value they
> >> had seen so far for any tasks they have assigned to them. I thought it
> >> would be useful to see the TaskMetadata while the Threads were
> >> shutting down. I think that there shouldn't really be partial
> >> information. If you think this should be clarified better let me know.
> >>
> >> walker
> >>
> >> On Mon, Mar 1, 2021 at 3:45 PM Sophie Blee-Goldman wrote:
> >>
> >>> Can you clarify your second question Matthias? If this is queried
> >>> during a cooperative rebalance, it should return the tasks as usual.
> >>> If the user is using eager rebalancing then this will not return any
> >>> tasks, but the user should not rely on all tasks being returned at
> >>> any given time to begin with since it's possible we are in between
> >>> revoking and re-assigning a partition.
> >>>
> >>> What does "partial information" mean?
> >>>
> >>> (btw I agree that an Optional makes sense for
> >>> timeCurrentIdlingStarted())
> >>>
> >>> On Mon, Mar 1, 2021 at 11:46 AM Matthias J. Sax wrote:
> >>>
>  Thanks for updating the KIP, Walker.
> 
>  About `timeCurrentIdlingStarted()`: should we return an `Optional`
>  instead of `-1` if a task is not idling?
> 
> 
>  As we allow calling `localThreadMetadata()` at any time, could it be that
>  we report partial information during a rebalance? If yes, this should be
>  pointed out, because if one wants to implement a health check this needs
>  to be taken into account.
> 
>  -Matthias
> 
> 
>  On 2/27/21 11:32 AM, Walker Carlson wrote:
> > Sure thing Boyang,
> >
> > 1) it is in proposed changes. I expanded on it a bit more now.
> > 2) done
> > 3) and done :)
> >
> > thanks for the suggestions,
> > walker
> >
> > On Fri, Feb 26, 2021 at 3:10 PM Boyang Chen <
> >>> reluctanthero...@gmail.com>
> > wrote:
> >
> >> Thanks Walker. Some minor comments:
> >>
> >> 1. Could you add a reference to localThreadMetadata method in the KIP?
> >> 2. Could you make the code block as a java template, such that
> >> TaskMetadata.java could be as the template title? Also it would be good
> >> to add some meta comments about the newly added functions.
> >> 3. Could you write more details about rejected alternatives? Just as
> >> why we don't choose to expose as metrics, and how a new method on
> >> KStream is not favorable. These would be valuable when 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #558

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12268: Implement task idling semantics via currentLag API 
(#10137)


--
[...truncated 3.65 MB...]

ControllerChannelManagerTest > testStopReplicaGroupsByBroker() PASSED

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataDoesNotIncludePartitionsWithoutLeaderAndIsr() PASSED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
STARTED

ControllerChannelManagerTest > testMixedDeleteAndNotDeleteStopReplicaRequests() 
PASSED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
STARTED

ControllerChannelManagerTest > testLeaderAndIsrInterBrokerProtocolVersion() 
PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestSent() PASSED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
STARTED

ControllerChannelManagerTest > testUpdateMetadataRequestDuringTopicDeletion() 
PASSED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() STARTED

ControllerChannelManagerTest > 
testUpdateMetadataIncludesLiveOrShuttingDownBrokers() PASSED

ControllerChannelManagerTest > testStopReplicaRequestSent() STARTED

ControllerChannelManagerTest > testStopReplicaRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted() PASSED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() STARTED

ControllerChannelManagerTest > testLeaderAndIsrRequestSent() PASSED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() STARTED

ControllerChannelManagerTest > 
testStopReplicaRequestWithoutDeletePartitionWhileTopicDeletionStarted() PASSED

FeatureZNodeTest > testDecodeFailOnInvalidFeatures() STARTED

FeatureZNodeTest > testDecodeFailOnInvalidFeatures() PASSED

FeatureZNodeTest > testEncodeDecode() STARTED

FeatureZNodeTest > testEncodeDecode() PASSED

FeatureZNodeTest > testDecodeSuccess() STARTED

FeatureZNodeTest > testDecodeSuccess() PASSED

FeatureZNodeTest > testDecodeFailOnInvalidVersionAndStatus() STARTED

FeatureZNodeTest > testDecodeFailOnInvalidVersionAndStatus() PASSED

ExtendedAclStoreTest > shouldHaveCorrectPaths() STARTED

ExtendedAclStoreTest > shouldHaveCorrectPaths() PASSED

ExtendedAclStoreTest > shouldRoundTripChangeNode() STARTED

ExtendedAclStoreTest > shouldRoundTripChangeNode() PASSED

ExtendedAclStoreTest > shouldThrowFromEncodeOnLiteral() STARTED

ExtendedAclStoreTest > shouldThrowFromEncodeOnLiteral() PASSED

ExtendedAclStoreTest > shouldThrowIfConstructedWithLiteral() STARTED

ExtendedAclStoreTest > shouldThrowIfConstructedWithLiteral() PASSED

ExtendedAclStoreTest > shouldWriteChangesToTheWritePath() STARTED

ExtendedAclStoreTest > shouldWriteChangesToTheWritePath() PASSED

ExtendedAclStoreTest > shouldHaveCorrectPatternType() STARTED

ExtendedAclStoreTest > shouldHaveCorrectPatternType() PASSED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@77adbccc, value = [B@318e345c), properties=Map(print.value -> false), 
expected= STARTED

DefaultMessageFormatterTest > [1] name=print nothing, 
record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, 
offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value 
size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = 
[118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), 
key = [B@77adbccc, value = [B@318e345c), properties=Map(print.value -> false), 
expected= PASSED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, headers = 
RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 49]), 
RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), key = 
[B@449bbe0b, value = [B@39d656ba), properties=Map(print.key -> true, 
print.value -> false), expected=someKey
 STARTED

DefaultMessageFormatterTest > [2] name=print key, record=ConsumerRecord(topic = 
someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, 
serialized key size = 0, serialized value size = 0, headers = 
RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 

[GitHub] [kafka-site] miguno commented on pull request #334: KAFKA-12393: Document multi-tenancy considerations

2021-03-02 Thread GitBox


miguno commented on pull request #334:
URL: https://github.com/apache/kafka-site/pull/334#issuecomment-789009875


   PR updated with reviewer feedback.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] miguno commented on a change in pull request #334: KAFKA-12393: Document multi-tenancy considerations

2021-03-02 Thread GitBox


miguno commented on a change in pull request #334:
URL: https://github.com/apache/kafka-site/pull/334#discussion_r585572446



##
File path: 27/ops.html
##
@@ -1090,7 +1090,157 @@ 
 
 
-  6.4 Kafka Configuration
+  6.4 Multi-Tenancy
+
+  Multi-Tenancy 
Overview
+
+  
+As a highly scalable event streaming platform, Kafka is used by many users 
as their central nervous system, connecting in real-time a wide range of 
different systems and applications from various teams and lines of businesses. 
Such multi-tenant cluster environments command proper control and management to 
ensure the peaceful coexistence of these different needs. This section 
highlights features and best practices to set up such shared environments, 
which should help you operate clusters that meet SLAs/OLAs and that minimize 
potential collateral damage caused by "noisy neighbors".
+  
+
+  
+Multi-tenancy is a many-sided subject, including but not limited to:
+  
+
+  
+Creating user spaces for tenants (sometimes called namespaces)
+Configuring topics with data retention policies and more
+Securing topics and clusters with encryption, authentication, and 
authorization
+Isolating tenants with quotas and rate limits
+Monitoring and metering
+Inter-cluster data sharing (cf. geo-replication)
+  
+
+  Creating User 
Spaces (Namespaces) For Tenants With Topic Naming
+
+  
+Kafka administrators operating a multi-tenant cluster typically need to 
define user spaces for each tenant. For the purpose of this section, "user 
spaces" are a collection of topics, which are grouped together under the 
management of a single entity or user.
+  
+
+  
+In Kafka, the main unit of data is the topic. Users can create and name 
each topic. They can also delete them, but it is not possible to rename a topic 
directly. Instead, to rename a topic, the user must create a new topic, move 
the messages from the original topic to the new, and then delete the original. 
With this in mind, it is recommended to define logical spaces, based on a 
hierarchical topic naming structure. This setup can then be combined with 
security features, such as prefixed ACLs, to isolate different spaces and 
tenants, while also minimizing the administrative overhead for securing the 
data in the cluster.
+  
+
+  
+These logical user spaces can be grouped in different ways, and the 
concrete choice depends on how your organization prefers to use your Kafka 
clusters. The most common groupings are as follows.
+  
+
+  
+By team or organizational unit: Here, the team is the main 
aggregator. In an organization where teams are the main user of the Kafka 
infrastructure, this might be the best grouping.
+  
+
+  
+Example topic naming structure:
+  
+
+  
+
organization.team.dataset.event-name(e.g., "acme.infosec.telemetry.logins")
+  
+
+  
+By project or product: Here, a team manages more than one 
project. Their credentials will be different for each project, so all the 
controls and settings will always be project related.
+  
+
+  
+Example topic naming structure:
+  
+
+  
+project.product.event-name(e.g., "mobility.payments.suspicious")
+  
+
+  
+Certain information should normally not be put in a topic name, such as 
information that is likely to change over time (e.g., the name of the intended 
consumer) or that is a technical detail or metadata that is available elsewhere 
(e.g., the topic's partition count and other configuration settings).
+  
+
+  
+  To enforce a topic naming structure, it is useful to disable the Kafka 
feature to auto-create topics on demand by setting 
auto.create.topics.enable=false in the broker configuration. This 
stops users and applications from deliberately or inadvertently creating topics 
with arbitrary names, thus violating the naming structure. Then, you may want 
to put in place your own organizational process for controlled, yet automated 
creation of topics according to your naming convention, using scripting or your 
favorite automation toolkit.
+  
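As a sketch (not part of the PR text), such a controlled creation process could validate requested names with a check along these lines, assuming the organization.team.dataset.event-name structure from the example above:

```python
import re

# Four dot-separated segments of lowercase alphanumerics and dashes,
# e.g. "acme.infosec.telemetry.logins". Adjust to your own convention.
TOPIC_NAME_PATTERN = re.compile(r"^[a-z0-9-]+(\.[a-z0-9-]+){3}$")

def is_valid_topic_name(name):
    """Return True if the requested topic name follows the convention."""
    return TOPIC_NAME_PATTERN.match(name) is not None
```

The automation that actually creates topics would run this check first and reject non-conforming requests.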
+
+  Configuring 
Topics: Data Retention And More
+
+  
+Kafka's configuration is very flexible due to its fine granularity, and it 
supports a plethora of per-topic configuration 
settings to help administrators set up multi-tenant clusters. For example, 
administrators often need to define data retention policies to control how much 
and/or for how long data will be stored in a topic, with settings such as retention.bytes (size) and retention.ms (time). This limits storage consumption 
within the cluster, and helps comply with legal requirements such as GDPR.
+  
+
+  Securing Clusters and 
Topics: Authentication, Authorization, Encryption
+
+  
+  Because the documentation has a dedicated chapter on security that applies to any Kafka deployment, this 
section focuses on additional considerations for multi-tenant environments.
+  
+
+  
+Security settings for Kafka fall into three main 

[GitHub] [kafka-site] miguno commented on a change in pull request #334: KAFKA-12393: Document multi-tenancy considerations

2021-03-02 Thread GitBox


miguno commented on a change in pull request #334:
URL: https://github.com/apache/kafka-site/pull/334#discussion_r585640800



##
File path: 27/ops.html
##
@@ -1090,7 +1090,157 @@ 
 
 
-  6.4 Kafka Configuration
+  
+  To enforce a topic naming structure, it is useful to disable the Kafka 
feature to auto-create topics on demand by setting 
auto.create.topics.enable=false in the broker configuration. This 
stops users and applications from deliberately or inadvertently creating topics 
with arbitrary names, thus violating the naming structure. Then, you may want 
to put in place your own organizational process for controlled, yet automated 
creation of topics according to your naming convention, using scripting or your 
favorite automation toolkit.

Review comment:
   Ack and updated.









[GitHub] [kafka-site] miguno commented on a change in pull request #334: KAFKA-12393: Document multi-tenancy considerations

2021-03-02 Thread GitBox


miguno commented on a change in pull request #334:
URL: https://github.com/apache/kafka-site/pull/334#discussion_r585575690



##
File path: 27/ops.html
##
@@ -1090,7 +1090,157 @@ 
 
 
-  6.4 Kafka Configuration
+  6.4 Multi-Tenancy
+
+  Multi-Tenancy 
Overview
+
+  
+As a highly scalable event streaming platform, Kafka is used by many users 
as their central nervous system, connecting in real-time a wide range of 
different systems and applications from various teams and lines of businesses. 
Such multi-tenant cluster environments command proper control and management to 
ensure the peaceful coexistence of these different needs. This section 
highlights features and best practices to set up such shared environments, 
which should help you operate clusters that meet SLAs/OLAs and that minimize 
potential collateral damage caused by "noisy neighbors".
+  
+
+  
+Multi-tenancy is a many-sided subject, including but not limited to:
+  
+
+  
+Creating user spaces for tenants (sometimes called namespaces)
+Configuring topics with data retention policies and more
+Securing topics and clusters with encryption, authentication, and 
authorization
+Isolating tenants with quotas and rate limits
+Monitoring and metering
+Inter-cluster data sharing (cf. geo-replication)
+  
+
+  Creating User 
Spaces (Namespaces) For Tenants With Topic Naming
+
+  
+Kafka administrators operating a multi-tenant cluster typically need to 
define user spaces for each tenant. For the purpose of this section, "user 
spaces" are a collection of topics, which are grouped together under the 
management of a single entity or user.
+  
+
+  
+In Kafka, the main unit of data is the topic. Users can create and name each topic. They can also delete topics, but a topic cannot be renamed directly. Instead, to rename a topic, the user must create a new topic, move the messages from the original topic to the new one, and then delete the original. With this in mind, it is recommended to define logical spaces based on a hierarchical topic naming structure. This setup can then be combined with security features, such as prefixed ACLs, to isolate different spaces and tenants, while also minimizing the administrative overhead for securing the data in the cluster.
+  
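For instance, a prefixed ACL could grant a team's service account access to every topic under its namespace in one rule. A hedged sketch (the principal name, topic prefix, and broker address below are placeholders, not part of the original docs):

```shell
# Allow the hypothetical 'infosec-svc' principal to read and write all
# topics whose names start with "acme.infosec." (one prefixed ACL instead
# of one literal ACL per topic).
bin/kafka-acls.sh --bootstrap-server localhost:9092 \
  --add --allow-principal User:infosec-svc \
  --operation Read --operation Write \
  --topic acme.infosec. --resource-pattern-type prefixed
```

Because the pattern is prefixed, topics created later under the same namespace are covered automatically, which keeps ACL management overhead low as tenants add topics.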
+
+  
+These logical user spaces can be grouped in different ways, and the concrete choice depends on how your organization prefers to use its Kafka clusters. The most common groupings are as follows.
+  
+
+  
+By team or organizational unit: Here, the team is the main aggregator. In an organization where teams are the main users of the Kafka infrastructure, this might be the best grouping.
+  
+
+  
+Example topic naming structure:
+  
+
+  
+organization.team.dataset.event-name (e.g., "acme.infosec.telemetry.logins")
+  
+
+  
+By project or product: Here, a team manages more than one project. Their credentials will be different for each project, so all controls and settings will always be project-related.
+  
+
+  
+Example topic naming structure:
+  
+
+  
+project.product.event-name (e.g., "mobility.payments.suspicious")
+  
+
+  
+Certain information should normally not be put in a topic name, such as 
information that is likely to change over time (e.g., the name of the intended 
consumer) or that is a technical detail or metadata that is available elsewhere 
(e.g., the topic's partition count and other configuration settings).
+  
+
+  
+  To enforce a topic naming structure, it is useful to disable Kafka's on-demand topic auto-creation by setting auto.create.topics.enable=false in the broker configuration. This prevents users and applications from deliberately or inadvertently creating topics with arbitrary names that violate the naming structure. You may then want to put in place your own organizational process for controlled, yet automated, creation of topics according to your naming convention, using scripting or your favorite automation toolkit.
+  
+
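As one illustration of such an automated process, the name check in a topic-provisioning script could be sketched in Python as follows. The regular expression and helper name are assumptions chosen to match the organization.team.dataset.event-name example above, not anything defined by Kafka itself:

```python
import re

# Assumed convention: organization.team.dataset.event-name, with each of
# the four segments being lowercase alphanumerics and hyphens.
TOPIC_NAME_PATTERN = re.compile(
    r"^[a-z0-9-]+\.[a-z0-9-]+\.[a-z0-9-]+\.[a-z0-9-]+$"
)

def is_valid_topic_name(name: str) -> bool:
    """Return True if the proposed topic name follows the naming convention."""
    return bool(TOPIC_NAME_PATTERN.match(name))

# A provisioning script would run this check before invoking the Admin API
# or kafka-topics.sh to actually create the topic.
print(is_valid_topic_name("acme.infosec.telemetry.logins"))  # True
print(is_valid_topic_name("my-random-topic"))                # False
```

Rejecting invalid names in tooling, combined with auto.create.topics.enable=false on the brokers, keeps the namespace structure intact without manual review of every topic request.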
+  Configuring 
Topics: Data Retention And More
+
+  
+Kafka's configuration is very flexible due to its fine granularity, and it supports a plethora of per-topic configuration settings to help administrators set up multi-tenant clusters. For example, administrators often need to define data retention policies to control how much and/or for how long data will be stored in a topic, with settings such as retention.bytes (size) and retention.ms (time). This limits storage consumption within the cluster and helps comply with legal requirements such as GDPR.
+  
+
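A retention policy could be applied per topic with the kafka-configs.sh tool, for example. A sketch under assumptions (the topic name, broker address, and concrete limits are placeholders):

```shell
# Retain at most ~1 GiB (1073741824 bytes) or 7 days (604800000 ms) of
# data in this hypothetical topic, whichever limit is reached first.
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name acme.infosec.telemetry.logins \
  --alter --add-config retention.bytes=1073741824,retention.ms=604800000
```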
+  Securing Clusters and 
Topics: Authentication, Authorization, Encryption
+
+  
+  Because the documentation has a dedicated chapter on security that applies to any Kafka deployment, this 
section focuses on additional considerations for multi-tenant environments.
+  
+
+  
+Security settings for Kafka fall into three main 

[GitHub] [kafka-site] miguno commented on a change in pull request #334: KAFKA-12393: Document multi-tenancy considerations

2021-03-02 Thread GitBox


miguno commented on a change in pull request #334:
URL: https://github.com/apache/kafka-site/pull/334#discussion_r585572446



##
File path: 27/ops.html
##
@@ -1090,7 +1090,157 @@ 
 
 
-  6.4 Kafka Configuration
+  6.4 Multi-Tenancy

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #557

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Format the revoking active log output in 
`StreamsPartitionAssignor` (#10242)

[github] MINOR: correct the error message of validating uint32 (#10193)

[github] MINOR: Time and log producer state recovery phases (#10241)



Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #584

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Time and log producer state recovery phases (#10241)



Re: [DISCUSS] KIP-715: Expose Committed offset in streams

2021-03-02 Thread Bruno Cadonna

Hi Walker,

Thank you for the KIP!

I somehow agree that we should document that some tasks may be missing.

I have one question/comment. As far as I understand, your KIP adds two 
methods that return data that is actually hosted on the brokers, namely 
committedOffsets() and endOffsets(). Thus, we need a remote call to 
fetch the data and consequently the cost of calling 
localThreadMetaData() might increase substantially. I understand, that 
for committedOffsets(), we could avoid the remote call by maintaining 
the committedOffsets() locally, but can we also avoid the remote call 
for endOffsets()? Should we allow users to pass a parameter to 
localThreadMetaData() that skips the metadata that needs remote calls to 
keep the costs for use cases that do not need the end offsets low?


Best,
Bruno

On 02.03.21 02:18, Matthias J. Sax wrote:

but the user should
not rely on all tasks being returned at any given time to begin with since
it's possible we are in between revoking and re-assigning a partition.


Exactly. That is what I meant: the "hand off" phase of partitions during
a rebalance. During this phase, some tasks are "missing" if you
aggregate the information globally. My point was (even if it might be
obvious to us) that it seems to be worth pointing out in the KIPs and in
the docs.

I meant "partial information" from a global POV (not partial for a
single local instance).


Also I mention that they return the highest value they had seen
so far for any tasks they have assigned to them.


For the shutdown case maybe, but after a task is closed its metadata
should not be returned any longer IMHO.


-Matthias

On 3/1/21 4:46 PM, Walker Carlson wrote:

I updated to use Optional, good idea Matthias.

For the localThreadMetadata, it could already be called running a
rebalance. Also I mention that they return the highest value they had seen
so far for any tasks they have assigned to them. I thought it would be
useful to see the TaskMetadata while the Threads were shutting down. I
think that there shouldn't really be partial information. If you think this
should be clarified better let me know.

walker

On Mon, Mar 1, 2021 at 3:45 PM Sophie Blee-Goldman 
wrote:


Can you clarify your second question Matthias? If this is queried during
a cooperative rebalance, it should return the tasks as usual. If the user
is
using eager rebalancing then this will not return any tasks, but the user
should
not rely on all tasks being returned at any given time to begin with since
it's
possible we are in between revoking and re-assigning a partition.

What does "partial information" mean?

(btw I agree that an Optional makes sense for timeCurrentIdlingStarted())

On Mon, Mar 1, 2021 at 11:46 AM Matthias J. Sax  wrote:


Thanks for updating the KIP, Walker.

About, `timeCurrentIdlingStarted()`: should we return an `Optional`
instead of `-1` if a task is not idling.


As we allow to call `localThreadMetadata()` any time, could it be that
we report partial information during a rebalance? If yes, this should be
pointed out, because if one want to implement a health check this needs
to be taken into account.

-Matthias


On 2/27/21 11:32 AM, Walker Carlson wrote:

Sure thing Boyang,

1) it is in proposed changes. I expanded on it a bit more now.
2) done
3) and done :)

thanks for the suggestions,
walker

On Fri, Feb 26, 2021 at 3:10 PM Boyang Chen <

reluctanthero...@gmail.com>

wrote:


Thanks Walker. Some minor comments:

1. Could you add a reference to localThreadMetadata method in the KIP?
2. Could you make the code block as a java template, such that
TaskMetadata.java could be as the template title? Also it would be

good

to

add some meta comments about the newly added functions.
3. Could you write more details about rejected alternatives? Just as

why we

don't choose to expose as metrics, and how a new method on KStream is

not

favorable. These would be valuable when we look back on our design
decisions.

On Fri, Feb 26, 2021 at 11:23 AM Walker Carlson <

wcarl...@confluent.io>

wrote:


I understand now. I think that is a valid concern but I think it is

best

solved but having an external service verify through streams. As this

KIP

is now just adding fields to TaskMetadata to be returned in the
threadMetadata I am going to say that is out of scope.

That seems to be the last concern. If there are no others I will put

this

up for a vote soon.

walker

On Thu, Feb 25, 2021 at 12:35 PM Boyang Chen <

reluctanthero...@gmail.com


wrote:


For the 3rd point, yes, what I'm proposing is an edge case. For

example,

when we have 4 tasks [0_0, 0_1, 1_0, 1_1], and a bug in rebalancing

logic

causing no one gets 1_1 assigned. Then the health check service will

only

see 3 tasks [0_0, 0_1, 1_0] reporting progress normally while not

paying

attention to 1_1. What I want to expose is a "logical global" view

of

all

the tasks through the stream instance, since each instance gets the
assigned topology and should be able to 

[GitHub] [kafka-site] rajinisivaram commented on a change in pull request #334: KAFKA-12393: Document multi-tenancy considerations

2021-03-02 Thread GitBox


rajinisivaram commented on a change in pull request #334:
URL: https://github.com/apache/kafka-site/pull/334#discussion_r585530580




Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #527

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Format the revoking active log output in 
`StreamsPartitionAssignor` (#10242)

[github] MINOR: correct the error message of validating uint32 (#10193)

[github] MINOR: Time and log producer state recovery phases (#10241)


--
[...truncated 3.66 MB...]
LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterDownConversionV1ToV0Compressed() 
PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2Compressed() 
PASSED

LogValidatorTest > testNonCompressedV1() STARTED

LogValidatorTest > testNonCompressedV1() PASSED

LogValidatorTest > testNonCompressedV2() STARTED

LogValidatorTest > testNonCompressedV2() PASSED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
STARTED

LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed() 
PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV1() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV1() PASSED

LogValidatorTest > testInvalidCreateTimeCompressedV2() STARTED

LogValidatorTest > testInvalidCreateTimeCompressedV2() PASSED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() STARTED

LogValidatorTest > testNonIncreasingOffsetRecordBatchHasMetricsLogged() PASSED

LogValidatorTest > testRecompressionV1() STARTED

LogValidatorTest > testRecompressionV1() PASSED

LogValidatorTest > testRecompressionV2() STARTED

LogValidatorTest > testRecompressionV2() PASSED

ProducerStateManagerTest > testSkipEmptyTransactions() STARTED

ProducerStateManagerTest > testSkipEmptyTransactions() PASSED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() STARTED

ProducerStateManagerTest > testControlRecordBumpsProducerEpoch() PASSED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
STARTED

ProducerStateManagerTest > testProducerSequenceWithWrapAroundBatchRecord() 
PASSED

ProducerStateManagerTest > testCoordinatorFencing() STARTED

ProducerStateManagerTest > testCoordinatorFencing() PASSED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile() PASSED

ProducerStateManagerTest > testTruncateFullyAndStartAt() STARTED

ProducerStateManagerTest > testTruncateFullyAndStartAt() PASSED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() STARTED

ProducerStateManagerTest > testRemoveExpiredPidsOnReload() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() STARTED

ProducerStateManagerTest > testRecoverFromSnapshotFinishedTransaction() PASSED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() 
STARTED

ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump() PASSED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() STARTED

ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation() PASSED

ProducerStateManagerTest > testTakeSnapshot() STARTED

ProducerStateManagerTest > testTakeSnapshot() PASSED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() 
STARTED

ProducerStateManagerTest > testRecoverFromSnapshotUnfinishedTransaction() PASSED

ProducerStateManagerTest > testDeleteSnapshotsBefore() STARTED

ProducerStateManagerTest > testDeleteSnapshotsBefore() PASSED

ProducerStateManagerTest > testAppendEmptyControlBatch() STARTED

ProducerStateManagerTest > testAppendEmptyControlBatch() PASSED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() STARTED

ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog() PASSED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
STARTED

ProducerStateManagerTest > testRemoveStraySnapshotsKeepCleanShutdownSnapshot() 
PASSED

ProducerStateManagerTest > testRemoveAllStraySnapshots() STARTED

ProducerStateManagerTest > testRemoveAllStraySnapshots() PASSED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() STARTED

ProducerStateManagerTest > testLoadFromEmptySnapshotFile() PASSED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() STARTED

ProducerStateManagerTest > testProducersWithOngoingTransactionsDontExpire() PASSED

ProducerStateManagerTest > testBasicIdMapping() STARTED

ProducerStateManagerTest > testBasicIdMapping() PASSED

ProducerStateManagerTest > updateProducerTransactionState() STARTED

ProducerStateManagerTest > updateProducerTransactionState() PASSED

ProducerStateManagerTest > testPrepareUpdateDoesNotMutate() 

[GitHub] [kafka-site] dajac commented on a change in pull request #334: KAFKA-12393: Document multi-tenancy considerations

2021-03-02 Thread GitBox


dajac commented on a change in pull request #334:
URL: https://github.com/apache/kafka-site/pull/334#discussion_r585427746



##
File path: 27/ops.html
##
@@ -1090,7 +1090,157 @@ 
 
 
-  6.4 Kafka Configuration
+  6.4 Multi-Tenancy
+
+  Multi-Tenancy Overview
+
+  
+As a highly scalable event streaming platform, Kafka is used by many organizations as their central nervous system, connecting a wide range of different systems and applications from various teams and lines of business in real time. Such multi-tenant cluster environments demand proper control and management to ensure the peaceful coexistence of these different needs. This section highlights features and best practices for setting up such shared environments, which should help you operate clusters that meet SLAs/OLAs and minimize the potential collateral damage caused by "noisy neighbors".
+  
+
+  
+Multi-tenancy is a many-sided subject, including but not limited to:
+  
+
+  
+Creating user spaces for tenants (sometimes called namespaces)
+Configuring topics with data retention policies and more
+Securing topics and clusters with encryption, authentication, and authorization
+Isolating tenants with quotas and rate limits
+Monitoring and metering
+Inter-cluster data sharing (cf. geo-replication)
+  
+
+  Creating User Spaces (Namespaces) For Tenants With Topic Naming
+
+  
+Kafka administrators operating a multi-tenant cluster typically need to define user spaces for each tenant. For the purposes of this section, "user spaces" are collections of topics grouped together under the management of a single entity or user.
+  
+
+  
+In Kafka, the main unit of data is the topic. Users can create and name each topic. They can also delete topics, but it is not possible to rename a topic directly. Instead, to rename a topic, the user must create a new topic, move the messages from the original topic to the new one, and then delete the original. With this in mind, it is recommended to define logical spaces based on a hierarchical topic naming structure. This setup can then be combined with security features, such as prefixed ACLs, to isolate different spaces and tenants, while also minimizing the administrative overhead for securing the data in the cluster.
+  
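As a sketch of the prefixed-ACL approach mentioned above (the principal name, broker address, and prefix are assumptions, not part of the proposed docs), access to an entire team namespace can be granted with a single rule:

```shell
# Grant a team's service principal access to every topic under its
# namespace prefix with one prefixed ACL (supported since Kafka 2.0,
# KIP-290). "User:infosec-service" and "acme.infosec." are illustrative.
bin/kafka-acls.sh --bootstrap-server localhost:9092 \
  --add --allow-principal User:infosec-service \
  --operation Read --operation Write --operation Describe \
  --topic acme.infosec. \
  --resource-pattern-type prefixed
```

Because the pattern type is `prefixed`, topics created later under `acme.infosec.` are covered automatically, which is where the reduced administrative overhead comes from.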
+
+  
+These logical user spaces can be grouped in different ways, and the 
concrete choice depends on how your organization prefers to use your Kafka 
clusters. The most common groupings are as follows.
+  
+
+  
+By team or organizational unit: Here, the team is the main aggregator. In an organization where teams are the main users of the Kafka infrastructure, this might be the best grouping.
+  
+
+  
+Example topic naming structure:
+  
+
+  
+
organization.team.dataset.event-name (e.g., "acme.infosec.telemetry.logins")
+
+  
+By project or product: Here, a team manages more than one project. Their credentials will be different for each project, so all the controls and settings will always be project-related.
+  
+
+  
+Example topic naming structure:
+  
+
+  
project.product.event-name (e.g., "mobility.payments.suspicious")
+
+  
+Certain information should normally not be put in a topic name, such as 
information that is likely to change over time (e.g., the name of the intended 
consumer) or that is a technical detail or metadata that is available elsewhere 
(e.g., the topic's partition count and other configuration settings).
+  
+
+  
+  To enforce a topic naming structure, it is useful to disable Kafka's ability to auto-create topics on demand by setting auto.create.topics.enable=false in the broker configuration. This prevents users and applications from creating, whether deliberately or inadvertently, topics with arbitrary names that violate the naming structure. You may then want to put in place your own organizational process for controlled, yet automated creation of topics according to your naming convention, using scripting or your favorite automation toolkit.
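A minimal sketch of such a controlled creation path, assuming the `organization.team.dataset.event-name` convention from the text (the helper names, regex, broker address, and partition/replication counts are all illustrative, not part of Kafka):

```shell
#!/bin/sh
# Validate a candidate topic name against the convention
# organization.team.dataset.event-name (lowercase, dot-separated).
valid_topic_name() {
  printf '%s\n' "$1" | grep -Eq '^[a-z0-9-]+\.[a-z0-9-]+\.[a-z0-9-]+\.[a-z0-9-]+$'
}

# Gate creation on the convention. The kafka-topics.sh command is only
# echoed here, since actually running it needs a live broker.
create_topic() {
  if valid_topic_name "$1"; then
    echo "bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic $1 --partitions 6 --replication-factor 3"
  else
    echo "rejected: $1 does not match org.team.dataset.event-name" >&2
    return 1
  fi
}

create_topic "acme.infosec.telemetry.logins"   # accepted by the regex
create_topic "my_random_topic" || true         # rejected by the regex
```

Wrapping the stock CLI this way keeps the naming rule in one place while still using Kafka's own tooling for the actual creation.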

Review comment:
   I am not sure I get your point here. While I do agree that disabling auto topic creation is a good thing, users/apps can still create topics with the admin client, so it does not really help to enforce a topic naming structure. In both cases, the topics would have to respect the ACLs in place and the "namespace", if defined.
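As the comment points out, disabling auto-creation does not stop topic creation through the admin client. Kafka does provide a broker-side hook that covers that path too: the `create.topic.policy.class.name` broker setting (KIP-108) runs every CreateTopics request through a pluggable `org.apache.kafka.server.policy.CreateTopicPolicy`. A minimal config sketch, with the caveat that the policy class name below is hypothetical and the class itself must be written by the operator:

```properties
# server.properties
# Run every CreateTopics request (including admin-client ones) through a
# custom policy that can reject names violating the convention. The class
# must implement org.apache.kafka.server.policy.CreateTopicPolicy and be
# on the broker classpath; "com.acme.kafka.TopicNamingPolicy" is an
# illustrative name, not shipped with Kafka.
create.topic.policy.class.name=com.acme.kafka.TopicNamingPolicy
```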
   
   


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #583

2021-03-02 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Format the revoking active log output in `StreamsPartitionAssignor` (#10242)

[github] MINOR: correct the error message of validating uint32 (#10193)


--
[...truncated 3.65 MB...]

ProducerFailureHandlingTest > testTooLargeRecordWithAckOne() PASSED

ProducerFailureHandlingTest > testWrongBrokerList() STARTED

ProducerFailureHandlingTest > testWrongBrokerList() PASSED

ProducerFailureHandlingTest > testNotEnoughReplicas() STARTED

ProducerFailureHandlingTest > testNotEnoughReplicas() PASSED

ProducerFailureHandlingTest > testResponseTooLargeForReplicationWithAckAll() STARTED

ProducerFailureHandlingTest > testResponseTooLargeForReplicationWithAckAll() PASSED

ProducerFailureHandlingTest > testNonExistentTopic() STARTED

ProducerFailureHandlingTest > testNonExistentTopic() PASSED

ProducerFailureHandlingTest > testInvalidPartition() STARTED

ProducerFailureHandlingTest > testInvalidPartition() PASSED

ProducerFailureHandlingTest > testSendAfterClosed() STARTED

ProducerFailureHandlingTest > testSendAfterClosed() PASSED

ProducerFailureHandlingTest > testTooLargeRecordWithAckZero() STARTED

ProducerFailureHandlingTest > testTooLargeRecordWithAckZero() PASSED

ProducerFailureHandlingTest > testPartitionTooLargeForReplicationWithAckAll() STARTED

ProducerFailureHandlingTest > testPartitionTooLargeForReplicationWithAckAll() PASSED

ProducerFailureHandlingTest > testNotEnoughReplicasAfterBrokerShutdown() STARTED

ProducerFailureHandlingTest > testNotEnoughReplicasAfterBrokerShutdown() PASSED

ApiVersionTest > testApiVersionUniqueIds() STARTED

ApiVersionTest > testApiVersionUniqueIds() PASSED

ApiVersionTest > testMinSupportedVersionFor() STARTED

ApiVersionTest > testMinSupportedVersionFor() PASSED

ApiVersionTest > testShortVersion() STARTED

ApiVersionTest > testShortVersion() PASSED

ApiVersionTest > shouldReturnAllKeysWhenMagicIsCurrentValueAndThrottleMsIsDefaultThrottle() STARTED

ApiVersionTest > shouldReturnAllKeysWhenMagicIsCurrentValueAndThrottleMsIsDefaultThrottle() PASSED

ApiVersionTest > testApply() STARTED

ApiVersionTest > testApply() PASSED

ApiVersionTest > testMetadataQuorumApisAreDisabled() STARTED

ApiVersionTest > testMetadataQuorumApisAreDisabled() PASSED

ApiVersionTest > shouldReturnFeatureKeysWhenMagicIsCurrentValueAndThrottleMsIsDefaultThrottle() STARTED

ApiVersionTest > shouldReturnFeatureKeysWhenMagicIsCurrentValueAndThrottleMsIsDefaultThrottle() PASSED

ApiVersionTest > shouldCreateApiResponseOnlyWithKeysSupportedByMagicValue() STARTED

ApiVersionTest > shouldCreateApiResponseOnlyWithKeysSupportedByMagicValue() PASSED

ApiVersionTest > testApiVersionValidator() STARTED

ApiVersionTest > testApiVersionValidator() PASSED

ControllerContextTest > testPartitionFullReplicaAssignmentReturnsEmptyAssignmentIfTopicOrPartitionDoesNotExist() STARTED

ControllerContextTest > testPartitionFullReplicaAssignmentReturnsEmptyAssignmentIfTopicOrPartitionDoesNotExist() PASSED

ControllerContextTest > testPartitionReplicaAssignmentForTopicReturnsEmptyMapIfTopicDoesNotExist() STARTED

ControllerContextTest > testPartitionReplicaAssignmentForTopicReturnsEmptyMapIfTopicDoesNotExist() PASSED

ControllerContextTest > testPreferredReplicaImbalanceMetric() STARTED

ControllerContextTest > testPreferredReplicaImbalanceMetric() PASSED

ControllerContextTest > testPartitionReplicaAssignmentForTopicReturnsExpectedReplicaAssignments() STARTED

ControllerContextTest > testPartitionReplicaAssignmentForTopicReturnsExpectedReplicaAssignments() PASSED

ControllerContextTest > testReassignTo() STARTED

ControllerContextTest > testReassignTo() PASSED

ControllerContextTest > testPartitionReplicaAssignment() STARTED

ControllerContextTest > testPartitionReplicaAssignment() PASSED

ControllerContextTest > testUpdatePartitionFullReplicaAssignmentUpdatesReplicaAssignment() STARTED

ControllerContextTest > testUpdatePartitionFullReplicaAssignmentUpdatesReplicaAssignment() PASSED

ControllerContextTest > testReassignToIdempotence() STARTED

ControllerContextTest > testReassignToIdempotence() PASSED

ControllerContextTest > testPartitionReplicaAssignmentReturnsEmptySeqIfTopicOrPartitionDoesNotExist() STARTED

ControllerContextTest > testPartitionReplicaAssignmentReturnsEmptySeqIfTopicOrPartitionDoesNotExist() PASSED

DefaultMessageFormatterTest > [1] name=print nothing, record=ConsumerRecord(topic = someTopic, partition = 9, leaderEpoch = null, offset = 9876, CreateTime = 1234, serialized key size = 0, serialized value size = 0, headers = RecordHeaders(headers = [RecordHeader(key = h1, value = [118, 49]), RecordHeader(key = h2, value = [118, 50])], isReadOnly = false), key = [B@d4ca1b4, value = [B@23887ce4), properties=Map(print.value -> false), expected= STARTED

DefaultMessageFormatterTest > [1] name=print nothing, 

Re: [VOTE] KIP-699: Update FindCoordinator to resolve multiple Coordinators at a time

2021-03-02 Thread David Jacot
Thanks for the KIP, Mickael. +1 (binding)

On Mon, Mar 1, 2021 at 11:53 AM Tom Bentley  wrote:

> +1 (non-binding), thanks Mickael.
>
> On Thu, Feb 25, 2021 at 6:32 PM Mickael Maison 
> wrote:
>
> > Hi,
> >
> > I'd like to start a vote on KIP-699 to support resolving multiple
> > coordinators at a time:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-699%3A+Update+FindCoordinator+to+resolve+multiple+Coordinators+at+a+time
> >
> > Thanks
> >
> >
>


[DISCUSS] KIP-718: Make KTable Join on Foreign key unopinionated

2021-03-02 Thread Marco Aurélio Lotz
Hi folks,

I would like to invite everyone to further discuss KIP-718:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-718%3A+Make+KTable+Join+on+Foreign+key+unopinionated

I welcome all feedback on it.

Kind Regards,
Marco Lotz


[jira] [Created] (KAFKA-12400) Upgrade jetty to fix CVE-2020-27223

2021-03-02 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-12400:
---

 Summary: Upgrade jetty to fix CVE-2020-27223
 Key: KAFKA-12400
 URL: https://issues.apache.org/jira/browse/KAFKA-12400
 Project: Kafka
  Issue Type: Improvement
Reporter: Dongjin Lee
Assignee: Dongjin Lee
 Fix For: 2.8.0, 2.7.1, 2.6.2


h3. CVE-2020-27223 Detail

In Eclipse Jetty 9.4.6.v20170531 to 9.4.36.v20210114 (inclusive), 10.0.0, and 
11.0.0 when Jetty handles a request containing multiple Accept headers with a 
large number of quality (i.e. q) parameters, the server may enter a denial of 
service (DoS) state due to high CPU usage processing those quality values, 
resulting in minutes of CPU time exhausted processing those quality values.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSS] KIP-719: Add Log4J2 Appender

2021-03-02 Thread Dongjin Lee
Hi Kafka dev,

I would like to start the discussion of KIP-719: Add Log4J2 Appender.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-719%3A+Add+Log4J2+Appender

All feedback is greatly appreciated!

Best,
Dongjin

-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*



github: github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin