Build failed in Jenkins: kafka-2.4-jdk8 #132

2020-01-24 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9254; Overridden topic configs are reset after dynamic default


--
[...truncated 5.48 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: kafka-2.3-jdk8 #166

2020-01-24 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9254; Overridden topic configs are reset after dynamic default


--
[...truncated 2.98 MB...]
kafka.log.LogValidatorTest > testCompressedV1 STARTED

kafka.log.LogValidatorTest > testCompressedV1 PASSED

kafka.log.LogValidatorTest > testCompressedV2 STARTED

kafka.log.LogValidatorTest > testCompressedV2 PASSED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
STARTED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed PASSED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed STARTED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed PASSED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion STARTED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted STARTED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 PASSED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients STARTED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed PASSED

kafka.log.LogValidatorTest > testControlRecordsNotCompressed STARTED

kafka.log.LogValidatorTest > testControlRecordsNotCompressed PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV2 PASSED

kafka.log.LogValidatorTest > testCompressedBatchWithoutRecordsNotAllowed STARTED

kafka.log.LogValidatorTest > testCompressedBatchWithoutRecordsNotAllowed PASSED

kafka.log.LogValidatorTest > testInvalidInnerMagicVersion STARTED

kafka.log.LogValidatorTest > testInvalidInnerMagicVersion PASSED

kafka.log.LogValidatorTest > testInvalidOffsetRangeAndRecordCount STARTED

kafka.log.LogValidatorTest > 

Build failed in Jenkins: kafka-2.2-jdk8 #22

2020-01-24 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: Fix broken connect transform test case due to older junit

[jason] KAFKA-9254; Overridden topic configs are reset after dynamic default


--
[...truncated 2.54 MB...]
kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting 
STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting 
PASSED

kafka.utils.JsonTest > testParseToWithInvalidJson STARTED

kafka.utils.JsonTest > testParseToWithInvalidJson PASSED

kafka.utils.JsonTest > testParseTo STARTED

kafka.utils.JsonTest > testParseTo PASSED

kafka.utils.JsonTest > testJsonParse STARTED

kafka.utils.JsonTest > testJsonParse PASSED

kafka.utils.JsonTest > testLegacyEncodeAsString STARTED

kafka.utils.JsonTest > testLegacyEncodeAsString PASSED

kafka.utils.JsonTest > testEncodeAsBytes STARTED

kafka.utils.JsonTest > testEncodeAsBytes PASSED

kafka.utils.JsonTest > testEncodeAsString STARTED

kafka.utils.JsonTest > testEncodeAsString PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr STARTED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod STARTED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod PASSED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions STARTED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions PASSED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath STARTED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath PASSED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing STARTED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing PASSED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition STARTED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange STARTED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecode STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecode PASSED

kafka.utils.timer.TimerTaskListTest > testAll STARTED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask STARTED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration STARTED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart 
STARTED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart 
PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart STARTED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler STARTED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask STARTED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED


Build failed in Jenkins: kafka-trunk-jdk11 #1108

2020-01-24 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9254; Overridden topic configs are reset after dynamic default


--
[...truncated 5.77 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED


Build failed in Jenkins: kafka-2.0-jdk8 #310

2020-01-24 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9254; Overridden topic configs are reset after dynamic default


--
[...truncated 437.45 KB...]

kafka.log.LogCleanerTest > testSegmentGroupingFollowingLoadOfZeroIndex STARTED

kafka.log.LogCleanerTest > testSegmentGroupingFollowingLoadOfZeroIndex PASSED

kafka.log.LogCleanerTest > testLogToCleanWithUncleanableSection STARTED

kafka.log.LogCleanerTest > testLogToCleanWithUncleanableSection PASSED

kafka.log.LogCleanerTest > testBuildPartialOffsetMap STARTED

kafka.log.LogCleanerTest > testBuildPartialOffsetMap PASSED

kafka.log.LogCleanerTest > testCleaningWithUnkeyedMessages STARTED

kafka.log.LogCleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.LogCleanerTest > testSegmentWithOffsetOverflow STARTED

kafka.log.LogCleanerTest > testSegmentWithOffsetOverflow PASSED

kafka.log.LogCleanerTest > testPartialSegmentClean STARTED

kafka.log.LogCleanerTest > testPartialSegmentClean PASSED

kafka.log.LogCleanerTest > testCommitMarkerRemoval STARTED

kafka.log.LogCleanerTest > testCommitMarkerRemoval PASSED

kafka.log.LogCleanerTest > testCleanSegmentsWithConcurrentSegmentDeletion 
STARTED

kafka.log.LogCleanerTest > testCleanSegmentsWithConcurrentSegmentDeletion PASSED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
STARTED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
PASSED

kafka.log.LogValidatorTest > testCompressedV1 STARTED

kafka.log.LogValidatorTest > testCompressedV1 PASSED

kafka.log.LogValidatorTest > testCompressedV2 STARTED

kafka.log.LogValidatorTest > testCompressedV2 PASSED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
STARTED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed PASSED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed STARTED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted STARTED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 PASSED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients STARTED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > 

Build failed in Jenkins: kafka-2.1-jdk8 #251

2020-01-24 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9254; Overridden topic configs are reset after dynamic default


--
[...truncated 467.27 KB...]

kafka.log.LogValidatorTest > testLogAppendTimeWithoutRecompressionV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithoutRecompressionV1 PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithoutRecompressionV2 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithoutRecompressionV2 PASSED

kafka.log.LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed 
STARTED

kafka.log.LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed 
PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed PASSED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed STARTED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV1ToV0Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV1ToV0Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2Compressed PASSED

kafka.log.LogValidatorTest > testNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testNonCompressedV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeCompressedV1 STARTED

kafka.log.LogValidatorTest > testInvalidCreateTimeCompressedV1 PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeCompressedV2 STARTED

kafka.log.LogValidatorTest > testInvalidCreateTimeCompressedV2 PASSED

kafka.log.LogValidatorTest > testRecompressionV1 STARTED

kafka.log.LogValidatorTest > testRecompressionV1 PASSED

kafka.log.LogValidatorTest > testRecompressionV2 STARTED

kafka.log.LogValidatorTest > testRecompressionV2 PASSED

kafka.log.ProducerStateManagerTest > 
testProducerSequenceWithWrapAroundBatchRecord STARTED

kafka.log.ProducerStateManagerTest > 
testProducerSequenceWithWrapAroundBatchRecord PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing PASSED

kafka.log.ProducerStateManagerTest > testTruncate STARTED

kafka.log.ProducerStateManagerTest > testTruncate PASSED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload STARTED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload PASSED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump STARTED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
PASSED

kafka.log.ProducerStateManagerTest > testTakeSnapshot STARTED

kafka.log.ProducerStateManagerTest > testTakeSnapshot PASSED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore STARTED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore PASSED

kafka.log.ProducerStateManagerTest > 
testNonMatchingTxnFirstOffsetMetadataNotCached STARTED

kafka.log.ProducerStateManagerTest > 
testNonMatchingTxnFirstOffsetMetadataNotCached PASSED

kafka.log.ProducerStateManagerTest > testAppendEmptyControlBatch STARTED

kafka.log.ProducerStateManagerTest > testAppendEmptyControlBatch PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction 
STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction PASSED

kafka.log.ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog 
STARTED

kafka.log.ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog 
PASSED

kafka.log.ProducerStateManagerTest > testLoadFromEmptySnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromEmptySnapshotFile PASSED

kafka.log.ProducerStateManagerTest > 
testProducersWithOngoingTransactionsDontExpire STARTED

kafka.log.ProducerStateManagerTest > 
testProducersWithOngoingTransactionsDontExpire PASSED


[jira] [Created] (KAFKA-9475) Replace transaction abortion scheduler with a delayed queue

2020-01-24 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9475:
--

 Summary: Replace transaction abortion scheduler with a delayed 
queue
 Key: KAFKA-9475
 URL: https://issues.apache.org/jira/browse/KAFKA-9475
 Project: Kafka
  Issue Type: Sub-task
Reporter: Boyang Chen


Although we could try setting the txn timeout to 10 seconds, the purging 
scheduler only runs at a one-minute interval, so in the worst case we would 
still wait up to 1 minute. We are considering several potential fixes:
 # Change the interval to 10 seconds: this means 6X more frequent checking and 
more read contention on txn metadata. The benefit is an easy one-line fix 
without correctness concerns.
 # Use an existing delayed queue, a.k.a. the purgatory. From what I heard, the 
purgatory needs at least 2 extra threads to work properly, with some added 
overhead in memory and complexity. The benefit is a more precise timeout 
reaction, without a redundant full metadata read lock.
 # Create a new delayed queue. This could be done using a Scala delayed queue; 
the concern is whether that approach is production ready. The benefits are 
the same as for #2, potentially with less code complexity.

This ticket is to track progress on option #2 if we decide to go down this 
path eventually.
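
For illustration, a minimal sketch of what a dedicated delayed queue for 
expiring transactions could look like, built on java.util.concurrent.DelayQueue; 
the class and field names are hypothetical and not taken from the Kafka codebase:

{code:java}
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Hypothetical entry that expires when the transaction's timeout is reached.
class AbortableTxn implements Delayed {
    final String transactionalId;
    final long deadlineMs;

    AbortableTxn(String transactionalId, long timeoutMs) {
        this.transactionalId = transactionalId;
        this.deadlineMs = System.currentTimeMillis() + timeoutMs;
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(deadlineMs - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
    }
}

public class TxnAbortQueueSketch {
    public static void main(String[] args) throws InterruptedException {
        DelayQueue<AbortableTxn> queue = new DelayQueue<>();
        queue.put(new AbortableTxn("txn-1", 10_000L)); // 10-second txn timeout

        // A single reaper thread blocks until the next transaction actually
        // expires, instead of scanning all txn metadata on a fixed interval.
        AbortableTxn expired = queue.take();
        System.out.println("Aborting expired transaction: " + expired.transactionalId);
    }
}
{code}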



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [KAFKA-557] Add emit on change support for Kafka Streams

2020-01-24 Thread Bruno Cadonna
Thank you Matthias for the use cases!

Looking at both use cases, I think you need to elaborate on them in
the KIP, Richard.

Emit from plain KTable:
I agree with Matthias that the lower timestamp makes sense because it
marks the start of the validity of the record. Idempotent records with
a higher timestamp can be safely ignored. A corner case that I
discussed with Matthias offline is when we do not materialize a KTable
due to optimization. Then we cannot avoid the idempotent records
because we do not keep the first record with the lower timestamp to
compare to.

Emit from KTable with aggregations:
If we specify that an aggregation result should have the highest
timestamp of the records that participated in the aggregation, we
cannot ignore any idempotent records. Admittedly, the result of an
aggregation usually changes, but there are aggregations where the
result may not change like min and max, or sum when the incoming
records have a value of zero. In those cases, we could benefit from
emit on change, but only if we define the semantics of the
aggregations to not use the highest timestamp of the participating
records for the result. In Kafka Streams, we do not have min, max, and
sum as explicit aggregations, but we need to provide an API to define
what timestamp should be used for the result of an aggregation if we
want to go down this path.

All of this does not block this KIP and I just wanted to put these
aspects up for discussion. The KIP can limit itself to emit from
materialized KTables. However, the limits should be explicitly stated
in the KIP.

Best,
Bruno
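
[A minimal sketch of the emit-on-change check being discussed, for a
materialized table; the class and method names are illustrative and not
the actual Kafka Streams internals. An idempotent update neither advances
the stored timestamp nor produces a downstream record.]

import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Illustrative emit-on-change gate: drop idempotent updates and keep the
// original (lower) timestamp for an unchanged value.
class EmitOnChangeTable<K, V> {
    private final Map<K, V> store = new HashMap<>();
    private final Map<K, Long> timestamps = new HashMap<>();

    /** Returns true if the update should be forwarded downstream. */
    boolean update(K key, V newValue, long timestamp) {
        V oldValue = store.get(key);
        if (Objects.equals(oldValue, newValue)) {
            // Idempotent update: nothing changes, nothing is emitted.
            return false;
        }
        store.put(key, newValue);
        timestamps.put(key, timestamp);
        return true;
    }
}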



On Fri, Jan 24, 2020 at 10:58 AM Matthias J. Sax  wrote:
>
> IMHO, the question about semantics depends on the use case, in
> particular on the origin of a KTable.
>
> If there is a changelog topic that one reads directly into a KTable,
> emit-on-change does actually make sense, because the timestamp indicates
> _when_ the update was _effective_. For this case, it is semantically
> sound to _not_ update the timestamp in the store, because the second
> update is actually idempotent and advancing the timestamp is not ideal
> (one could even consider it to be wrong to advance the timestamp)
> because the "valid time" of the record pair did not change.
>
> This reasoning also applies to KTable-KTable joins.
>
> However, if the KTable is the result of an aggregation, I think
> emit-on-update is more natural, because the timestamp reflects the
> _last_ time (i.e., highest timestamp) of all input records that contributed
> to the result. Hence, updating the timestamp and emitting a new record
> actually sounds correct to me. This applies to windowed and non-windowed
> aggregations IMHO.
>
> However, considering the argument that the timestamp should not be
> updated in the store in the first case to begin with, both cases are
> actually the same, and both can be modeled as emit-on-change: if a
> `table()` operator does not update the timestamp if the value does not
> change, there is _no_ change and thus nothing is emitted. At the same
> time, if an aggregation operator does update the timestamp (even if the
> value does not change) there _is_ a change and we emit.
>
> Note that handling out-of-order data for aggregations would also work
> seamlessly with this approach -- for out-of-order records, the timestamp
> never changes, and thus, we only emit if the result itself changes.
>
> Therefore, I would argue that we might not even need any config, because
> the emit-on-change behavior is just correct and reduces the downstream
> load, while our current behavior is not ideal (even if it's also correct).
>
> Thoughts?
>
> -Matthias
>
> On 1/24/20 9:37 AM, John Roesler wrote:
> > Hi Bruno,
> >
> > Thanks for that idea. I hadn't considered that
> > option before, and it does seem like that would be
> > the right place to put it if we think it might be
> > semantically important to control on a
> > table-by-table basis.
> >
> > I had been thinking of it less semantically and
> > more practically. In the context of a large
> > topology, or more generally, a large software
> > system that contains many topologies and other
> > event-driven systems, each no-op result becomes an
> > input that is destined to itself become a no-op
> > result, and so on, all the way through the system.
> > Thus, a single pointless processing result becomes
> > amplified into a large number of pointless
> > computations, cache perturbations, and network
> > and disk I/O operations. If you also consider
> > operations with fan-out implications, like
> > branching or foreign-key joins, the wasted
> > resources are amplified not just in proportion to
> > the size of the system, but the size of the system
> > times the average fan-out (to the power of the
> > number of fan-out operations on the path(s)
> > through the system).
> >
> > In my time operating such systems, I've observed
> > these effects to be very real, and actually, the
> > system and use case doesn't have to be 

Build failed in Jenkins: kafka-2.2-jdk8-old #203

2020-01-24 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9254; Overridden topic configs are reset after dynamic default


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H35 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision b706d93de720b3ad8a8478aae85c61ff001c7e64 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f b706d93de720b3ad8a8478aae85c61ff001c7e64
Commit message: "KAFKA-9254; Overridden topic configs are reset after dynamic 
default change (#7870)"
 > git rev-list --no-walk 15808bfe01faeaff92fe7d499797784f8a815b27 # timeout=10
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-2.2-jdk8-old] $ /bin/bash -xe /tmp/jenkins4239535422499786370.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins4239535422499786370.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
ERROR: No tool found matching GRADLE_4_8_1_HOME
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
No credentials specified
ERROR: No tool found matching GRADLE_4_8_1_HOME
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=b706d93de720b3ad8a8478aae85c61ff001c7e64, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
ERROR: No tool found matching GRADLE_4_8_1_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user j...@confluent.io
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user b...@confluent.io


Build failed in Jenkins: kafka-trunk-jdk8 #4184

2020-01-24 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: MiniKdc JVM shutdown hook fix (#7946)

[jason] KAFKA-9254; Overridden topic configs are reset after dynamic default


--
[...truncated 2.84 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task 

[jira] [Resolved] (KAFKA-9254) Updating Kafka Broker configuration dynamically twice reverts log configuration to default

2020-01-24 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9254.

Fix Version/s: 2.4.1
   2.3.2
   2.2.3
   2.1.2
   2.0.2
   Resolution: Fixed

> Updating Kafka Broker configuration dynamically twice reverts log 
> configuration to default
> --
>
> Key: KAFKA-9254
> URL: https://issues.apache.org/jira/browse/KAFKA-9254
> Project: Kafka
>  Issue Type: Bug
>  Components: config, log, replication
>Affects Versions: 2.0.1
>Reporter: fenghong
>Assignee: huxihx
>Priority: Critical
> Fix For: 2.0.2, 2.1.2, 2.2.3, 2.3.2, 2.4.1
>
>
> We are engineers at Huobi and have encountered a Kafka bug: modifying 
> DynamicBrokerConfig more than 2 times invalidates unrelated topic-level 
> configuration.
> The bug can be reproduced as follows:
>  # Set the broker config min.insync.replicas=3 in server.properties
>  # Create topic test-1 and set the topic-level config min.insync.replicas=2
>  # Dynamically modify the configuration twice as shown below
> {code:java}
> bin/kafka-configs.sh --bootstrap-server xxx:9092 --entity-type brokers 
> --entity-default --alter --add-config log.message.timestamp.type=LogAppendTime
> bin/kafka-configs.sh --bootstrap-server xxx:9092 --entity-type brokers 
> --entity-default --alter --add-config log.retention.ms=60480
> {code}
>  # Stop a Kafka server and observe the exception shown below:
>  org.apache.kafka.common.errors.NotEnoughReplicasException: Number of insync 
> replicas for partition test-1-0 is [2], below required minimum [3]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-546: Add quota-specific APIs to the Admin Client, redux

2020-01-24 Thread Brian Byrne
Thanks for reviewing, Anna,

In the describe call, the idea is that the API will match the QuotaFilters,
which can be specified in two ways for your example (filter is a type and a
matching string):

1) (USER="user-1") returns both
2) (USER="user-1", CLIENT_ID="") returns just /config/users/”user-1”

[For (2), it may be better to permit 'null' to indicate "omitted" instead
of the empty string.]

You're correct for the resolve case, although that example may not highlight
the difference between them. Let's say you had the following:

/config/users/, value=100

Then --describe with (USER="user-1") returns empty since there are no config
entries for /config/users/”user-1”/*, whereas --resolve with
(USER="user-1") returns (value=100) since the user resolved to the default
user config. In other words, --describe is like doing
Admin#describeConfigs, whereas --resolve is like calling
ClientQuotaCallback#quotaLimit, if that helps any.

Thanks,
Brian
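
[A minimal sketch of the describe-vs-resolve distinction above; purely
illustrative, not the proposed Admin API. describe returns only entries
configured for exactly the given entity, while resolve falls back to the
default entry, analogous to ClientQuotaCallback#quotaLimit.]

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class QuotaResolutionSketch {
    private final Map<String, Long> entries = new HashMap<>(); // configured quota entries
    private final String defaultEntry;                         // e.g. the default-user entry

    QuotaResolutionSketch(String defaultEntry) {
        this.defaultEntry = defaultEntry;
    }

    void set(String entity, long value) {
        entries.put(entity, value);
    }

    /** --describe: only entries configured for exactly this entity, no fallback. */
    Optional<Long> describe(String entity) {
        return Optional.ofNullable(entries.get(entity));
    }

    /** --resolve: fall back to the default entry if no exact entry exists. */
    Optional<Long> resolve(String entity) {
        Long exact = entries.get(entity);
        return exact != null ? Optional.of(exact) : Optional.ofNullable(entries.get(defaultEntry));
    }
}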



On Fri, Jan 24, 2020 at 3:19 PM Anna Povzner  wrote:

> Hi Brian,
>
> The KIP looks good!
>
> I have one clarification question regarding the distinction between
> describe and resolve API. Suppose I set request quota for
> /config/users/”user-1”/clients/"client-1" to 100 and request quota for
> /config/users/”user-1” to 200. Is this correct that describeClientQuotas
> called with /config/users/”user-1” would return two entries in the
> response?
>
> - /config/users/”user-1”/clients/, request quota type, value = 100
>
> - /config/users/”user-1”, request quota type, value = 200
>
>
> While the resolve API for entity /config/users/”user-1” would return the quota
> setting specifically for /config/users/”user-1”, which is 200 in this case.
>
> Is my understanding correct?
>
> Thanks,
>
> Anna
>
>
> On Fri, Jan 24, 2020 at 11:32 AM Brian Byrne  wrote:
>
> > My apologies, Rajini. My hasty edits omitted a couple spots. I've done a
> > more thorough scan and should have cleaned up (1) and (2).
> >
> > For (3), Longs are chosen because that's what the ConfigCommand currently
> > uses, and because there's no floating point type in the RPC protocol.
> Longs
> > didn't seem to be an issue for bytes-per-second values, and
> > request_percentage is normalized [0-100], but you're right in that the
> > extensions might require this.
> >
> > To make Double compatible with the RPC protocol, we'd need to serialize
> the
> > value into a String, and then validate the value on the receiving end.
> > Would that be acceptable?
> >
> > Thanks,
> > Brian
> >
> > On Fri, Jan 24, 2020 at 11:07 AM Rajini Sivaram  >
> > wrote:
> >
> > > Thanks Brian. Looks good.
> > >
> > > Just a few minor points:
> > >
> > > 1) We can remove *public ResolveClientQuotasOptions
> > > setOmitOverriddenValues(boolean omitOverriddenValues); *
> > > 2) Under ClientQuotasCommand, the three items are List/Describe/Alter,
> > > rename to match the new naming for operations?
> > > 3) Request quota configs are doubles rather than long. And for
> > > ClientQuotaCallback API, we used doubles everywhere. Wasn't sure if we
> > > deliberately chose Longs for this API. If so, we should mention why
> under
> > > Rejected Alternatives. We actually use request quotas < 1 in
> integration
> > > tests to ensure we can throttle easily.
> > >
> > >
> > >
> > > On Fri, Jan 24, 2020 at 5:28 PM Brian Byrne 
> wrote:
> > >
> > > > Thanks again, Rajini,
> > > >
> > > > Units will have to be implemented on a per-config basis, then. I've
> > > > removed all references to units and replaced QuotaKey -> String (config
> > > > name). I've also renamed DescribeEffective -> Resolve, and replaced
> > > > --list with --describe, and --describe to --resolve to be consistent
> > > > with the config command and clear about what functionality is "new".
> > > >
> > > > Thanks,
> > > > Brian
> > > >
> > > > On Fri, Jan 24, 2020 at 2:27 AM Rajini Sivaram <
> > rajinisiva...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi Brian,
> > > > >
> > > > > Thanks for the responses.
> > > > >
> > > > > 4) Yes, agree that it would be simpler to leave units out of the
> > > > > initial design. We currently have units that are interpreted by the
> > > > > configurable callback. The default callback interprets the value as
> > > > > per-broker-bytes-per-second and per-broker-percentage-cores. But
> > > > > callbacks using partition-based throughput quotas for example would
> > > > > interpret the value as cluster-wide-bytes-per-second. We could update
> > > > > callbacks to work with units, but as you said, it may be better to
> > > > > leave it out initially and address later.
> > > > >
> > > > >
> > > > >
> > > > > On Thu, Jan 23, 2020 at 6:29 PM Brian Byrne 
> > > wrote:
> > > > >
> > > > > > Thanks Rajini,
> > > > > >
> > > > > > 1) Good catch, fixed.
> > > > > >
> > > > > > 2) You're right. We'd need to extend ClientQuotaCallback#quotaLimit
> > > > > > or add 

Build failed in Jenkins: kafka-2.2-jdk8-old #202

2020-01-24 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: Fix broken connect transform test case due to older junit


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H25 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision 15808bfe01faeaff92fe7d499797784f8a815b27 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 15808bfe01faeaff92fe7d499797784f8a815b27
Commit message: "HOTFIX: Fix broken connect transform test case due to older 
junit (#8004)"
 > git rev-list --no-walk f480bf67783d672d21251904bcd98087735b24fa # timeout=10
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-2.2-jdk8-old] $ /bin/bash -xe /tmp/jenkins6966637756636917093.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins6966637756636917093.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
ERROR: No tool found matching GRADLE_4_8_1_HOME
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
No credentials specified
ERROR: No tool found matching GRADLE_4_8_1_HOME
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=15808bfe01faeaff92fe7d499797784f8a815b27, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
ERROR: No tool found matching GRADLE_4_8_1_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user j...@confluent.io
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user b...@confluent.io


Build failed in Jenkins: kafka-trunk-jdk11 #1107

2020-01-24 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] KAFKA-9152; Improve Sensor Retrieval (#7928)

[rajinisivaram] MINOR: MiniKdc JVM shutdown hook fix (#7946)


--
[...truncated 2.85 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #4183

2020-01-24 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] KAFKA-9152; Improve Sensor Retrieval (#7928)


--
[...truncated 2.83 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task 

Re: [DISCUSS] KIP-546: Add quota-specific APIs to the Admin Client, redux

2020-01-24 Thread Anna Povzner
Hi Brian,

The KIP looks good!

I have one clarification question regarding the distinction between
the describe and the resolve APIs. Suppose I set the request quota for
/config/users/"user-1"/clients/"client-1" to 100 and the request quota for
/config/users/"user-1" to 200. Is it correct that describeClientQuotas
called with /config/users/"user-1" would return two entries in the response?

   - /config/users/"user-1"/clients/"client-1", request quota type, value = 100

   - /config/users/"user-1", request quota type, value = 200

The resolve API for the entity /config/users/"user-1", on the other hand, would
return the quota setting specifically for /config/users/"user-1", which is 200
in this case.

Is my understanding correct?
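
For concreteness, here is a toy model of how I read that distinction (plain
Java, just to illustrate the question; this is not the proposed KIP-546 API and
the paths below simply restate the example above):

import java.util.LinkedHashMap;
import java.util.Map;

public class QuotaLookupSketch {
    public static void main(String[] args) {
        // the two overrides from the example above
        Map<String, Double> overrides = new LinkedHashMap<>();
        overrides.put("/config/users/user-1/clients/client-1", 100.0);
        overrides.put("/config/users/user-1", 200.0);

        // "describe" for user-1: every configured entry that applies to the user,
        // i.e. both the per-client override and the user-level value -> two entries
        overrides.forEach((path, value) -> {
            if (path.startsWith("/config/users/user-1"))
                System.out.println("describe -> " + path + " = " + value);
        });

        // "resolve" for exactly /config/users/user-1: only the value that would
        // be enforced for that entity itself -> 200
        System.out.println("resolve  -> " + overrides.get("/config/users/user-1"));
    }
}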

Thanks,

Anna


On Fri, Jan 24, 2020 at 11:32 AM Brian Byrne  wrote:

> My apologies, Rajini. My hasty edits omitted a couple spots. I've done a
> more thorough scan and should have cleaned up (1) and (2).
>
> For (3), Longs are chosen because that's what the ConfigCommand currently
> uses, and because there's no floating point type in the RPC protocol. Longs
> didn't seem to be an issue for bytes-per-second values, and
> request_percentage is normalized [0-100], but you're right in that the
> extensions might require this.
>
> To make Double compatible with the RPC protocol, we'd need to serialize the
> value into a String, and then validate the value on the receiving end.
> Would that be acceptable?
>
> Thanks,
> Brian
>
> On Fri, Jan 24, 2020 at 11:07 AM Rajini Sivaram 
> wrote:
>
> > Thanks Brian. Looks good.
> >
> > Just a few minor points:
> >
> > 1) We can remove *public ResolveClientQuotasOptions
> > setOmitOverriddenValues(boolean omitOverriddenValues); *
> > 2) Under ClientQuotasCommand, the three items are List/Describe/Alter,
> > rename to match the new naming for operations?
> > 3) Request quota configs are doubles rather than long. And for
> > ClientQuotaCallback API, we used doubles everywhere. Wasn't sure if we
> > deliberately chose Longs for this API. if so, we should mention why under
> > Rejected Alternatives. We actually use request quotas < 1 in integration
> > tests to ensure we can throttle easily.
> >
> >
> >
> > On Fri, Jan 24, 2020 at 5:28 PM Brian Byrne  wrote:
> >
> > > Thanks again, Rajini,
> > >
> > > Units will have to be implemented on a per-config basis, then. I've
> > removed
> > > all language reference to units and replaced QuotaKey -> String (config
> > > name). I've also renamed DescribeEffective -> Resolve, and replaced
> > --list
> > > with --describe, and --describe to --resolve to be consistent with the
> > > config command and clear about what functionality is "new".
> > >
> > > Thanks,
> > > Brian
> > >
> > > On Fri, Jan 24, 2020 at 2:27 AM Rajini Sivaram <
> rajinisiva...@gmail.com>
> > > wrote:
> > >
> > > > Hi Brian,
> > > >
> > > > Thanks for the responses.
> > > >
> > > > 4) Yes, agree that it would be simpler to leave units out of the
> > initial
> > > > design. We currently have units that are interpreted by the
> > configurable
> > > > callback. The default callback interprets the value as
> > > > per-broker-bytes-per-second and per-broker-percentage-cores. But
> > > callbacks
> > > > using partition-based throughput quotas for example would interpret
> the
> > > > value as cluster-wide-bytes-per-second. We could update callbacks to
> > work
> > > > with units, but as you said, it may be better to leave it out
> initially
> > > and
> > > > address later.
> > > >
> > > >
> > > >
> > > > On Thu, Jan 23, 2020 at 6:29 PM Brian Byrne 
> > wrote:
> > > >
> > > > > Thanks Rajini,
> > > > >
> > > > > 1) Good catch, fixed.
> > > > >
> > > > > 2) You're right. We'd need to extend ClientQuotaCallback#quotaLimit
> > or
> > > > add
> > > > > an alternate function. For the sake of an initial implementation,
> I'm
> > > > going
> > > > > to remove '--show-overridden', and a subsequent KIP will have to
> > > propose
> > > > an
> > > > > extension to ClientQuotaCallback to return more detailed information.
> > > > >
> > > > > 3) You're correct. I've removed the default.
> > > > >
> > > > > 4) The idea of the first iteration is to be compatible with the
> existing
> > > > API,
> > > > > so no modification to start. The APIs should be kept consistent.
> If a
> > > > user
> > > > > wants to add custom functionality, say an entity type, they'll need
> > to
> > > > > update their ConfigEntityType any way, and the quotas APIs are
> meant
> > to
> > > > > handle that gracefully by accepting a String which can be
> propagated.
> > > > >
> > > > > The catch is 'units'. Part of the reason for having a default unit
> > was
> > > > for
> > > > > backwards compatibility, but maybe it's best to leave units out of
> > the
> > > > > initial design. This might lead to adding more configuration
> entries,
> > > but
> > > > > it's also the most flexible option. Thoughts?
> > > > >
> > > > > Thanks,
> > > > > Brian
> > > > >
> > > > >
> > > > > On Thu, Jan 23, 2020 at 4:57 AM Rajini Sivaram <
> > > 

Build failed in Jenkins: kafka-trunk-jdk11 #1106

2020-01-24 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] KAFKA-7317: Use collections subscription for main consumer to reduce

[rhauch] Correct exception message in DistributedHerder (#7995)


--
[...truncated 2.85 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

Re: [VOTE] KIP-515: Hardened TLS Configs to ZooKeeper

2020-01-24 Thread Ron Dagostino
Hi everyone.  This concludes the vote for KIP 515.  The vote passes
with +1 binding votes from Gwen, Rajini, and Manikumar and a +1
non-binding vote from Mitch.  I have marked the KIP as "Accepted" and
opened a pull request at https://github.com/apache/kafka/pull/8003.

Ron

On Thu, Jan 23, 2020 at 9:35 AM Ron Dagostino  wrote:
>
> Hi everyone.  I discovered something minor while addressing the
> AclAuthorizer config inheritance issue that I need to document.  The
> minimum 3 days for voting is up and we could successfully conclude the
> vote with 3 +1 binding votes and a +1 non-binding vote, but I'll leave
> the vote open another day in case anyone needs to comment.  Here's the
> information.
>
> The "zookeeper.ssl.context.supplier.class" configuration doesn't
> actually exist in ZooKeeper 3.5.6.  The ZooKeeper admin guide
> documents it as being there, but it doesn't appear in the code.  This
> means we can't support it in KIP-515, and I am removing it.  I checked
> the latest ZooKeeper 3.6 SNAPSHOT, and it has been added.  So this
> config could probably be added to Kafka via a new, small KIP if/when
> we upgrade to ZooKeeper 3.6 (which looks to be in release-candidate
> stage at the moment).
>
>
> I've created https://issues.apache.org/jira/browse/KAFKA-9469 ("Add
> zookeeper.ssl.context.supplier.class config if/when adopting ZooKeeper
> 3.6") for this issue.
>
> Again, I will leave the vote open another day in case there needs to
> be any comment or discussion about this.
>
> Ron
>
> On Wed, Jan 22, 2020 at 8:56 AM Ron Dagostino  wrote:
> >
> > Hi everyone.  While finishing the PR for this KIP I realized that the
> > inheritance of TLS ZooKeeper configs that happens in the *authorizer*
> > does not reflect he spirit of our discussion.  In particular, based on
> > our inheritance discussion in the DISCUSS thread, the inheritance of
> > authorizer configs needn't be as constrained as it is currently
> > documented to be.  I am going to update the KIP as described below and
> > will assume there are no objections if nobody comments as such on this
> > VOTE thread.
> >
> > The KIP currently states that there is a limited inheritance for
> > authorizer ZooKeeper TLS configs as follows: "Every config can be
> > prefixed with "authorizer." for the case when
> > kafka.security.authorizer.AclAuthorizer connects via TLS to a
> > ZooKeeper quorum separate from the one that Kafka is using – this
> > specific use case will be identified in the configuration by
> > explicitly setting authorizer.zookeeper.ssl.client.enable=true."
> >
> > In other words, the authorizer inherits the broker's ZK TLS configs
> > *unless* it explicitly indicates via
> > authorizer.zookeeper.ssl.client.enable=true that it is going to use
> > its own configs, in which case inheritance does not occur -- i.e.
> > there is no overriding or merging going on where the broker's
> > ZooKeeper TLS configs act as a base upon which any "authorizer."
> > prefixed configs act as an overlay/override; instead, if you point to
> > another ZooKeeper quorum and want to change anything related to TLS
> > then you must restate everything.
> >
> > We had a discussion related to potentially inheriting a broker's
> > *non-ZooKeeper* TLS configs.  Inheritance was desirable, and I came
> > around to that way of thinking, but it turned out to be impossible to
> > do given that the broker's non-ZooKeeper TLS configs are potentially
> > stored in ZooKeeper.  Still, inheritance was desirable as a concept,
> > so we should do it for the authorizer since the broker's *ZooKeeper*
> > TLS configs are available in the config file.
> >
> > The KIP will now state that the broker's ZooKeeper TLS configs will
> > act as a base config upon which any "authorizer." ZooKeeper TLS
> > configs act as an overlay -- the configs are merged.  This is
> > consistent with how the other "authorizer." configs for ZooKeeper work
> > (connection/session timeouts and max inflight requests, for example).
> > This means that the order of evaluation for any particular authorizer
> > ZooKeeper TLS configuration will be:
> >
> > (1) system property
> > (2) broker non-prefixed ZooKeeper TLS config
> > (3) "authorizer." prefixed ZooKeeper TLS config
> >
> > Note that (1) + (2) simply yields the ZooKeeper TLS configs that the
> > broker is using -- with (2) overlaying (1) -- so any "authorizer."
> > prefixed ZooKeeper TLS configs are a true additional level of overlay
> > (again, consistent with the behavior of the ZooKeeper configs for
> > connection/session timeouts and max inflight requests).
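> >
> > A rough sketch of that evaluation order (illustrative only, not the actual
> > broker code; the generic "key" below stands for any ZooKeeper TLS config):
> >
> > import java.util.Properties;
> >
> > public class ZkTlsConfigOverlaySketch {
> >     // (1) system property, then (2) broker ZooKeeper TLS config, then
> >     // (3) "authorizer."-prefixed config -- each level overlays the previous
> >     static String resolve(String key, Properties sysProps,
> >                           Properties brokerConfig, Properties authorizerConfig) {
> >         String value = sysProps.getProperty(key);
> >         if (brokerConfig.containsKey(key))
> >             value = brokerConfig.getProperty(key);
> >         if (authorizerConfig.containsKey("authorizer." + key))
> >             value = authorizerConfig.getProperty("authorizer." + key);
> >         return value;
> >     }
> > }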
> >
> > Ron
> >
> > On Mon, Jan 20, 2020 at 11:14 AM Manikumar  
> > wrote:
> > >
> > > +1 (binding).
> > >
> > > Thanks for the KIP.
> > >
> > > On Mon, Jan 20, 2020 at 9:21 PM Rajini Sivaram 
> > > wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > Thanks for the KIP, Ron!
> > > >
> > > > Regards,
> > > >
> > > > Rajini
> > > >
> > > >
> > > > On Mon, Jan 20, 2020 at 3:36 PM Gwen Shapira  wrote:
> > 

Build failed in Jenkins: kafka-trunk-jdk8 #4182

2020-01-24 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] KAFKA-7317: Use collections subscription for main consumer to reduce

[rhauch] Correct exception message in DistributedHerder (#7995)


--
[...truncated 2.84 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 

[jira] [Resolved] (KAFKA-9152) Improve Sensor Retrieval

2020-01-24 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-9152.

Resolution: Fixed

> Improve Sensor Retrieval 
> -
>
> Key: KAFKA-9152
> URL: https://issues.apache.org/jira/browse/KAFKA-9152
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: highluck
>Priority: Minor
>  Labels: newbie, tech-debt
> Fix For: 2.5.0
>
>
> This ticket shall improve two aspects of the retrieval of sensors:
> 1. Currently, when a sensor is retrieved with {{*Metrics.*Sensor()}} (e.g. 
> {{ThreadMetrics.createTaskSensor()}}) after it was created with the same 
> method {{*Metrics.*Sensor()}}, the sensor is added again to the corresponding 
> queue in {{*Sensors}} (e.g. {{threadLevelSensors}}) in 
> {{StreamsMetricsImpl}}. Those queues are used to remove the sensors when 
> {{removeAll*LevelSensors()}} is called. Having the same sensor multiple times 
> in these queues is not an issue from a correctness point of view. However, it 
> would reduce the footprint to only store a sensor once in those queues.
> 2. When a sensor is retrieved, the current code attempts to create a new 
> sensor and to add the corresponding metrics to it again. This could be 
> avoided.
>  
> Both aspects could be improved by checking whether a sensor already exists by 
> calling {{getSensor()}} on the {{Metrics}} object and checking the return 
> value.
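>
> A sketch of that check (assuming the existing {{Metrics#getSensor()}} and 
> {{Metrics#sensor()}} methods; simplified, not the actual StreamsMetricsImpl 
> code; imports from org.apache.kafka.common.metrics and java.util omitted):
>
> // returns the existing sensor if present; otherwise creates it, tracks it in
> // the level queue exactly once, and adds its metrics
> private Sensor getOrCreateSensor(final Metrics metrics,
>                                  final String sensorName,
>                                  final Deque<String> levelSensors) {
>     Sensor sensor = metrics.getSensor(sensorName);
>     if (sensor == null) {
>         sensor = metrics.sensor(sensorName);
>         levelSensors.push(sensorName);
>         // ... add the corresponding metrics to the newly created sensor ...
>     }
>     return sensor;
> }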



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies

2020-01-24 Thread David Jacot
Hi all,

The vote has passed with +5 binding votes (Jason Gustafson, David Arthur,
Gwen Shapira,
Guozhang Wang, Harsha Chintalapani) and +2 non-binding votes (Eno
Thereska, Satish Duggana).

Thanks to everyone!

Best,
David

On Thu, Jan 23, 2020 at 10:46 PM Satish Duggana 
wrote:

> +1 (non-binding)
>
> On Fri, Jan 24, 2020 at 11:10 AM Harsha Chintalapani 
> wrote:
> >
> > +1 ( binding). Much needed!
> > -Harsha
> >
> >
> > On Thu, Jan 23, 2020 at 7:17 PM, Guozhang Wang 
> wrote:
> >
> > > +1 (binding)
> > >
> > > On Thu, Jan 23, 2020 at 1:55 PM Guozhang Wang 
> wrote:
> > >
> > > Yeah that makes sense, it is a good-to-have if we can push through
> this in
> > > 2.5 but if we do not have bandwidth that's fine too :)
> > >
> > > Guozhang
> > >
> > > On Thu, Jan 23, 2020 at 1:40 PM David Jacot 
> wrote:
> > >
> > > Hi Guozhang,
> > >
> > > Thank you for your input.
> > >
> > > 1) You're right. I've put it there due to the version bump only. I'll
> make
> > > it clearer.
> > >
> > > 2) I'd rather prefer to keep the scope as it is because 1) that field
> is
> > > not related to
> > > the problem that we are solving here and 2) I am not sure that I will
> have
> > > the
> > > bandwidth to do this before the feature freeze. The PR is already
> ready.
> > > That being
> > > said, as the addition of that field is part of KIP-429 and KIP-429 has
> > > already been
> > > accepted, we could give it a shot to avoid having to bump the version
> > > twice. I could
> > > try putting together a PR before the feature freeze but without
> guarantee.
> > > Does that
> > > make sense?
> > >
> > > David
> > >
> > > On Thu, Jan 23, 2020 at 9:44 AM Guozhang Wang 
> wrote:
> > >
> > > Hello David,
> > >
> > > Thanks for the KIP! I have read through the proposal and had one minor
> > >
> > > and
> > >
> > > one meta comment. But overall it looks good to me!
> > >
> > > 1) The JoinGroupRequest format does not have any new fields proposed,
> > >
> > > so we
> > >
> > > could either clarify that it is listed here but without modifications
> > >
> > > (only
> > >
> > > version bumps) or just remove it from the wiki.
> > >
> > > 2) Could we consider adding a "protocol version" to allow brokers to
> > >
> > > select
> > >
> > > the leader with the highest version? This thought is brought up in
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-429%3A+Kafka+Consumer+Incremental+Rebalance+Protocol#KIP-429:KafkaConsumerIncrementalRebalanceProtocol-LookingintotheFuture:AssignorVersion
> > >
> > > .
> > > I'm fine with keeping this KIP's scope as is, just wondering if you
> feel
> > > comfortable piggy-backing this change as well if we are going to bump
> up
> > > the JoinGroupReq/Response anyways.
> > >
> > > Guozhang
> > >
> > > On Wed, Jan 22, 2020 at 9:10 AM Eno Thereska 
> > > wrote:
> > >
> > > This is awesome! +1 (non binding)
> > > Eno
> > >
> > > On Tue, Jan 21, 2020 at 10:00 PM Gwen Shapira 
> > >
> > > wrote:
> > >
> > > Thank you for the KIP. Awesomely cloud-native improvement :)
> > >
> > > +1 (binding)
> > >
> > > On Tue, Jan 21, 2020, 9:35 AM David Jacot 
> > >
> > > wrote:
> > >
> > > Hi all,
> > >
> > > I would like to start a vote on KIP-559: Make the Kafka Protocol
> > >
> > > Friendlier
> > >
> > > with L7 Proxies.
> > >
> > > The KIP is here:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-559%3A+Make+the+Kafka+Protocol+Friendlier+with+L7+Proxies
> > >
> > > Thanks,
> > > David
> > >
> > > --
> > > -- Guozhang
> > >
> > > --
> > > -- Guozhang
> > >
> > > --
> > > -- Guozhang
> > >
>


[jira] [Created] (KAFKA-9474) Kafka RPC protocol should support type 'double'

2020-01-24 Thread Brian Byrne (Jira)
Brian Byrne created KAFKA-9474:
--

 Summary: Kafka RPC protocol should support type 'double'
 Key: KAFKA-9474
 URL: https://issues.apache.org/jira/browse/KAFKA-9474
 Project: Kafka
  Issue Type: Improvement
Reporter: Brian Byrne
Assignee: Brian Byrne


Should be fairly straightforward. Useful for KIP-546.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9462) Correct exception message in DistributedHerder

2020-01-24 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9462.
--
  Reviewer: Randall Hauch
Resolution: Fixed

Thanks, [~yuzhih...@gmail.com]. Merged to the `trunk` branch. IMO backporting 
is not really warranted since this is a very minor change to an exception 
message.

> Correct exception message in DistributedHerder
> --
>
> Key: KAFKA-9462
> URL: https://issues.apache.org/jira/browse/KAFKA-9462
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Trivial
> Fix For: 2.5.0
>
>
> There are a few exception messages in DistributedHerder which were copied 
> from other exception messages.
> This task corrects the messages to reflect the actual conditions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9473) A large number of core system tests failing due to Kafka server failed to start on trunk

2020-01-24 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9473:
--

 Summary: A large number of core system tests failing due to Kafka 
server failed to start on trunk
 Key: KAFKA-9473
 URL: https://issues.apache.org/jira/browse/KAFKA-9473
 Project: Kafka
  Issue Type: Bug
Reporter: Boyang Chen


By running a full set of core system tests, we detected 38/166 test failures 
which are due to 

`FAIL: Kafka server didn't finish startup in 60 seconds`

This needs further investigation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7317) Use collections subscription for main consumer to reduce metadata

2020-01-24 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-7317.

Resolution: Fixed

> Use collections subscription for main consumer to reduce metadata
> -
>
> Key: KAFKA-7317
> URL: https://issues.apache.org/jira/browse/KAFKA-7317
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Sophie Blee-Goldman
>Priority: Major
> Fix For: 2.5.0
>
>
> In KAFKA-4633 we switched from "collection subscription" to "pattern 
> subscription" for `Consumer#subscribe()` to avoid triggering auto topic 
> creation on the broker. In KAFKA-5291, the metadata request was extended to 
> overwrite the broker config within the request itself. However, this feature 
> is only used in `KafkaAdminClient`. KAFKA-7320 adds this feature for the 
> consumer client, too.
> This ticket proposes to use the new feature within Kafka Streams to allow the 
> usage of collection-based subscription in the consumer and admin clients, to 
> reduce the metadata response size, which can be very large for a large number 
> of partitions in the cluster.
> Note that Streams needs to be able to distinguish whether it connects to older 
> brokers that do not support the new metadata request, and still use pattern 
> subscription in that case.
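>
> For reference, the two subscription modes in question (standard consumer API,
> assuming an already-constructed KafkaConsumer named `consumer`; the topic
> names are made up):
>
> // pattern subscription -- the consumer has to fetch metadata for all topics
> consumer.subscribe(Pattern.compile("my-app-.*"));
>
> // collection subscription -- metadata is only requested for the named topics
> consumer.subscribe(Arrays.asList("input-topic", "my-app-repartition"));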



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-546: Add quota-specific APIs to the Admin Client, redux

2020-01-24 Thread Brian Byrne
My apologies, Rajini. My hasty edits omitted a couple spots. I've done a
more thorough scan and should have cleaned up (1) and (2).

For (3), Longs were chosen because that's what the ConfigCommand currently
uses, and because there's no floating-point type in the RPC protocol. Longs
didn't seem to be an issue for bytes-per-second values, and
request_percentage is normalized to [0-100], but you're right that
extensions might require double values.

To make Double compatible with the RPC protocol, we'd need to serialize the
value into a String, and then validate the value on the receiving end.
Would that be acceptable?
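
For concreteness, something along these lines (sketch only):

// encode on the client: the RPC field is a String
static String encodeQuotaValue(double value) {
    return Double.toString(value);
}

// decode and validate on the broker
static double decodeQuotaValue(String raw) {
    try {
        double value = Double.parseDouble(raw);
        if (Double.isNaN(value) || Double.isInfinite(value) || value < 0.0)
            throw new IllegalArgumentException("Invalid quota value: " + raw);
        return value;
    } catch (NumberFormatException e) {
        throw new IllegalArgumentException("Quota value is not a number: " + raw, e);
    }
}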

Thanks,
Brian

On Fri, Jan 24, 2020 at 11:07 AM Rajini Sivaram 
wrote:

> Thanks Brian. Looks good.
>
> Just a few minor points:
>
> 1) We can remove *public ResolveClientQuotasOptions
> setOmitOverriddenValues(boolean omitOverriddenValues); *
> 2) Under ClientQuotasCommand, the three items are List/Describe/Alter,
> rename to match the new naming for operations?
> 3) Request quota configs are doubles rather than long. And for
> ClientQuotaCallback API, we used doubles everywhere. Wasn't sure if we
> deliberately chose Longs for this API. if so, we should mention why under
> Rejected Alternatives. We actually use request quotas < 1 in integration
> tests to ensure we can throttle easily.
>
>
>
> On Fri, Jan 24, 2020 at 5:28 PM Brian Byrne  wrote:
>
> > Thanks again, Rajini,
> >
> > Units will have to be implemented on a per-config basis, then. I've
> removed
> > all language reference to units and replaced QuotaKey -> String (config
> > name). I've also renamed DescribeEffective -> Resolve, and replaced
> --list
> > with --describe, and --describe to --resolve to be consistent with the
> > config command and clear about what functionality is "new".
> >
> > Thanks,
> > Brian
> >
> > On Fri, Jan 24, 2020 at 2:27 AM Rajini Sivaram 
> > wrote:
> >
> > > Hi Brian,
> > >
> > > Thanks for the responses.
> > >
> > > 4) Yes, agree that it would be simpler to leave units out of the
> initial
> > > design. We currently have units that are interpreted by the
> configurable
> > > callback. The default callback interprets the value as
> > > per-broker-bytes-per-second and per-broker-percentage-cores. But
> > callbacks
> > > using partition-based throughput quotas for example would interpret the
> > > value as cluster-wide-bytes-per-second. We could update callbacks to
> work
> > > with units, but as you said, it may be better to leave it out initially
> > and
> > > address later.
> > >
> > >
> > >
> > > On Thu, Jan 23, 2020 at 6:29 PM Brian Byrne 
> wrote:
> > >
> > > > Thanks Rajini,
> > > >
> > > > 1) Good catch, fixed.
> > > >
> > > > 2) You're right. We'd need to extend ClientQuotaCallback#quotaLimit
> or
> > > add
> > > > an alternate function. For the sake of an initial implementation, I'm
> > > going
> > > > to remove '--show-overridden', and a subsequent KIP will have to
> > propose
> > > an
> > > > extension to ClientQuotaCallback to return more detailed information.
> > > >
> > > > 3) You're correct. I've removed the default.
> > > >
> > > > 4) The idea of the first iteration is to be compatible with the existing
> > > API,
> > > > so no modification to start. The APIs should be kept consistent. If a
> > > user
> > > > wants to add custom functionality, say an entity type, they'll need
> to
> > > > update their ConfigEntityType any way, and the quotas APIs are meant
> to
> > > > handle that gracefully by accepting a String which can be propagated.
> > > >
> > > > The catch is 'units'. Part of the reason for having a default unit
> was
> > > for
> > > > backwards compatibility, but maybe it's best to leave units out of
> the
> > > > initial design. This might lead to adding more configuration entries,
> > but
> > > > it's also the most flexible option. Thoughts?
> > > >
> > > > Thanks,
> > > > Brian
> > > >
> > > >
> > > > On Thu, Jan 23, 2020 at 4:57 AM Rajini Sivaram <
> > rajinisiva...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi Brian,
> > > > >
> > > > > Thanks for the KIP. Looks good, hope we finally get this in!
> > > > >
> > > > > A few comments:
> > > > >
> > > > > 1) All the Admin interface methods seem to be using method names
> > > starting
> > > > > with an upper-case letter, should be lower-case to follow
> > conventions.
> > > > > 2) Effective quotas returns not only the actual effective quota,
> but
> > > also
> > > > > overridden values. I don't think this works with custom quota
> > > callbacks.
> > > > > 3) KIP says that all quotas are currently bytes-per-second and we
> > will
> > > > use
> > > > > RATE_BPS as the default. Request quotas are a percentage. So this
> > > doesn't
> > > > > quite work. We also need to consider how this works with custom
> quota
> > > > > callbacks. Can custom quota implementations define their own units?
> > > > > 4) We seem to be defining a new set of quota-related classes e.g.
> for
> > > > quota
> > > > > types, but we haven't considered what we do with 

[jira] [Resolved] (KAFKA-8821) Avoid pattern subscription to allow for stricter ACL settings

2020-01-24 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8821.

Resolution: Fixed

Resolved via [https://github.com/apache/kafka/pull/7969]

> Avoid pattern subscription to allow for stricter ACL settings
> -
>
> Key: KAFKA-8821
> URL: https://issues.apache.org/jira/browse/KAFKA-8821
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Sophie Blee-Goldman
>Priority: Minor
> Fix For: 2.5.0
>
>
> To avoid triggering auto topic creation (if `auto.create.topic.enable=true` 
> on the brokers), Kafka Streams uses consumer pattern subscription. For this 
> case, the consumer requests all metadata from the brokers and does client 
> side filtering.
> However, if users want to set ACLs to restrict a Kafka Streams application, 
> this may result in broker-side ERROR logs that some metadata cannot be 
> provided. The only way to avoid those broker-side ERROR logs is to grant 
> the corresponding permissions.
> As of the 2.3 release it's possible to disable auto topic creation client side 
> (via https://issues.apache.org/jira/browse/KAFKA-7320). Kafka Streams should 
> use this new feature (note that broker version 0.11 is required) to allow 
> users to set strict ACLs without getting flooded with ERROR logs on the 
> broker.
> The proposal is that by default Kafka Streams disables auto-topic create 
> client side (optimistically) and uses regular subscription (not pattern 
> subscription). If an older broker is used, users need to explicitly enable 
> `allow.auto.create.topic` client side. If we detect this setting, we switch 
> back to pattern based subscription.
> If users don't enable auto topic create client side and run with an older 
> broker, we would just rethrow the exception to the user, adding some context 
> information on how to fix the issue. 
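>
> Sketch of the client-side override mentioned above (the consumer config added 
> by KAFKA-7320 / KIP-361; the property name "allow.auto.create.topics" and the 
> "main.consumer." Streams prefix are assumptions here, shown for illustration):
>
> Properties props = new Properties();
> props.put("application.id", "my-app");
> props.put("bootstrap.servers", "broker:9092");
> // opt back in to client-side auto topic creation (and thus pattern
> // subscription) when running against an older broker
> props.put("main.consumer.allow.auto.create.topics", "true");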



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-546: Add quota-specific APIs to the Admin Client, redux

2020-01-24 Thread Rajini Sivaram
Thanks Brian. Looks good.

Just a few minor points:

1) We can remove *public ResolveClientQuotasOptions
setOmitOverriddenValues(boolean omitOverriddenValues); *
2) Under ClientQuotasCommand, the three items are List/Describe/Alter,
rename to match the new naming for operations?
3) Request quota configs are doubles rather than longs. And for the
ClientQuotaCallback API, we used doubles everywhere. Wasn't sure if we
deliberately chose Longs for this API; if so, we should mention why under
Rejected Alternatives. We actually use request quotas < 1 in integration
tests to ensure we can throttle easily.



On Fri, Jan 24, 2020 at 5:28 PM Brian Byrne  wrote:

> Thanks again, Rajini,
>
> Units will have to be implemented on a per-config basis, then. I've removed
> all language reference to units and replaced QuotaKey -> String (config
> name). I've also renamed DescribeEffective -> Resolve, and replaced --list
> with --describe, and --describe to --resolve to be consistent with the
> config command and clear about what functionality is "new".
>
> Thanks,
> Brian
>
> On Fri, Jan 24, 2020 at 2:27 AM Rajini Sivaram 
> wrote:
>
> > Hi Brian,
> >
> > Thanks for the responses.
> >
> > 4) Yes, agree that it would be simpler to leave units out of the initial
> > design. We currently have units that are interpreted by the configurable
> > callback. The default callback interprets the value as
> > per-broker-bytes-per-second and per-broker-percentage-cores. But
> callbacks
> > using partition-based throughput quotas for example would interpret the
> > value as cluster-wide-bytes-per-second. We could update callbacks to work
> > with units, but as you said, it may be better to leave it out initially
> and
> > address later.
> >
> >
> >
> > On Thu, Jan 23, 2020 at 6:29 PM Brian Byrne  wrote:
> >
> > > Thanks Rajini,
> > >
> > > 1) Good catch, fixed.
> > >
> > > 2) You're right. We'd need to extend ClientQuotaCallback#quotaLimit or
> > add
> > > an alternate function. For the sake of an initial implementation, I'm
> > going
> > > to remove '--show-overridden', and a subsequent KIP will have to
> propose
> > an
> > > extension to ClientQuotaCallback to return more detailed information.
> > >
> > > 3) You're correct. I've removed the default.
> > >
> > > 4) The idea of the first iteration is to be compatible with the existing
> > API,
> > > so no modification to start. The APIs should be kept consistent. If a
> > user
> > > wants to add custom functionality, say an entity type, they'll need to
> > > update their ConfigEntityType any way, and the quotas APIs are meant to
> > > handle that gracefully by accepting a String which can be propagated.
> > >
> > > The catch is 'units'. Part of the reason for having a default unit was
> > for
> > > backwards compatibility, but maybe it's best to leave units out of the
> > > initial design. This might lead to adding more configuration entries,
> but
> > > it's also the most flexible option. Thoughts?
> > >
> > > Thanks,
> > > Brian
> > >
> > >
> > > On Thu, Jan 23, 2020 at 4:57 AM Rajini Sivaram <
> rajinisiva...@gmail.com>
> > > wrote:
> > >
> > > > Hi Brian,
> > > >
> > > > Thanks for the KIP. Looks good, hope we finally get this in!
> > > >
> > > > A few comments:
> > > >
> > > > 1) All the Admin interface methods seem to be using method names
> > starting
> > > > with an upper-case letter, should be lower-case to follow
> conventions.
> > > > 2) Effective quotas returns not only the actual effective quota, but
> > also
> > > > overridden values. I don't think this works with custom quota
> > callbacks.
> > > > 3) KIP says that all quotas are currently bytes-per-second and we
> will
> > > use
> > > > RATE_BPS as the default. Request quotas are a percentage. So this
> > doesn't
> > > > quite work. We also need to consider how this works with custom quota
> > > > callbacks. Can custom quota implementations define their own units?
> > > > 4) We seem to be defining a new set of quota-related classes e.g. for
> > > quota
> > > > types, but we haven't considered what we do with the existing API in
> > > > org.apache.kafka.server.quota. Should we keep these consistent? Are
> we
> > > > planning to deprecate some of those?
> > > >
> > > >
> > > > Regards,
> > > >
> > > > Rajini
> > > >
> > > >
> > > > On Wed, Jan 22, 2020 at 7:28 PM Brian Byrne 
> > wrote:
> > > >
> > > > > Hi Jason,
> > > > >
> > > > > I agree on (1). It was Colin's original suggestion, too, but he had
> > > > changed
> > > > > his mind in preference for enums. Strings are the more generic way
> > for
> > > > now,
> > > > > so hopefully Colin can share his thinking when he's back. The
> > > QuotaFilter
> > > > > usage was an error, this has been corrected.
> > > > >
> > > > > For (2), the config-centric mode is what we have today in
> > > CommandConfig:
> > > > > reading/altering the configuration as it's described. The
> > > > > DescribeEffectiveClientQuotas would be resolving the various config
> > > > entries
> > > > > to see what 

Re: [KAFKA-557] Add emit on change support for Kafka Streams

2020-01-24 Thread Matthias J. Sax
IMHO, the question about semantics depends on the use case, in
particular on the origin of a KTable.

If there is a changelog topic that one reads directly into a KTable,
emit-on-change does actually make sense, because the timestamp indicates
_when_ the update was _effective_. For this case, it is semantically
sound to _not_ update the timestamp in the store, because the second
update is actually idempotent and advancing the timestamp is not ideal
(one could even consider it to be wrong to advance the timestamp)
because the "valid time" of the record pair did not change.

This reasoning also applies to KTable-KTable joins.

However, if the KTable is the result of an aggregation, I think
emit-on-update is more natural, because the timestamp reflects the
_last_ time (i.e., the highest timestamp) of all input records that contributed
to the result. Hence, updating the timestamp and emitting a new record
actually sounds correct to me. This applies to windowed and non-windowed
aggregations IMHO.

However, considering the argument that the timestamp should not be
update in the first case in the store to begin with, both cases are
actually the same, and both can be modeled as emit-on-change: if a
`table()` operator does not update the timestamp if the value does not
change, there is _no_ change and thus nothing is emitted. At the same
time, if an aggregation operator does update the timestamp (even if the
value does not change) there _is_ a change and we emit.
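
To make this concrete, the check boils down to something like this (a plain
Java sketch of the rule, not the actual Streams code):

// emit only if the stored (value, timestamp) pair actually changed;
// oldValue == null means there is no prior result for this key
static <V> boolean shouldEmit(final V oldValue, final long oldTimestamp,
                              final V newValue, final long newTimestamp) {
    if (oldValue == null)
        return true;
    return !newValue.equals(oldValue) || newTimestamp != oldTimestamp;
}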

Note that handling out-of-order data for aggregations would also work
seamlessly with this approach -- for out-of-order records, the timestamp
never changes, and thus we only emit if the result itself changes.

Therefore, I would argue that we might not even need any config, because
the emit-on-change behavior is just correct and reduces the downstream
load, while our current behavior is not ideal (even if it's also correct).

Thoughts?

-Matthias

On 1/24/20 9:37 AM, John Roesler wrote:
> Hi Bruno,
> 
> Thanks for that idea. I hadn't considered that
> option before, and it does seem like that would be
> the right place to put it if we think it might be
> semantically important to control on a
> table-by-table basis.
> 
> I had been thinking of it less semantically and
> more practically. In the context of a large
> topology, or more generally, a large software
> system that contains many topologies and other
> event-driven systems, each no-op result becomes an
> input that is destined to itself become a no-op
> result, and so on, all the way through the system.
> Thus, a single pointless processing result becomes
> amplified into a large number of pointless
> computations, cache perturbations, and network 
> and disk I/O operations. If you also consider
> operations with fan-out implications, like
> branching or foreign-key joins, the wasted
> resources are amplified not just in proportion to
> the size of the system, but the size of the system
> times the average fan-out (to the power of the
> number of fan-out operations on the path(s)
> through the system).
> 
> In my time operating such systems, I've observed
> these effects to be very real, and actually, the
> system and use case doesn't have to be very large
> before the amplification poses an existential
> threat to the system as a whole.
> 
> This is the basis of my advocating for a simple
> behavior change, rather than an opt-in config of
> any kind. It seems like Streams should "do the
> right thing" for the majority use case. My theory
> (which may be wrong) is that the majority use case
> is more like "relational queries" than "CEP
> queries". Even if you were doing some
> event-sensitive computation, wouldn't you do them
> as Stream operations (where this feature is
> inapplicable anyway)?
> 
> In keeping with the "practical" perspective, I
> suggested the opt-out config only in the (I think
> unlikely) event that filtering out pointless
> updates actually harms performance. I'd also be
> perfectly fine without the opt-out config. I
> really think that (because of the timestamp
> semantics work already underway), we're already
> pre-fetching the prior result most of the time, so
> there would actually be very little extra I/O
> involved in implementing emit-on-change.
> 
> However, we should consider whether my experience
> is likely to be general. Do you have some use 
> case in mind for which you'd actually want some
> KTable results to be emit-on-update for semantic
> reasons?
> 
> Thanks,
> -John
> 
> 
> On Fri, Jan 24, 2020, at 11:02, Bruno Cadonna wrote:
>> Hi Richard,
>>
>> Thank you for the KIP.
>>
>> I agree with John that we should focus on the interface and behavior
>> change in a KIP. We can discuss the implementation later.
>>
>> I am also +1 for the survey.
>>
>> I had a thought about this. Couldn't we consider emit-on-change to be
>> one config of suppress (like `untilWindowCloses`)? What you basically
>> propose is to suppress updates if they do not change the result.
>> 

Re: [KAFKA-557] Add emit on change support for Kafka Streams

2020-01-24 Thread John Roesler
Hi Bruno,

Thanks for that idea. I hadn't considered that
option before, and it does seem like that would be
the right place to put it if we think it might be
semantically important to control on a
table-by-table basis.

I had been thinking of it less semantically and
more practically. In the context of a large
topology, or more generally, a large software
system that contains many topologies and other
event-driven systems, each no-op result becomes an
input that is destined to itself become a no-op
result, and so on, all the way through the system.
Thus, a single pointless processing result becomes
amplified into a large number of pointless
computations, cache perturbations, and network 
and disk I/O operations. If you also consider
operations with fan-out implications, like
branching or foreign-key joins, the wasted
resources are amplified not just in proportion to
the size of the system, but the size of the system
times the average fan-out (to the power of the
number of fan-out operations on the path(s)
through the system).

In my time operating such systems, I've observed
these effects to be very real, and actually, the
system and use case doesn't have to be very large
before the amplification poses an existential
threat to the system as a whole.

This is the basis of my advocating for a simple
behavior change, rather than an opt-in config of
any kind. It seems like Streams should "do the
right thing" for the majority use case. My theory
(which may be wrong) is that the majority use case
is more like "relational queries" than "CEP
queries". Even if you were doing some
event-sensitive computation, wouldn't you do them
as Stream operations (where this feature is
inapplicable anyway)?

In keeping with the "practical" perspective, I
suggested the opt-out config only in the (I think
unlikely) event that filtering out pointless
updates actually harms performance. I'd also be
perfectly fine without the opt-out config. I
really think that (because of the timestamp
semantics work already underway), we're already
pre-fetching the prior result most of the time, so
there would actually be very little extra I/O
involved in implementing emit-on-change.

However, we should consider whether my experience
is likely to be general. Do you have some use 
case in mind for which you'd actually want some
KTable results to be emit-on-update for semantic
reasons?

Thanks,
-John


On Fri, Jan 24, 2020, at 11:02, Bruno Cadonna wrote:
> Hi Richard,
> 
> Thank you for the KIP.
> 
> I agree with John that we should focus on the interface and behavior
> change in a KIP. We can discuss the implementation later.
> 
> I am also +1 for the survey.
> 
> I had a thought about this. Couldn't we consider emit-on-change to be
> one config of suppress (like `untilWindowCloses`)? What you basically
> propose is to suppress updates if they do not change the result.
> Considering emit on change as a flavour of suppress would be more
> flexible because it would specify the behavior locally for a KTable
> instead of globally for all KTables. Additionally, specifying the
> behavior in one place instead of multiple places feels more intuitive
> and consistent to me.
> 
> Best,
> Bruno
> 
> On Fri, Jan 24, 2020 at 7:49 AM John Roesler  wrote:
> >
> > Hi Richard,
> >
> > Thanks for picking this up! I know of at least one large community member
> > for which this feature is absolutely essential.
> >
> > If I understand your two options, it seems like the proposal is to implement
> > it as a behavior change regardless, and the question is whether to provide
> > an opt-out config or not.
> >
> > Given that any implementation of this feature would have some performance
> > impact under some workloads, and also that we don't know if anyone really
> > depends on emit-on-update time semantics, it seems like we should propose
> > to add an opt-out config. Can you update the KIP to mention the exact
> > config key and value(s) you'd propose?
> >
> > Just to move the discussion forward, maybe something like:
> > emit.on := change|update
> > with the new default being "change"
> >
> > Thanks for pointing out the timestamp issue in particular. I agree that if
> > we discard the latter update as a no-op, then we also have to discard its
> > timestamp (obviously, we don't forward the timestamp update, as that's
> > the whole point, but we also can't update the timestamp in the store, as
> > the store must remain consistent with what has been emitted).
> >
> > I have to confess that I disagree with your implementation proposal, but
> > it's also not necessary to discuss implementation in the KIP. Maybe it would
> > be less controversial if you just drop that section for now, so that the KIP
> > discussion can focus on the behavior change and config.
> >
> > Just for reference, there is some research into this domain. For example,
> > see the "Report" section (3.2.3) of the SECRET paper:
> > http://people.csail.mit.edu/tatbul/publications/maxstream_vldb10.pdf
> 

Re: [DISCUSS] KIP-546: Add quota-specific APIs to the Admin Client, redux

2020-01-24 Thread Brian Byrne
Thanks again, Rajini,

Units will have to be implemented on a per-config basis, then. I've removed
all language referring to units and replaced QuotaKey -> String (config
name). I've also renamed DescribeEffective -> Resolve, replaced --list with
--describe, and replaced --describe with --resolve, to be consistent with the
config command and to make clear which functionality is "new".

Thanks,
Brian

On Fri, Jan 24, 2020 at 2:27 AM Rajini Sivaram 
wrote:

> Hi Brian,
>
> Thanks for the responses.
>
> 4) Yes, agree that it would be simpler to leave units out of the initial
> design. We currently have units that are interpreted by the configurable
> callback. The default callback interprets the value as
> per-broker-bytes-per-second and per-broker-percentage-cores. But callbacks
> using partition-based throughput quotas for example would interpret the
> value as cluster-wide-bytes-per-second. We could update callbacks to work
> with units, but as you said, it may be better to leave it out initially and
> address later.
>
>
>
> On Thu, Jan 23, 2020 at 6:29 PM Brian Byrne  wrote:
>
> > Thanks Rajini,
> >
> > 1) Good catch, fixed.
> >
> > 2) You're right. We'd need to extend ClientQuotaCallback#quotaLimit or
> add
> > an alternate function. For the sake of an initial implementation, I'm
> going
> > to remove '--show-overridden', and a subsequent KIP will have to propose
> > an extension to ClientQuotaCallback to return more detailed information.
> >
> > 3) You're correct. I've removed the default.
> >
> > 4) The idea of the first iteration is to be compatible with the existing
> > API,
> > so no modification to start. The APIs should be kept consistent. If a
> user
> > wants to add custom functionality, say an entity type, they'll need to
> > update their ConfigEntityType anyway, and the quotas APIs are meant to
> > handle that gracefully by accepting a String which can be propagated.
> >
> > The catch is 'units'. Part of the reason for having a default unit was
> for
> > backwards compatibility, but maybe it's best to leave units out of the
> > initial design. This might lead to adding more configuration entries, but
> > it's also the most flexible option. Thoughts?
> >
> > Thanks,
> > Brian
> >
> >
> > On Thu, Jan 23, 2020 at 4:57 AM Rajini Sivaram 
> > wrote:
> >
> > > Hi Brian,
> > >
> > > Thanks for the KIP. Looks good, hope we finally get this in!
> > >
> > > A few comments:
> > >
> > > 1) All the Admin interface methods seem to be using method names
> starting
> > > with an upper-case letter; they should be lower-case to follow conventions.
> > > 2) Effective quotas returns not only the actual effective quota, but
> also
> > > overridden values. I don't think this works with custom quota
> callbacks.
> > > 3) KIP says that all quotas are currently bytes-per-second and we will
> > use
> > > RATE_BPS as the default. Request quotas are a percentage. So this
> doesn't
> > > quite work. We also need to consider how this works with custom quota
> > > callbacks. Can custom quota implementations define their own units?
> > > 4) We seem to be defining a new set of quota-related classes e.g. for
> > quota
> > > types, but we haven't considered what we do with the existing API in
> > > org.apache.kafka.server.quota. Should we keep these consistent? Are we
> > > planning to deprecate some of those?
> > >
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > >
> > > On Wed, Jan 22, 2020 at 7:28 PM Brian Byrne 
> wrote:
> > >
> > > > Hi Jason,
> > > >
> > > > I agree on (1). It was Colin's original suggestion, too, but he had
> > > changed
> > > > his mind in preference for enums. Strings are the more generic way
> for
> > > now,
> > > > so hopefully Colin can share his thinking when he's back. The
> > QuotaFilter
> > > > usage was an error, this has been corrected.
> > > >
> > > > For (2), the config-centric mode is what we have today in
> > CommandConfig:
> > > > reading/altering the configuration as it's described. The
> > > > DescribeEffectiveClientQuotas would be resolving the various config
> > > entries
> > > > to see what actually applies to a particular entity. The examples
> are a
> > > > little trivial, but the resolution can become much more complicated
> as
> > > the
> > > > number of config entries grows.
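
To make "resolve" concrete with the quota precedence documented for
user/client-id quotas: for the entity pair (user=alice, client-id=appA), an
entry at /config/users/alice/clients/appA beats
/config/users/alice/clients/<default>, which beats /config/users/alice, which
in turn beats the user-default and client-id-only entries. Describe shows
whichever of those entries exist; resolve walks that chain and reports the
single value that actually applies to (alice, appA).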
> > > >
> > > > List/describe aren't perfect either. Perhaps describe/resolve are a
> > > better
> > > > pair, with DescribeEffectiveClientQuotas -> ResolveClientQuotas?
> > > >
> > > > I appreciate the feedback!
> > > >
> > > > Thanks,
> > > > Brian
> > > >
> > > >
> > > >
> > > > On Tue, Jan 21, 2020 at 12:09 PM Jason Gustafson  >
> > > > wrote:
> > > >
> > > > > Hi Brian,
> > > > >
> > > > > Thanks for the proposal! I have a couple comments/questions:
> > > > >
> > > > > 1. I'm having a hard time understanding the point of
> > > `QuotaEntity.Type`.
> > > > It
> > > > > sounds like this might be just for convenience since the APIs are
> > using
> > > > > string types. If so, I think it's a bit misleading to represent it
> 

Re: [KAFKA-557] Add emit on change support for Kafka Streams

2020-01-24 Thread Bruno Cadonna
Hi Richard,

Thank you for the KIP.

I agree with John that we should focus on the interface and behavior
change in a KIP. We can discuss the implementation later.

I am also +1 for the survey.

I had a thought about this. Couldn't we consider emit-on-change to be
one config of suppress (like `untilWindowCloses`)? What you basically
propose is to suppress updates if they do not change the result.
Considering emit on change as a flavour of suppress would be more
flexible because it would specify the behavior locally for a KTable
instead of globally for all KTables. Additionally, specifying the
behavior in one place instead of multiple places feels more intuitive
and consistent to me.
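
For illustration, here is a rough sketch of that per-KTable opt-in, assuming
a hypothetical Suppressed.untilResultChanges() factory alongside the existing
untilWindowCloses(); no such method exists today, it only shows the shape of
the idea:

    import java.time.Duration;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Suppressed;
    import org.apache.kafka.streams.kstream.TimeWindows;
    import org.apache.kafka.streams.kstream.Windowed;

    public class SuppressFlavourSketch {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            KTable<Windowed<String>, Long> counts = builder
                .stream("clicks", Consumed.with(Serdes.String(), Serdes.String()))
                .groupByKey()
                .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
                .count();
            counts
                // Hypothetical suppress flavour: forward a result only if it
                // differs from the previously emitted result for the same key.
                .suppress(Suppressed.untilResultChanges())
                .toStream()
                .to("click-counts");
        }
    }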

Best,
Bruno

On Fri, Jan 24, 2020 at 7:49 AM John Roesler  wrote:
>
> Hi Richard,
>
> Thanks for picking this up! I know of at least one large community member
> for which this feature is absolutely essential.
>
> If I understand your two options, it seems like the proposal is to implement
> it as a behavior change regardless, and the question is whether to provide
> an opt-out config or not.
>
> Given that any implementation of this feature would have some performance
> impact under some workloads, and also that we don't know if anyone really
> depends on emit-on-update time semantics, it seems like we should propose
> to add an opt-out config. Can you update the KIP to mention the exact
> config key and value(s) you'd propose?
>
> Just to move the discussion forward, maybe something like:
> emit.on := change|update
> with the new default being "change"
>
> Thanks for pointing out the timestamp issue in particular. I agree that if
> we discard the latter update as a no-op, then we also have to discard its
> timestamp (obviously, we don't forward the timestamp update, as that's
> the whole point, but we also can't update the timestamp in the store, as
> the store must remain consistent with what has been emitted).
>
> I have to confess that I disagree with your implementation proposal, but
> it's also not necessary to discuss implementation in the KIP. Maybe it would
> be less controversial if you just drop that section for now, so that the KIP
> discussion can focus on the behavior change and config.
>
> Just for reference, there is some research into this domain. For example,
> see the "Report" section (3.2.3) of the SECRET paper:
> http://people.csail.mit.edu/tatbul/publications/maxstream_vldb10.pdf
>
> It might help to round out the proposal if you take a brief survey of the
> behaviors of other systems, along with pros and cons if any are reported.
>
> Thanks,
> -John
>
>
> On Fri, Jan 10, 2020, at 22:27, Richard Yu wrote:
> > Hi everybody!
> >
> > I'd like to propose a change that we probably should've added for a long
> > time now.
> >
> > The key benefit of this KIP would be reduced traffic in Kafka Streams since
> > a lot of no-op results would no longer be sent downstream.
> > Here is the KIP for reference.
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-557%3A+Add+emit+on+change+support+for+Kafka+Streams
> >
> > Currently, I seek to formalize our approach for this KIP first before we
> > determine concrete API additions / configurations.
> > Some configs might warrant adding, while others are not necessary since
> > adding them would only increase complexity of Kafka Streams.
> >
> > Cheers,
> > Richard
> >


Re: [KAFKA-557] Add emit on change support for Kafka Streams

2020-01-24 Thread John Roesler
Hi Richard,

Thanks for picking this up! I know of at least one large community member 
for which this feature is absolutely essential.

If I understand your two options, it seems like the proposal is to implement
it as a behavior change regardless, and the question is whether to provide
an opt-out config or not.

Given that any implementation of this feature would have some performance
impact under some workloads, and also that we don't know if anyone really
depends on emit-on-update time semantics, it seems like we should propose
to add an opt-out config. Can you update the KIP to mention the exact
config key and value(s) you'd propose?

Just to move the discussion forward, maybe something like:
emit.on := change|update
with the new default being "change"
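
For illustration only, a minimal sketch of how that opt-out might look in
application code, assuming a hypothetical "emit.on" key with exactly those
two values (this is not an existing or final config):

    import java.util.Properties;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    public class EmitOnChangeOptOutSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "emit-on-change-demo");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Hypothetical key/value from this discussion; "change" would be
            // the new default, "update" restores the old behavior.
            props.put("emit.on", "update");

            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("input-topic").to("output-topic");
            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
        }
    }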

Thanks for pointing out the timestamp issue in particular. I agree that if
we discard the latter update as a no-op, then we also have to discard its
timestamp (obviously, we don't forward the timestamp update, as that's
the whole point, but we also can't update the timestamp in the store, as
the store must remain consistent with what has been emitted).

I have to confess that I disagree with your implementation proposal, but
it's also not necessary to discuss implementation in the KIP. Maybe it would
be less controversial if you just drop that section for now, so that the KIP
discussion can focus on the behavior change and config.

Just for reference, there is some research into this domain. For example,
see the "Report" section (3.2.3) of the SECRET paper: 
http://people.csail.mit.edu/tatbul/publications/maxstream_vldb10.pdf

It might help to round out the proposal if you take a brief survey of the
behaviors of other systems, along with pros and cons if any are reported.

Thanks,
-John


On Fri, Jan 10, 2020, at 22:27, Richard Yu wrote:
> Hi everybody!
> 
> I'd like to propose a change that we probably should've added for a long
> time now.
> 
> The key benefit of this KIP would be reduced traffic in Kafka Streams since
> a lot of no-op results would no longer be sent downstream.
> Here is the KIP for reference.
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-557%3A+Add+emit+on+change+support+for+Kafka+Streams
> 
> Currently, I seek to formalize our approach for this KIP first before we
> determine concrete API additions / configurations.
> Some configs might warrant adding, while others are not necessary since
> adding them would only increase complexity of Kafka Streams.
> 
> Cheers,
> Richard
>


[jira] [Resolved] (KAFKA-9471) Throw exception for DEAD StreamThread.State

2020-01-24 Thread Ted Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved KAFKA-9471.
---
Resolution: Duplicate

> Throw exception for DEAD StreamThread.State
> ---
>
> Key: KAFKA-9471
> URL: https://issues.apache.org/jira/browse/KAFKA-9471
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>
> In StreamThreadStateStoreProvider we have:
> {code}
> if (streamThread.state() == StreamThread.State.DEAD) {
>     return Collections.emptyList();
> }
> {code}
> If user cannot retry anymore, we should throw exception which is handled in 
> the else block.
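
A minimal sketch of the proposed change (illustrative only; the exception type
is an assumption rather than a committed fix, and this ticket was resolved as
a duplicate):

    // Inside StreamThreadStateStoreProvider (assumes
    // org.apache.kafka.streams.errors.InvalidStateStoreException is imported):
    if (streamThread.state() == StreamThread.State.DEAD) {
        // Surface the terminal state instead of silently returning no stores,
        // since the caller cannot usefully retry once the thread is DEAD.
        throw new InvalidStateStoreException(
            "Cannot get state stores because the stream thread is DEAD");
    }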



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-567: Kafka Cluster Audit

2020-01-24 Thread Alexander Dunayevsky
Hello Igor,

Thanks for your KIP.
It would be great to adopt this functionality and get the best out of
tracking cluster activity.

+1 vote from me

Cheers,
Alex Dunayevsky


On Fri, 24 Jan 2020, 15:35 Игорь Мартемьянов,  wrote:

> Motivation:
>
>
> *Most businesses need the ability to obtain audit information when someone
> changes the cluster configuration (for example, the creation, deletion,
> modification, or description of any topic or ACLs). We may add this ability.
> Since audit requirements are so broad, it is impractical to support all of
> them. Hence we have to provide a way for users to plug in components that
> help achieve the required capabilities.*
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-567%3A+Kafka+Cluster+Audit
>
>
Fri, Jan 24, 2020, 17:29 Игорь Мартемьянов :
>
> > Hello there.
> > Please review this KIP.
> > Thanks.
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #4181

2020-01-24 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9181; Maintain clean separation between local and group


--
[...truncated 5.73 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task 

Re: KIP-560 Discuss

2020-01-24 Thread John Roesler
Hi all, thanks for the explanation. I was also not sure how the KIP could be
implemented.

Now that it does seem plausible, my only feedback is that the command line
option could align better with the existing one. That is, the existing option
is called "--input-topics", so it seems like the new one should be called
"--all-input-topics".

Thanks,
John

On Fri, Jan 24, 2020, at 01:42, Boyang Chen wrote:
> Thanks Sophie for the explanation! I read Sang's PR and basically he did
> exactly what you proposed (check it here
>  in case I'm wrong).
> 
> I think Sophie's response answers Gwen's question already, while in the
> meantime for a KIP itself we are not required to mention all the internal
> details about how to make the changes happen (like how to actually get the
> external topics), considering the change scope is pretty small as well. But
> again, it would do no harm to mention it in the Proposed Changes section
> specifically, so that people won't get confused about how.
> 
> 
> On Thu, Jan 23, 2020 at 8:26 PM Sophie Blee-Goldman 
> wrote:
> 
> > Hi all,
> >
> > I think what Gwen is trying to ask (correct me if I'm wrong) is how we can
> > infer which topics are associated with
> > Streams from the admin client's topic list. I agree that this doesn't seem
> > possible, since as she pointed out the
> > topics list (or even description) lacks the specific information we need.
> >
> > What we could do instead is use the admin client's
> > `describeConsumerGroups` API to get the information
> > on the Streams app's consumer group specifically -- note that the Streams
> > application.id config is also used
> > as the consumer group id, so each app forms a group to read from the input
> > topics. We could compile a list
> > of these topics just by looking at each member's assignment (and even check
> > for a StreamsPartitionAssignor
> > to verify that this is indeed a Streams app group, if we're being
> > paranoid).
> >
> > The reset tool actually already gets the consumer group description, in
> > order to validate there are no active
> > consumers in the group. We may as well grab the list of topics from it
> > while it's there. Or did you have something
> > else in mind?
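
For illustration, a minimal sketch of that approach with the Admin client
(the group id "my-streams-app" stands in for the application.id; error
handling, internal-topic filtering, and the StreamsPartitionAssignor check
are omitted):

    import java.util.Collections;
    import java.util.Properties;
    import java.util.Set;
    import java.util.TreeSet;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ConsumerGroupDescription;
    import org.apache.kafka.clients.admin.MemberDescription;
    import org.apache.kafka.common.TopicPartition;

    public class StreamsGroupTopicsSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                ConsumerGroupDescription group = admin
                    .describeConsumerGroups(Collections.singletonList("my-streams-app"))
                    .all().get()
                    .get("my-streams-app");
                // Every topic in a member's assignment is one the Streams app
                // consumes (source and repartition topics).
                Set<String> topics = new TreeSet<>();
                for (MemberDescription member : group.members()) {
                    for (TopicPartition tp : member.assignment().topicPartitions()) {
                        topics.add(tp.topic());
                    }
                }
                System.out.println("Topics consumed by the group: " + topics);
            }
        }
    }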
> >
> > On Sat, Jan 18, 2020 at 6:17 PM Sang wn Lee  wrote:
> >
> > > Thank you
> > >
> > > I understand you
> > >
> > > 1. admin client has topic list
> > > 2. applicationId can only have one stream, so it won't be a problem!
> > > 3. For example, --input-topic [reg]
> > > Allowing a regex solves some inconvenience
> > >
> > >
> > > On 2020/01/18 18:15:23, Gwen Shapira  wrote:
> > > > I am not sure I follow. Afaik:
> > > >
> > > > 1. Topics don't include client ID information
> > > > 2. Even if you did, the same ID could be used for topics that are not
> > > Kafka
> > > > Streams input
> > > >
> > > > The regex idea sounds doable, but I'm not sure it solves much?
> > > >
> > > >
> > > > On Sat, Jan 18, 2020, 7:12 AM Sang wn Lee 
> > wrote:
> > > >
> > > > > Thank you
> > > > > Gwen Shapira!
> > > > > We'll add a flag to clear all topics by clientId
> > > > > It is ‘reset-all-external-topics’
> > > > >
> > > > > I also want to use regex on the input topic flag to clear all
> > matching
> > > > > topics.
> > > > >
> > > > > On 2020/01/17 19:29:09, Gwen Shapira  wrote:
> > > > > > Seems like a very nice improvement to me. But I have to admit that I
> > > > > > don't understand how this will work - how could you infer the input
> > > > > > topics?
> > > > > >
> > > > > > On Thu, Jan 16, 2020 at 10:03 AM Sang wn Lee  > >
> > > > > wrote:
> > > > > > >
> > > > > > > Hello,
> > > > > > >
> > > > > > > Starting this thread to discuss KIP-560:
> > > > > > > wiki link :
> > > > > > >
> > > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-560%3A+Auto+infer+external+topic+partitions+in+stream+reset+tool
> > > > > > >
> > > > > > > I'm newbie
> > > > > > > I would like to receive feedback on the following features!
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


Build failed in Jenkins: kafka-trunk-jdk11 #1105

2020-01-24 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9181; Maintain clean separation between local and group


--
[...truncated 2.85 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED


Re: [DISCUSS] KIP-567: Kafka Cluster Audit

2020-01-24 Thread Игорь Мартемьянов
Motivation:


*Most businesses need the ability to obtain audit information when someone
changes the cluster configuration (for example, the creation, deletion,
modification, or description of any topic or ACLs). We may add this ability.
Since audit requirements are so broad, it is impractical to support all of
them. Hence we have to provide a way for users to plug in components that
help achieve the required capabilities.*

https://cwiki.apache.org/confluence/display/KAFKA/KIP-567%3A+Kafka+Cluster+Audit
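
For illustration only, one possible shape of such a pluggable hook; the names
and fields below are invented for this example, and the actual interface, if
any, is whatever KIP-567 ends up defining:

    // Purely illustrative; not part of any Kafka API.
    public interface ClusterAuditor extends AutoCloseable {

        // Called by the broker after an administrative request (e.g. topic or
        // ACL creation/deletion/alteration) has been authorized and processed.
        void audit(AuditEvent event);

        // Hypothetical event carrier: who did what to which resource, and when.
        final class AuditEvent {
            public final String principal;
            public final String operation;    // e.g. "CREATE_TOPIC", "ALTER_ACL"
            public final String resourceName;
            public final long timestampMs;

            public AuditEvent(String principal, String operation,
                              String resourceName, long timestampMs) {
                this.principal = principal;
                this.operation = operation;
                this.resourceName = resourceName;
                this.timestampMs = timestampMs;
            }
        }
    }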


Fri, Jan 24, 2020, 17:29 Игорь Мартемьянов :

> Hello there.
> Please review this KIP.
> Thanks.
>


[DISCUSS] KIP-567: Kafka Cluster Audit

2020-01-24 Thread Игорь Мартемьянов
Hello there.
Please review this KIP.
Thanks.


[jira] [Resolved] (KAFKA-9181) Flaky test kafka.api.SaslGssapiSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe

2020-01-24 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-9181.
---
Fix Version/s: 2.5.0
 Reviewer: Jason Gustafson
   Resolution: Fixed

> Flaky test 
> kafka.api.SaslGssapiSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe
> ---
>
> Key: KAFKA-9181
> URL: https://issues.apache.org/jira/browse/KAFKA-9181
> Project: Kafka
>  Issue Type: Test
>  Components: core
>Reporter: Bill Bejeck
>Assignee: Rajini Sivaram
>Priority: Major
>  Labels: flaky-test, tests
> Fix For: 2.5.0
>
>
> Failed in 
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/26571/testReport/junit/kafka.api/SaslGssapiSslEndToEndAuthorizationTest/testNoConsumeWithoutDescribeAclViaSubscribe/]
>  
> {noformat}
> Error Messageorg.apache.kafka.common.errors.TopicAuthorizationException: Not 
> authorized to access topics: 
> [topic2]Stacktraceorg.apache.kafka.common.errors.TopicAuthorizationException: 
> Not authorized to access topics: [topic2]
> Standard OutputAdding ACLs for resource 
> `ResourcePattern(resourceType=CLUSTER, name=kafka-cluster, 
> patternType=LITERAL)`: 
>   (principal=User:kafka, host=*, operation=CLUSTER_ACTION, 
> permissionType=ALLOW) 
> Current ACLs for resource `Cluster:LITERAL:kafka-cluster`: 
>   User:kafka has Allow permission for operations: ClusterAction from 
> hosts: * 
> Adding ACLs for resource `ResourcePattern(resourceType=TOPIC, name=*, 
> patternType=LITERAL)`: 
>   (principal=User:kafka, host=*, operation=READ, permissionType=ALLOW) 
> Current ACLs for resource `Topic:LITERAL:*`: 
>   User:kafka has Allow permission for operations: Read from hosts: * 
> Debug is  true storeKey true useTicketCache false useKeyTab true doNotPrompt 
> false ticketCache is null isInitiator true KeyTab is 
> /tmp/kafka6494439724844851846.tmp refreshKrb5Config is false principal is 
> kafka/localh...@example.com tryFirstPass is false useFirstPass is false 
> storePass is false clearPass is false
> principal is kafka/localh...@example.com
> Will use keytab
> Commit Succeeded 
> [2019-11-13 04:43:16,187] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-11-13 04:43:16,191] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
> fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-11-13 04:43:16,384] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
> fetcherId=0] Error for partition e2etopic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-11-13 04:43:16,384] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error for partition e2etopic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> Adding ACLs for resource `ResourcePattern(resourceType=TOPIC, name=e2etopic, 
> patternType=LITERAL)`: 
>   (principal=User:client, host=*, operation=WRITE, permissionType=ALLOW)
>   (principal=User:client, host=*, operation=DESCRIBE, 
> permissionType=ALLOW)
>   (principal=User:client, host=*, operation=CREATE, permissionType=ALLOW) 
> Current ACLs for resource `Topic:LITERAL:e2etopic`: 
>   User:client has Allow permission for operations: Describe from hosts: *
>   User:client has Allow permission for operations: Write from hosts: *
>   User:client has Allow permission for operations: Create from hosts: * 
> Adding ACLs for resource `ResourcePattern(resourceType=TOPIC, name=e2etopic, 
> patternType=LITERAL)`: 
>   (principal=User:client, host=*, operation=READ, permissionType=ALLOW)
>   (principal=User:client, host=*, operation=DESCRIBE, 
> permissionType=ALLOW) 
> Adding ACLs for resource `ResourcePattern(resourceType=GROUP, name=group, 
> patternType=LITERAL)`: 
>   (principal=User:client, host=*, operation=READ, permissionType=ALLOW) 
> Current ACLs for resource `Topic:LITERAL:e2etopic`: 
>   User:client has Allow permission for operations: Read from hosts: *
>   User:client has Allow permission for operations: Describe from hosts: *
>   User:client has Allow permission for operations: Write from hosts: *
>   User:client has 

Re: [DISCUSS] KIP-546: Add quota-specific APIs to the Admin Client, redux

2020-01-24 Thread Rajini Sivaram
Hi Brian,

Thanks for the responses.

4) Yes, agree that it would be simpler to leave units out of the initial
design. We currently have units that are interpreted by the configurable
callback. The default callback interprets the value as
per-broker-bytes-per-second and per-broker-percentage-cores. But callbacks
using partition-based throughput quotas for example would interpret the
value as cluster-wide-bytes-per-second. We could update callbacks to work
with units, but as you said, it may be better to leave it out initially and
address later.



On Thu, Jan 23, 2020 at 6:29 PM Brian Byrne  wrote:

> Thanks Rajini,
>
> 1) Good catch, fixed.
>
> 2) You're right. We'd need to extend ClientQuotaCallback#quotaLimit or add
> an alternate function. For the sake of an initial implementation, I'm going
> to remove '--show-overridden', and a subsequent KIP will have to propose an
> extension to ClientQuotaCallback to return more detailed information.
>
> 3) You're correct. I've removed the default.
>
> 4) The idea of the first iteration is to be compatible with the existing API,
> so no modification to start. The APIs should be kept consistent. If a user
> wants to add custom functionality, say an entity type, they'll need to
> update their ConfigEntityType anyway, and the quotas APIs are meant to
> handle that gracefully by accepting a String which can be propagated.
>
> The catch is 'units'. Part of the reason for having a default unit was for
> backwards compatibility, but maybe it's best to leave units out of the
> initial design. This might lead to adding more configuration entries, but
> it's also the most flexible option. Thoughts?
>
> Thanks,
> Brian
>
>
> On Thu, Jan 23, 2020 at 4:57 AM Rajini Sivaram 
> wrote:
>
> > Hi Brian,
> >
> > Thanks for the KIP. Looks good, hope we finally get this in!
> >
> > A few comments:
> >
> > 1) All the Admin interface methods seem to be using method names starting
> > with an upper-case letter; they should be lower-case to follow conventions.
> > 2) Effective quotas returns not only the actual effective quota, but also
> > overridden values. I don't think this works with custom quota callbacks.
> > 3) KIP says that all quotas are currently bytes-per-second and we will
> use
> > RATE_BPS as the default. Request quotas are a percentage. So this doesn't
> > quite work. We also need to consider how this works with custom quota
> > callbacks. Can custom quota implementations define their own units?
> > 4) We seem to be defining a new set of quota-related classes e.g. for
> quota
> > types, but we haven't considered what we do with the existing API in
> > org.apache.kafka.server.quota. Should we keep these consistent? Are we
> > planning to deprecate some of those?
> >
> >
> > Regards,
> >
> > Rajini
> >
> >
> > On Wed, Jan 22, 2020 at 7:28 PM Brian Byrne  wrote:
> >
> > > Hi Jason,
> > >
> > > I agree on (1). It was Colin's original suggestion, too, but he had
> > changed
> > > his mind in preference for enums. Strings are the more generic way for
> > now,
> > > so hopefully Colin can share his thinking when he's back. The
> QuotaFilter
> > > usage was an error, this has been corrected.
> > >
> > > For (2), the config-centric mode is what we have today in
> CommandConfig:
> > > reading/altering the configuration as it's described. The
> > > DescribeEffectiveClientQuotas would be resolving the various config
> > entries
> > > to see what actually applies to a particular entity. The examples are a
> > > little trivial, but the resolution can become much more complicated as
> > the
> > > number of config entries grows.
> > >
> > > List/describe aren't perfect either. Perhaps describe/resolve are a
> > better
> > > pair, with DescribeEffectiveClientQuotas -> ResolveClientQuotas?
> > >
> > > I appreciate the feedback!
> > >
> > > Thanks,
> > > Brian
> > >
> > >
> > >
> > > On Tue, Jan 21, 2020 at 12:09 PM Jason Gustafson 
> > > wrote:
> > >
> > > > Hi Brian,
> > > >
> > > > Thanks for the proposal! I have a couple comments/questions:
> > > >
> > > > 1. I'm having a hard time understanding the point of
> > `QuotaEntity.Type`.
> > > It
> > > > sounds like this might be just for convenience since the APIs are
> using
> > > > string types. If so, I think it's a bit misleading to represent it as
> > an
> > > > enum. In particular, I cannot see how the UNKNOWN type would be used.
> > The
> > > > `PrincipalBuilder` plugin allows users to provide their own principal
> > > type,
> > > > so I think the API should be usable even for unknown entity types.
> Note
> > > > also that we appear to be relying on this enum in `QuotaFilter`
> class.
> > I
> > > > think that should be changed to just a string?
> > > >
> > > > 2. It's a little annoying that we have two separate APIs to describe
> > > client
> > > > quotas. The names do not really make it clear which API someone
> should
> > > use.
> > > > It might just be a naming problem. In the command utility, it looks
> > like
> > > > you are using --list