Build failed in Jenkins: kafka-2.1-jdk8 #6

2018-10-05 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-7484: fix suppression integration tests (#5748)

--
[...truncated 884.26 KB...]
kafka.cluster.ReplicaTest > testCannotDeleteSegmentsAtOrAboveHighWatermark 
PASSED

kafka.cluster.BrokerEndPointTest > testEndpointFromUri STARTED

kafka.cluster.BrokerEndPointTest > testEndpointFromUri PASSED

kafka.cluster.BrokerEndPointTest > testHashAndEquals STARTED

kafka.cluster.BrokerEndPointTest > testHashAndEquals PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNoRack STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNoRack PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonFutureVersion STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonFutureVersion PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNullRack STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNullRack PASSED

kafka.cluster.BrokerEndPointTest > testBrokerEndpointFromUri STARTED

kafka.cluster.BrokerEndPointTest > testBrokerEndpointFromUri PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV1 STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV1 PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV2 STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV2 PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV3 STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV3 PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.UncleanLeaderElectionTest > 
testTopicUncleanLeaderElectionEnable STARTED

kafka.integration.UncleanLeaderElectionTest > 
testTopicUncleanLeaderElectionEnable PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride SKIPPED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
SKIPPED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.tools.MirrorMakerIntegrationTest > testCommaSeparatedRegex STARTED

kafka.tools.MirrorMakerIntegrationTest > testCommaSeparatedRegex PASSED

kafka.tools.MirrorMakerIntegrationTest > 
testCommitOffsetsRemoveNonExistentTopics STARTED

kafka.tools.MirrorMakerIntegrationTest > 
testCommitOffsetsRemoveNonExistentTopics PASSED

kafka.tools.MirrorMakerIntegrationTest > testCommitOffsetsThrowTimeoutException 
STARTED

kafka.tools.MirrorMakerIntegrationTest > testCommitOffsetsThrowTimeoutException 
PASSED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum 
STARTED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigs STARTED

kafka.tools.ConsoleProducerTest > testValidConfigs PASSED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage STARTED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandlerWithHeaders 
STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandlerWithHeaders 
PASSED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody PASSED

kafka.tools.ConsumerPerformanceTest > testConfigWithUnrecognizedOption STARTED

kafka.tools.ConsumerPerformanceTest > testConfigWithUnrecognizedOption PASSED

kafka.tools.ConsumerPerformanceTest > testConfig STARTED

kafka.tools.ConsumerPerformanceTest > testConfig PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #3072

2018-10-05 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Ensure consumers are closed in DynamicBrokerReconfigurationTest

[ismael] MINOR: Add topic config to PartitionsSpec (#5523)

[jason] KAFKA-7467; NoSuchElementException is raised because controlBatch is

[jason] KAFKA-6914; Set parent classloader of DelegatingClassLoader same as the

[github] KAFKA-7395; Add fencing to replication protocol (KIP-320) (#5661)

[github] HOTFIX: Compilation error in GroupMetadataManagerTest (#5752)

[mjsax] KAFKA-7484: fix suppression integration tests (#5748)

--
[...truncated 2.73 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest 

Jenkins build is back to normal : kafka-trunk-jdk10 #571

2018-10-05 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-2.0-jdk8 #164

2018-10-05 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.1-jdk8 #5

2018-10-05 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Ensure consumers are closed in DynamicBrokerReconfigurationTest

[jason] KAFKA-7467; NoSuchElementException is raised because controlBatch is

[jason] KAFKA-6914; Set parent classloader of DelegatingClassLoader same as the

[jason] KAFKA-7395; Add fencing to replication protocol (KIP-320) (#5661)

[jason] HOTFIX: Compilation error in GroupMetadataManagerTest (#5752)

--
[...truncated 2.52 MB...]

org.apache.kafka.streams.kstream.internals.KStreamWindowReduceTest > 
shouldLogAndMeterOnNullKey PASSED

org.apache.kafka.streams.kstream.internals.KStreamWindowReduceTest > 
shouldLogAndMeterOnExpiredEvent STARTED

org.apache.kafka.streams.kstream.internals.KStreamWindowReduceTest > 
shouldLogAndMeterOnExpiredEvent PASSED

org.apache.kafka.streams.kstream.WindowsTest > retentionTimeMustNotBeNegative 
STARTED

org.apache.kafka.streams.kstream.WindowsTest > retentionTimeMustNotBeNegative 
PASSED

org.apache.kafka.streams.kstream.WindowsTest > numberOfSegmentsMustBeAtLeastTwo 
STARTED

org.apache.kafka.streams.kstream.WindowsTest > numberOfSegmentsMustBeAtLeastTwo 
PASSED

org.apache.kafka.streams.kstream.WindowsTest > shouldSetWindowRetentionTime 
STARTED

org.apache.kafka.streams.kstream.WindowsTest > shouldSetWindowRetentionTime 
PASSED

org.apache.kafka.streams.kstream.WindowsTest > shouldSetNumberOfSegments STARTED

org.apache.kafka.streams.kstream.WindowsTest > shouldSetNumberOfSegments PASSED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
startTimeMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
startTimeMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > shouldThrowOnUntil 
STARTED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > shouldThrowOnUntil 
PASSED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
shouldIncludeRecordsThatHappenedOnWindowStart STARTED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
shouldIncludeRecordsThatHappenedOnWindowStart PASSED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
shouldSetWindowStartTime STARTED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
shouldSetWindowStartTime PASSED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
equalsAndHashcodeShouldBeValidForPositiveCases STARTED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
equalsAndHashcodeShouldBeValidForPositiveCases PASSED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
shouldIncludeRecordsThatHappenedAfterWindowStart STARTED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
shouldIncludeRecordsThatHappenedAfterWindowStart PASSED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
equalsAndHashcodeShouldBeValidForNegativeCases STARTED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
equalsAndHashcodeShouldBeValidForNegativeCases PASSED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
shouldExcludeRecordsThatHappenedBeforeWindowStart STARTED

org.apache.kafka.streams.kstream.UnlimitedWindowsTest > 
shouldExcludeRecordsThatHappenedBeforeWindowStart PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
gracePeriodShouldEnforceBoundaries STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
gracePeriodShouldEnforceBoundaries PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
endTimeShouldNotBeBeforeStart STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
endTimeShouldNotBeBeforeStart PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > validWindows STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > validWindows PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > startTimeShouldNotBeAfterEnd 
STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > startTimeShouldNotBeAfterEnd 
PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
equalsAndHashcodeShouldBeValidForPositiveCases STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
equalsAndHashcodeShouldBeValidForPositiveCases PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
untilShouldSetMaintainDuration STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
untilShouldSetMaintainDuration PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
retentionTimeMustNoBeSmallerThanWindowSize STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
retentionTimeMustNoBeSmallerThanWindowSize PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
equalsAndHashcodeShouldBeValidForNegativeCases STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
equalsAndHashcodeShouldBeValidForNegativeCases PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
timeDifferenceMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
timeDifferenceMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.WindowTest > 

Build failed in Jenkins: kafka-trunk-jdk10 #570

2018-10-05 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Ensure consumers are closed in DynamicBrokerReconfigurationTest

[ismael] MINOR: Add topic config to PartitionsSpec (#5523)

[jason] KAFKA-7467; NoSuchElementException is raised because controlBatch is

[jason] KAFKA-6914; Set parent classloader of DelegatingClassLoader same as the

--
[...truncated 2.25 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED


Jenkins build is back to normal : kafka-1.0-jdk7 #247

2018-10-05 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #3071

2018-10-05 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Clarify usage of stateful processor node (#5740)

[me] MINOR: Switch to use AWS spot instances

--
[...truncated 2.72 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = true] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = true] STARTED


Build failed in Jenkins: kafka-2.0-jdk8 #163

2018-10-05 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: Switch to use AWS spot instances

--
[...truncated 883.22 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldDropEntriesOnEpochBoundaryWhenRemovingLatestEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldDropEntriesOnEpochBoundaryWhenRemovingLatestEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateSavedOffsetWhenOffsetToClearToIsBetweenEpochs STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateSavedOffsetWhenOffsetToClearToIsBetweenEpochs PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecordedAndUndefinedEpochRequested STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecordedAndUndefinedEpochRequested PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldEnforceMonotonicallyIncreasingEpochs STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldEnforceMonotonicallyIncreasingEpochs PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldEnforceOffsetsIncreaseMonotonically STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldEnforceOffsetsIncreaseMonotonically PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldEnforceMonotonicallyIncreasingStartOffsets STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldEnforceMonotonicallyIncreasingStartOffsets PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED


[jira] [Created] (KAFKA-7488) Controller not recovering after disconnection to zookeeper

2018-10-05 Thread Luigi Tagliamonte (JIRA)
Luigi Tagliamonte created KAFKA-7488:


 Summary: Controller not recovering after disconnection to zookeeper
 Key: KAFKA-7488
 URL: https://issues.apache.org/jira/browse/KAFKA-7488
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 1.1.0
Reporter: Luigi Tagliamonte


This issue seems related to https://issues.apache.org/jira/browse/KAFKA-2729, 
which has been marked as resolved.

The issue still exists in Kafka 1.1.

Cluster details:
 * 3-node Kafka cluster running 1.1
 * 3-node ZooKeeper cluster running 3.4.10

Today, while I was replacing a ZooKeeper server, the leader among the brokers 
experienced this issue:
{code:java}
[2018-10-05 21:03:02,799] INFO [GroupMetadataManager brokerId=1] Removed 0 
expired offsets in 0 milliseconds. 
(kafka.coordinator.group.GroupMetadataManager)
[2018-10-05 21:08:20,060] INFO Unable to read additional data from server 
sessionid 0x34663b434985000e, likely server has closed socket, closing socket 
connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:21,001] INFO Opening socket connection to server 
10.48.208.70/10.48.208.70:2181. Will not attempt to authenticate using SASL 
(unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:21,003] WARN Session 0x34663b434985000e for server null, 
unexpected error, closing socket connection and attempting reconnect 
(org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
[2018-10-05 21:08:21,797] INFO Opening socket connection to server 
10.48.210.44/10.48.210.44:2181. Will not attempt to authenticate using SASL 
(unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:21,799] INFO Socket connection established to 
10.48.210.44/10.48.210.44:2181, initiating session 
(org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:21,802] INFO Session establishment complete on server 
10.48.210.44/10.48.210.44:2181, sessionid = 0x34663b434985000e, negotiated 
timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2018-10-05 21:08:28,015] INFO Creating /controller (is it secure? false) 
(kafka.zk.KafkaZkClient)
[2018-10-05 21:08:28,015] INFO Creating /controller (is it secure? false) 
(kafka.zk.KafkaZkClient)
[2018-10-05 21:08:28,025] ERROR Error while creating ephemeral at /controller, 
node already exists and owner '3703712903740981258' does not match current 
session '3775770497779040270' (kafka.zk.KafkaZkClient$CheckedEphemeral)
[2018-10-05 21:08:28,025] ERROR Error while creating ephemeral at /controller, 
node already exists and owner '3703712903740981258' does not match current 
session '3775770497779040270' (kafka.zk.KafkaZkClient$CheckedEphemeral)
[2018-10-05 21:08:28,025] INFO Result of znode creation at /controller is: 
NODEEXISTS (kafka.zk.KafkaZkClient)
[2018-10-05 21:08:28,025] INFO Result of znode creation at /controller is: 
NODEEXISTS (kafka.zk.KafkaZkClient)
[2018-10-05 21:08:42,561] INFO [Partition -store-changelog-7 broker=1] 
Shrinking ISR from 2,1,3 to 1 (kafka.cluster.Partition)
[2018-10-05 21:08:42,561] INFO [Partition -store-changelog-7 broker=1] 
Shrinking ISR from 2,1,3 to 1 (kafka.cluster.Partition)
[2018-10-05 21:08:42,569] INFO [Partition -store-changelog-7 broker=1] 
Cached zkVersion [11] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2018-10-05 21:08:42,569] INFO [Partition -store-changelog-7 broker=1] 
Cached zkVersion [11] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2018-10-05 21:08:42,569] INFO [Partition bycontact_0-19 broker=1] 
Shrinking ISR from 2,1,3 to 1 (kafka.cluster.Partition)
[2018-10-05 21:08:42,569] INFO [Partition bycontact_0-19 broker=1] 
Shrinking ISR from 2,1,3 to 1 (kafka.cluster.Partition)
[2018-10-05 21:08:42,574] INFO [Partition bycontact_0-19 broker=1] 
Cached zkVersion [44] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition)
[2018-10-05 21:08:42,574] INFO [Partition bycontact_0-19 broker=1] 
Cached zkVersion [44] not equal to that in zookeeper, skip updating ISR 
(kafka.cluster.Partition){code}
 The only way to recover was to restart the broker.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-0.10.2-jdk7 #236

2018-10-05 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: Switch to use AWS spot instances

--
[...truncated 1.33 MB...]

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
shouldThrowNullPointerExceptionWhenTryingToAddNullElement STARTED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
shouldThrowNullPointerExceptionWhenTryingToAddNullElement PASSED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
shouldReturnLowestAvailableTimestampFromAllInputs STARTED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
shouldReturnLowestAvailableTimestampFromAllInputs PASSED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
shouldReturnTimestampOfOnlyRecord STARTED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
shouldReturnTimestampOfOnlyRecord PASSED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
shouldReturnLowestAvailableTimestampAfterPreviousLowestRemoved STARTED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
shouldReturnLowestAvailableTimestampAfterPreviousLowestRemoved PASSED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
shouldIgnoreNullRecordOnRemove STARTED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
shouldIgnoreNullRecordOnRemove PASSED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
shouldReturnLastKnownTimestampWhenAllElementsHaveBeenRemoved STARTED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
shouldReturnLastKnownTimestampWhenAllElementsHaveBeenRemoved PASSED

org.apache.kafka.streams.processor.DefaultPartitionGrouperTest > 
shouldComputeGroupingForSingleGroupWithMultipleTopics STARTED

org.apache.kafka.streams.processor.DefaultPartitionGrouperTest > 
shouldComputeGroupingForSingleGroupWithMultipleTopics PASSED

org.apache.kafka.streams.processor.DefaultPartitionGrouperTest > 
shouldNotCreateAnyTasksBecauseOneTopicHasUnknownPartitions STARTED

org.apache.kafka.streams.processor.DefaultPartitionGrouperTest > 
shouldNotCreateAnyTasksBecauseOneTopicHasUnknownPartitions PASSED

org.apache.kafka.streams.processor.DefaultPartitionGrouperTest > 
shouldComputeGroupingForTwoGroups STARTED

org.apache.kafka.streams.processor.DefaultPartitionGrouperTest > 
shouldComputeGroupingForTwoGroups PASSED

org.apache.kafka.streams.processor.LogAndSkipOnInvalidTimestampTest > 
logAndSkipOnInvalidTimestamp STARTED

org.apache.kafka.streams.processor.LogAndSkipOnInvalidTimestampTest > 
logAndSkipOnInvalidTimestamp PASSED

org.apache.kafka.streams.processor.LogAndSkipOnInvalidTimestampTest > 
extractMetadataTimestamp STARTED

org.apache.kafka.streams.processor.LogAndSkipOnInvalidTimestampTest > 
extractMetadataTimestamp PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactAndDeleteSetForWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactAndDeleteSetForWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactForNonWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactForNonWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 

Re: [DISCUSS] KIP-378: Enable Dependency Injection for Kafka Streams handlers

2018-10-05 Thread John Roesler
Hi Wladimir,

Thanks for the KIP!

As I mentioned in the PR discussion, I personally prefer not to recommend
overriding StreamsConfig for this purpose.

It seems like a person wishing to create a DI shim would have to acquire
quite a deep understanding of the class and its usage to figure out what
exactly to override to accomplish their goals without breaking everything.
I'm honestly impressed with the method you came up with to create your
Spring/Streams shim.

I think we can make the path for the next person smoother by going with
something more akin to the ConfiguredStreamsFactory. This is a constrained
interface that tells you exactly what you have to implement to create such
a shim.

A few thoughts:
1. it seems like we can keep all the deprecated constructors deprecated

2. we could add just one additional constructor to each of KafkaStreams and
TopologyTestDriver to still take a Properties, but also your new
ConfiguredStreamsFactory

3. I don't know if I'm sold on the name ConfiguredStreamsFactory, since it
does not produce configured streams. Instead, it produces configured
instances... How about ConfiguredInstanceFactory?

4. if I understand the usage correctly, it's actually a pretty small number
of classes that we actually make via getConfiguredInstance. Offhand, I can
think of the key/value Serdes, the deserialization exception handler, and
the production exception handler.
Perhaps, instead of maintaining a generic "class instantiator", we could
explore a factory interface that just has methods for creating exactly the
kinds of things we need to create. In fact, we already have something like
this: org.apache.kafka.streams.KafkaClientSupplier . Do you think we could
just add some more methods to that interface (and maybe rename it) instead?
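
For concreteness, here is a minimal sketch of the constrained factory from
points 3 and 4 (the exact method set and signatures are illustrative
assumptions on my part, not an agreed API):

```java
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.errors.ProductionExceptionHandler;

// Hypothetical sketch only: one factory method per pluggable object that
// Streams needs, so a DI framework (Spring, Guice) can return pre-wired
// instances instead of Streams reflectively instantiating classes by name.
public interface ConfiguredInstanceFactory {
    Serde<?> keySerde();
    Serde<?> valueSerde();
    DeserializationExceptionHandler deserializationExceptionHandler();
    ProductionExceptionHandler productionExceptionHandler();
}
```

A DI shim would implement each method with a container lookup, and the extra
constructors from point 2 would accept such a factory alongside the Properties.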

Thanks,
-John

On Fri, Oct 5, 2018 at 3:31 PM John Roesler  wrote:

> Hi Guozhang,
>
> I'm going to drop in a little extra context from the preliminary PR
> discussion (https://github.com/apache/kafka/pull/5344).
>
> The issue isn't that it's impossible to use Streams within a Spring app,
> just that the interplay between our style of construction/configuration and
> Spring's is somewhat awkward compared to the normal experience with
> dependency injection.
>
> I'm guessing users of dependency injection would not like the approach you
> offered. I believe it's commonly considered an antipattern when using DI
> frameworks to pass the injector directly into the class being constructed.
> Wladimir has also offered an alternative usage within the current framework
> of injecting pre-constructed dependencies into the Properties, and then
> retrieving and casting them inside the configured class.
>
> It seems like this KIP is more about offering a more elegant interface to
> DI users.
>
> One of the points that Wladimir raised on his PR discussion was the desire
> to configure the classes in a typesafe way in the constructor (thus
> allowing the use of immutable classes).
>
> With this KIP, it would be possible for a DI user to:
> 1. register a Streams-Spring or Streams-Guice (etc) "plugin" (via either
> of the mechanisms he proposed)
> 2. simply make the Serdes, exception handlers, etc, available on the class
> path with the DI annotations
> 3. start the app
>
> There's no need to mess with passing dependencies (or the injector)
> through the properties.
>
> Sorry for "injecting" myself into your discussion, but it took me a while
> in the PR discussion to get to the bottom of the issue, and I wanted to
> spare you the same.
>
> I'll respond separately with my feedback on the KIP.
>
> Thanks,
> -John
>
> On Sun, Sep 30, 2018 at 2:31 PM Guozhang Wang  wrote:
>
>> Hello Wladimir,
>>
>> Thanks for proposing the KIP. I think the injection can currently be done
>> by passing in the key/value pair directly into the properties which can
>> then be accessed from the `ProcessorContext#appConfigs` or
>> `#appConfigsWithPrefix`. For example, when constructing the properties you
>> can:
>>
>> ```
>> props.put(myProp1, myValue1);
>> props.put(myProp2, myValue2);
>> props.put("my_app_context", appContext);
>>
>> KafkaStreams myApp = new KafkaStreams(topology, props);
>>
>> // and then in your processor, at the point where you want to construct
>> // the injected handler:
>>
>> Map<String, Object> appProps = processorContext.appConfigs();
>> ApplicationContext appContext = (ApplicationContext) appProps.get("my_app_context");
>> MyHandler myHandler = applicationContext.getBean(MyHandler.class);
>> ```
>>
>> Does that work for you?
>>
>> Guozhang
>>
>>
>> On Sun, Sep 30, 2018 at 6:56 AM, Dongjin Lee  wrote:
>>
>> > Hi Wladimir,
>> >
>> > Thanks for your great KIP. Let me have a look. And let's discuss this
>> KIP
>> > in depth after the release of 2.1.0. (The committers are very busy for
>> it.)
>> >
>> > Best,
>> > Dongjin
>> >
>> > On Sun, Sep 30, 2018 at 10:49 PM Wladimir Schmidt 
>> > wrote:
>> >
>> > > Dear colleagues,
>> > >
>> > > I am happy to inform you that I have just finished 

[jira] [Resolved] (KAFKA-6880) Zombie replicas must be fenced

2018-10-05 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6880.

   Resolution: Fixed
Fix Version/s: (was: 2.2.0)
   2.1.0

This was fixed in KAFKA-7395.

> Zombie replicas must be fenced
> --
>
> Key: KAFKA-6880
> URL: https://issues.apache.org/jira/browse/KAFKA-6880
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.1.0
>
>
> Let's say we have three replicas for a partition: 1, 2 ,and 3.
> In epoch 0, broker 1 is the leader and writes up to offset 50. Broker 2 
> replicates up to offset 50, but broker 3 is a little behind at offset 40. The 
> high watermark is 40. 
> Suppose that broker 2 has a zk session expiration event, but fails to detect 
> it or fails to reestablish a session (e.g. due to a bug like KAFKA-6879), and 
> it continues fetching from broker 1.
> For whatever reason, broker 3 is elected the leader for epoch 1 beginning at 
> offset 40. Broker 1 detects the leader change and truncates its log to offset 
> 40. Some new data is appended up to offset 60, which is fully replicated to 
> broker 1. Broker 2 continues fetching from broker 1 at offset 50, but gets 
> NOT_LEADER_FOR_PARTITION errors, which are retriable, and hence broker 2 will 
> retry.
> After some time, broker 1 becomes the leader again for epoch 2. Broker 1 
> begins at offset 60. Broker 2 has not exhausted retries and is now able to 
> fetch at offset 50 and append the last 10 records in order to catch up. 
> However, because it did not observe the leader changes, it never saw the 
> need to truncate its log. Hence offsets 40-49 still reflect the uncommitted 
> changes from epoch 0. Neither KIP-101 nor KIP-279 can fix this because the 
> tail of the log is correct.
> The basic problem is that zombie replicas are not fenced properly by the 
> leader epoch. We actually observed a sequence roughly like this after a 
> broker had partially deadlocked from KAFKA-6879. We should add the leader 
> epoch to fetch requests so that the leader can fence the zombie replicas.
> A related problem is that we currently allow such zombie replicas to be added 
> to the ISR even if they are in an offline state. The problem is that the 
> controller will never elect them, so being part of the ISR does not give the 
> availability guarantee that is intended. This would also be fixed by 
> verifying replica leader epoch in fetch requests.
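
For intuition, here is a toy model of the fencing check described above
(illustrative Java loosely following KIP-320; the class and method names are
assumptions, not Kafka's actual broker code):

{code:java}
// Toy model for illustration only: the follower sends the leader epoch it
// believes is current with every fetch, and the leader fences any fetcher
// whose epoch is stale instead of serving it records.
final class PartitionLeaderSketch {
    private final int leaderEpoch; // epoch under which this broker leads

    PartitionLeaderSketch(int leaderEpoch) { this.leaderEpoch = leaderEpoch; }

    String handleFetch(int fetcherEpoch, long fetchOffset) {
        if (fetcherEpoch != leaderEpoch) {
            // Broker 2 in the scenario above, still fetching with epoch 0,
            // is rejected rather than handed offsets 50-59 over a log tail
            // that has silently diverged.
            return "FENCED_LEADER_EPOCH";
        }
        return "records starting at offset " + fetchOffset;
    }
}
{code}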



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7395) Add fencing to replication protocol (KIP-320)

2018-10-05 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7395.

   Resolution: Fixed
Fix Version/s: 2.1.0

> Add fencing to replication protocol (KIP-320)
> -
>
> Key: KAFKA-7395
> URL: https://issues.apache.org/jira/browse/KAFKA-7395
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.1.0
>
>
> This patch implements the broker-side changes to support fencing improvements 
> from KIP-320: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-320%3A+Allow+fetchers+to+detect+and+handle+log+truncation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-378: Enable Dependency Injection for Kafka Streams handlers

2018-10-05 Thread John Roesler
Hi Guozhang,

I'm going to drop in a little extra context from the preliminary PR
discussion (https://github.com/apache/kafka/pull/5344).

The issue isn't that it's impossible to use Streams within a Spring app,
just that the interplay between our style of construction/configuration and
Spring's is somewhat awkward compared to the normal experience with
dependency injection.

I'm guessing users of dependency injection would not like the approach you
offered. I believe it's commonly considered an antipattern when using DI
frameworks to pass the injector directly into the class being constructed.
Wladimir has also offered an alternative usage within the current framework
of injecting pre-constructed dependencies into the Properties, and then
retrieving and casting them inside the configured class.

It seems like this KIP is more about offering a more elegant interface to
DI users.

One of the points that Wladimir raised on his PR discussion was the desire
to configure the classes in a typesafe way in the constructor (thus
allowing the use of immutable classes).

With this KIP, it would be possible for a DI user to:
1. register a Streams-Spring or Streams-Guice (etc) "plugin" (via either of
the mechanisms he proposed)
2. simply make the Serdes, exception handlers, etc, available on the class
path with the DI annotations
3. start the app

There's no need to mess with passing dependencies (or the injector) through
the properties.

Sorry for "injecting" myself into your discussion, but it took me a while
in the PR discussion to get to the bottom of the issue, and I wanted to
spare you the same.

I'll respond separately with my feedback on the KIP.

Thanks,
-John

On Sun, Sep 30, 2018 at 2:31 PM Guozhang Wang  wrote:

> Hello Wladimir,
>
> Thanks for proposing the KIP. I think the injection can currently be done
> by passing in the key/value pair directly into the properties which can
> then be accessed from the `ProcessorContext#appConfigs` or
> `#appConfigsWithPrefix`. For example, when constructing the properties you
> can:
>
> ```
> props.put(myProp1, myValue1);
> props.put(myProp2, myValue2);
> props.put("my_app_context", appContext);
>
> KafkaStreams myApp = new KafkaStreams(topology, props);
>
> // and then in your processor, at the point where you want to construct
> // the injected handler:
>
> Map<String, Object> appProps = processorContext.appConfigs();
> ApplicationContext appContext = (ApplicationContext) appProps.get("my_app_context");
> MyHandler myHandler = applicationContext.getBean(MyHandler.class);
> ```
>
> Does that work for you?
>
> Guozhang
>
>
> On Sun, Sep 30, 2018 at 6:56 AM, Dongjin Lee  wrote:
>
> > Hi Wladimir,
> >
> > Thanks for your great KIP. Let me have a look. And let's discuss this KIP
> > in depth after the release of 2.1.0. (The committers are very busy for
> it.)
> >
> > Best,
> > Dongjin
> >
> > On Sun, Sep 30, 2018 at 10:49 PM Wladimir Schmidt 
> > wrote:
> >
> > > Dear colleagues,
> > >
> > > I am happy to inform you that I have just finished my first KIP
> > > (KIP-378: Enable Dependency Injection for Kafka Streams handlers
> > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-378%3A+Enable+Dependency+Injection+for+Kafka+Streams+handlers>).
> > >
> > > Your feedback on this submission would be highly appreciated.
> > >
> > > Best Regards,
> > > Wladimir Schmidt
> > >
> >
> >
> > --
> > *Dongjin Lee*
> >
> > *A hitchhiker in the mathematical world.*
> >
> > *github: github.com/dongjinleekr*
> > *linkedin: kr.linkedin.com/in/dongjinleekr*
> > *slideshare: www.slideshare.net/dongjinleekr*
> >
>
>
>
> --
> -- Guozhang
>


[jira] [Created] (KAFKA-7487) DumpLogSegments reports mismatches for indexed offsets which are not at the start of a record batch

2018-10-05 Thread Michael Bingham (JIRA)
Michael Bingham created KAFKA-7487:
--

 Summary: DumpLogSegments reports mismatches for indexed offsets 
which are not at the start of a record batch
 Key: KAFKA-7487
 URL: https://issues.apache.org/jira/browse/KAFKA-7487
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 2.0.0
Reporter: Michael Bingham


When running {{DumpLogSegments}} against an {{.index}} file, mismatches may be 
reported when the indexed message offset is not that of the first record in a 
batch. For example:

{code}
 Mismatches in 
:/var/lib/kafka/data/replicated-topic-0/.index
 Index offset: 968, log offset: 966
{code}

And looking at the corresponding {{.log}} file:

{code}
baseOffset: 966 lastOffset: 968 count: 3 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 3952771 CreateTime: 1538768639065 isvalid: true size: 12166 magic: 2 
compresscodec: NONE crc: 294402254 
{code}

In this case, the last offset in the batch was indexed instead of the first, 
but the index has to map physical position to the start of the batch, leading 
to the mismatch.

It seems like {{DumpLogSegments}} should not report these cases as mismatches, 
which users might interpret as an error or problem.
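
To make the batch/offset relationship concrete, here is a toy model of the
index-entry semantics (illustrative Java, not Kafka's OffsetIndex
implementation; the names are assumptions):

{code:java}
// Toy model for illustration only: an index entry maps an offset to the byte
// position of the record *batch* containing it, not of the record itself.
final class OffsetIndexSketch {
    record Entry(long offset, int position) {}        // e.g. (968, 3952771)
    record Batch(long baseOffset, long lastOffset) {} // e.g. (966, 968)

    // An entry is valid whenever its offset lies anywhere inside the batch at
    // the stored position; DumpLogSegments instead flags every entry whose
    // offset differs from that batch's baseOffset.
    static boolean entryIsValid(Entry e, Batch batchAtPosition) {
        return e.offset() >= batchAtPosition.baseOffset()
            && e.offset() <= batchAtPosition.lastOffset();
    }
}
{code}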



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-291: Have separate queues for control requests and data requests

2018-10-05 Thread Lucas Wang
Thanks for the suggestion, Ismael. I like it.

Jun,
I'm excited to get the +1, thanks a lot!
Meanwhile what do you feel about renaming the metrics and config to

ControlPlaneRequestQueueSize

ControlPlaneNetworkProcessorIdlePercent

ControlPlaneRequestHandlerIdlePercent

control.plane.listener.name

?
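
For reference, a hedged sketch of how the renamed config could look in
server.properties, reusing the endpoint names quoted below (the
security-protocol mapping is an assumption for illustration):

```
# Sketch only, assuming the control-plane renaming proposed above is adopted.
listeners=CONTROLLER://broker1.example.com:9091,INTERNAL://broker1.example.com:9092,EXTERNAL://host1.example.com:9093
listener.security.protocol.map=CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:SSL
control.plane.listener.name=CONTROLLER
```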


Thanks,

Lucas

On Thu, Oct 4, 2018 at 11:38 AM Ismael Juma  wrote:

> Have we considered control plane if we think control by itself is
> ambiguous? I agree with the original concern that "controller" may be
> confusing for something that affects all brokers.
>
> Ismael
>
>
> On 4 Oct 2018 11:08 am, "Lucas Wang"  wrote:
>
> Thanks Jun. I've changed the KIP with the suggested 2-step upgrade.
> Please take a look again when you have time.
>
> Regards,
> Lucas
>
>
> On Thu, Oct 4, 2018 at 10:06 AM Jun Rao  wrote:
>
> > Hi, Lucas,
> >
> > 200. That's a valid concern. So, we can probably just keep the current
> > name.
> >
> > 201. I am thinking that you would upgrade in the same way as changing
> > inter.broker.listener.name. This requires 2 rounds of rolling restart. In
> > the first round, we add the controller endpoint to the listeners w/o
> > setting controller.listener.name. In the second round, every broker sets
> > controller.listener.name. At that point, the controller listener is ready
> > in every broker.
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Oct 2, 2018 at 10:38 AM, Lucas Wang  wrote:
> >
> > > Thanks for the further comments, Jun.
> > >
> > > 200. Currently in the code base, we have the term "ControlBatch" related
> > > to idempotent/transactional producing. Do you think it's a concern for
> > > reusing the term "control"?
> > >
> > > 201. It's not clear to me how it would work by following the same strategy
> > > for "controller.listener.name".
> > > Say the new controller has its "controller.listener.name" set to the value
> > > "CONTROLLER", and broker 1
> > > has picked up this KIP by announcing
> > > "endpoints": [
> > > "CONTROLLER://broker1.example.com:9091",
> > > "INTERNAL://broker1.example.com:9092",
> > > "EXTERNAL://host1.example.com:9093"
> > > ],
> > >
> > > while broker2 has not picked up the change, and is announcing
> > > "endpoints": [
> > > "INTERNAL://broker2.example.com:9092",
> > > "EXTERNAL://host2.example.com:9093"
> > > ],
> > > to support both broker 1 for the new behavior and broker 2 for the old
> > > behavior, it seems the controller must
> > > check their published endpoints. Am I missing something?
> > >
> > > Thanks!
> > > Lucas
> > >
> > > On Mon, Oct 1, 2018 at 6:29 PM Jun Rao  wrote:
> > >
> > > > Hi, Lucas,
> > > >
> > > > Sorry for the delay. The updated wiki looks good to me overall. Just
> a
> > > > couple more minor comments.
> > > >
> > > > 200. kafka.network:name=ControllerRequestQueueSize,type=RequestChannel:
> > > > The name ControllerRequestQueueSize gives the impression that it's only
> > > > for the controller broker. Perhaps we can just rename all metrics and
> > > > configs from controller to control. This indicates that the threads and
> > > > the queues are for the control requests (as opposed to data requests).
> > > >
> > > > 201. ": In this scenario, the controller
> > will
> > > > have the "controller.listener.name" config set to a value like
> > > > "CONTROLLER", however the broker's exposed endpoints do not have an
> > entry
> > > > corresponding to the new listener name. Hence the controller should
> > > > preserve the existing behavior by determining the endpoint using
> > > > *inter-broker-listener-name *value. The end result should be the same
> > > > behavior as today." Currently, the controller makes connections based
> > on
> > > > its local inter.broker.listener.name config without checking the
> > target
> > > > broker's ZK registration. For consistency, perhaps we can just follow
> > the
> > > > same strategy for controller.listener.name. This existing behavior
> > seems
> > > > simpler to understand and has the benefit of catching inconsistent
> > > configs
> > > > across brokers.
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Mon, Oct 1, 2018 at 8:43 AM, Lucas Wang 
> > > wrote:
> > > >
> > > > > Hi Jun,
> > > > >
> > > > > Sorry to bother you again. Can you please take a look at the wiki
> > again
> > > > > when you have time?
> > > > >
> > > > > Thanks a lot!
> > > > > Lucas
> > > > >
> > > > > On Wed, Sep 19, 2018 at 3:57 PM Lucas Wang 
> > > > wrote:
> > > > >
> > > > > > Hi Jun,
> > > > > >
> > > > > > Thanks a lot for the detailed explanation.
> > > > > > I've restored the wiki to a previous version that does not
> require
> > > > config
> > > > > > changes,
> > > > > > and keeps the current behavior with the proposed changes turned
> off
> > > by
> > > > > > default.
> > > > > > I'd appreciate it if you can review it again.
> > > > > >
> > > > > > Thanks!
> > > > > > Lucas
> > > > > >
> > > > > > On 

Re: New release branch 2.1.0

2018-10-05 Thread Tirtha Chatterjee
Hi

When are we planning on releasing the bugfix version 2.0.1?

On Thu, Oct 4, 2018 at 5:22 PM Dong Lin  wrote:

> Hello Kafka developers and users,
>
> As promised in the release plan
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044,
> we now have a release branch for 2.1.0 release. Trunk will soon be bumped
> to 2.2.0-SNAPSHOT.
>
> I'll be going over the JIRAs to move every non-blocker from this release to
> the next release.
>
> From this point, most changes should go to trunk. Blockers (existing and
> new that we discover while testing the release) will be double-committed.
> Please discuss with your reviewer whether your PR should go to trunk or to
> trunk+release so they can merge accordingly.
>
> Please help us test the release!
>
> Thanks!
> Dong
>


-- 
Regards
Tirtha Chatterjee


Build failed in Jenkins: kafka-1.0-jdk7 #246

2018-10-05 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: Switch to use AWS spot instances

--
[...truncated 376.02 KB...]
kafka.controller.ZookeeperClientTest > testGetChildrenExistingZNodeWithChildren 
PASSED

kafka.controller.ZookeeperClientTest > testSetDataExistingZNode STARTED

kafka.controller.ZookeeperClientTest > testSetDataExistingZNode PASSED

kafka.controller.ZookeeperClientTest > testSetACLNonExistentZNode STARTED

kafka.controller.ZookeeperClientTest > testSetACLNonExistentZNode PASSED

kafka.controller.ZookeeperClientTest > testMixedPipeline STARTED

kafka.controller.ZookeeperClientTest > testMixedPipeline PASSED

kafka.controller.ZookeeperClientTest > testGetDataExistingZNode STARTED

kafka.controller.ZookeeperClientTest > testGetDataExistingZNode PASSED

kafka.controller.ZookeeperClientTest > testGetACLExistingZNode STARTED

kafka.controller.ZookeeperClientTest > testGetACLExistingZNode PASSED

kafka.controller.ZookeeperClientTest > testDeleteExistingZNode STARTED

kafka.controller.ZookeeperClientTest > testDeleteExistingZNode PASSED

kafka.controller.ZookeeperClientTest > testSetDataNonExistentZNode STARTED

kafka.controller.ZookeeperClientTest > testSetDataNonExistentZNode PASSED

kafka.controller.ZookeeperClientTest > testDeleteNonExistentZNode STARTED

kafka.controller.ZookeeperClientTest > testDeleteNonExistentZNode PASSED

kafka.controller.ZookeeperClientTest > testExistsExistingZNode STARTED

kafka.controller.ZookeeperClientTest > testExistsExistingZNode PASSED

kafka.controller.ZookeeperClientTest > testZNodeChangeHandlerForDeletion STARTED

kafka.controller.ZookeeperClientTest > testZNodeChangeHandlerForDeletion PASSED

kafka.controller.ZookeeperClientTest > testStateChangeHandlerForAuthFailure 
STARTED

kafka.controller.ZookeeperClientTest > testStateChangeHandlerForAuthFailure 
PASSED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
STARTED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
PASSED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent STARTED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentWithOfflineReplicaHaltingProgress STARTED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentWithOfflineReplicaHaltingProgress PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerEpochPersistsWhenAllBrokersDown STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerEpochPersistsWhenAllBrokersDown PASSED

kafka.controller.ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline STARTED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled STARTED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled PASSED

kafka.controller.ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testAutoPreferredReplicaLeaderElection STARTED

kafka.controller.ControllerIntegrationTest > 
testAutoPreferredReplicaLeaderElection PASSED

kafka.controller.ControllerIntegrationTest > testTopicCreation STARTED

kafka.controller.ControllerIntegrationTest > testTopicCreation PASSED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment STARTED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment PASSED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion STARTED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled STARTED

kafka.controller.ControllerIntegrationTest > 

Re: [VOTE] KIP-376: Implement AutoClosable on appropriate classes that want to be used in a try-with-resource statement

2018-10-05 Thread Yishun Guan
Hi,

I think we have discussed this well enough to put this into a vote.

Suggestions are welcome!
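
For anyone catching up on the thread, the change enables the standard
try-with-resources pattern. A minimal sketch (KafkaConsumer already
implements Closeable; the KIP extends the same pattern to other classes):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TryWithResourcesExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(
                props, new StringDeserializer(), new StringDeserializer())) {
            consumer.subscribe(Collections.singletonList("test-topic"));
            consumer.poll(Duration.ofMillis(100));
        }  // close() runs automatically, even if an exception is thrown
    }
}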

Best,
Yishun

On Wed, Oct 3, 2018, 2:30 PM Yishun Guan  wrote:

> Hi All,
>
> I want to start a voting on this KIP:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=93325308
>
> Here is the discussion thread:
>
> https://lists.apache.org/thread.html/9f6394c28d3d11a67600d5d7001e8aaa318f1ad497b50645654bbe3f@%3Cdev.kafka.apache.org%3E
>
> Thanks,
> Yishun
>


[jira] [Resolved] (KAFKA-6914) Kafka Connect - Plugins class should have a constructor that can take in parent ClassLoader

2018-10-05 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6914.

   Resolution: Fixed
Fix Version/s: 2.1.0
   2.0.1
   1.1.2
   1.0.3
   0.11.0.4

> Kafka Connect - Plugins class should have a constructor that can take in 
> parent ClassLoader
> ---
>
> Key: KAFKA-6914
> URL: https://issues.apache.org/jira/browse/KAFKA-6914
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Sriram KS
>Assignee: Konstantine Karantasis
>Priority: Minor
> Fix For: 0.11.0.4, 1.0.3, 1.1.2, 2.0.1, 2.1.0
>
>
> Currently the Plugins class has a single constructor that takes in a map of props.
> Please make the Plugins class have a constructor that also takes in a ClassLoader,
> and use it to set the DelegatingClassLoader's parent ClassLoader.
> Reason:
> This will be useful if I already have a managed ClassLoader environment, such as
> a Spring Boot app, which resolves my class dependencies through my Maven/Gradle
> dependency management.
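
A sketch of the requested API shape (hypothetical, not actual Kafka Connect
code; scanPluginLocations below is a stand-in for however the plugin paths
are derived from the props):

{code}
public class Plugins {
    private final DelegatingClassLoader delegatingLoader;

    // existing behavior: the parent defaults to this class's own loader
    public Plugins(Map<String, String> props) {
        this(props, Plugins.class.getClassLoader());
    }

    // requested: let an embedding app (e.g. a Spring Boot app) supply the
    // parent, so plugin classes resolve against its managed ClassLoader
    public Plugins(Map<String, String> props, ClassLoader parent) {
        this.delegatingLoader =
            new DelegatingClassLoader(scanPluginLocations(props), parent);
    }
}
{code}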



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7467) NoSuchElementException is raised because controlBatch is empty

2018-10-05 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7467.

   Resolution: Fixed
Fix Version/s: 2.1.0
   2.0.1
   1.1.2
   1.0.3

> NoSuchElementException is raised because controlBatch is empty
> --
>
> Key: KAFKA-7467
> URL: https://issues.apache.org/jira/browse/KAFKA-7467
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.1.0
>Reporter: Badai Aqrandista
>Assignee: Bob Barrett
>Priority: Major
> Fix For: 1.0.3, 1.1.2, 2.0.1, 2.1.0
>
>
> Somehow, log cleaner died because of NoSuchElementException when it calls 
> onControlBatchRead:
> {noformat}
> [2018-09-25 14:18:31,088] INFO Cleaner 0: Cleaning segment 0 in log 
> __consumer_offsets-45 (largest timestamp Fri Apr 27 16:12:39 CDT 2018) into 
> 0, discarding deletes. (kafka.log.LogCleaner)
> [2018-09-25 14:18:31,092] ERROR [kafka-log-cleaner-thread-0]: Error due to 
> (kafka.log.LogCleaner)
> java.util.NoSuchElementException
>   at java.util.Collections$EmptyIterator.next(Collections.java:4189)
>   at 
> kafka.log.CleanedTransactionMetadata.onControlBatchRead(LogCleaner.scala:945)
>   at 
> kafka.log.Cleaner.kafka$log$Cleaner$$shouldDiscardBatch(LogCleaner.scala:636)
>   at kafka.log.Cleaner$$anon$5.checkBatchRetention(LogCleaner.scala:573)
>   at 
> org.apache.kafka.common.record.MemoryRecords.filterTo(MemoryRecords.java:157)
>   at 
> org.apache.kafka.common.record.MemoryRecords.filterTo(MemoryRecords.java:138)
>   at kafka.log.Cleaner.cleanInto(LogCleaner.scala:604)
>   at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:518)
>   at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:462)
>   at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:461)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at kafka.log.Cleaner.doClean(LogCleaner.scala:461)
>   at kafka.log.Cleaner.clean(LogCleaner.scala:438)
>   at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:305)
>   at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:291)
>   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
> [2018-09-25 14:18:31,093] INFO [kafka-log-cleaner-thread-0]: Stopped 
> (kafka.log.LogCleaner)
> {noformat}
> The following code does not seem to expect the controlBatch to be empty:
> [https://github.com/apache/kafka/blob/1.1/core/src/main/scala/kafka/log/LogCleaner.scala#L946]
> {noformat}
>   def onControlBatchRead(controlBatch: RecordBatch): Boolean = {
> consumeAbortedTxnsUpTo(controlBatch.lastOffset)
> val controlRecord = controlBatch.iterator.next()
> val controlType = ControlRecordType.parse(controlRecord.key)
> val producerId = controlBatch.producerId
> {noformat}
> MemoryRecords.filterTo copies the original control attribute for empty 
> batches, which results in empty control batches. Trying to read the control 
> type of an empty batch causes the error.
> {noformat}
>   else if (batchRetention == BatchRetention.RETAIN_EMPTY) {
> if (batchMagic < RecordBatch.MAGIC_VALUE_V2)
> throw new IllegalStateException("Empty batches are only supported for 
> magic v2 and above");
> 
> bufferOutputStream.ensureRemaining(DefaultRecordBatch.RECORD_BATCH_OVERHEAD);
> DefaultRecordBatch.writeEmptyHeader(bufferOutputStream.buffer(), 
> batchMagic, batch.producerId(),
> batch.producerEpoch(), batch.baseSequence(), batch.baseOffset(), 
> batch.lastOffset(),
> batch.partitionLeaderEpoch(), batch.timestampType(), 
> batch.maxTimestamp(),
> batch.isTransactional(), batch.isControlBatch());
> filterResult.updateRetainedBatchMetadata(batch, 0, true);
> {noformat}
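
A defensive guard as a compilable illustration (a sketch in Java against
the public record classes; the actual fix belongs in the Scala LogCleaner
code shown above):

{code}
import java.util.Iterator;
import org.apache.kafka.common.record.ControlRecordType;
import org.apache.kafka.common.record.Record;
import org.apache.kafka.common.record.RecordBatch;

public class EmptyControlBatchGuard {
    // Returns the control type only if the batch actually has a record;
    // an empty control batch has nothing to parse and must be skipped.
    public static ControlRecordType typeOf(RecordBatch controlBatch) {
        Iterator<Record> it = controlBatch.iterator();
        if (!it.hasNext())
            return null;  // empty control batch: caller decides retention
        return ControlRecordType.parse(it.next().key());
    }
}
{code}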



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6990) CommitFailedException; this task may be no longer owned by the thread

2018-10-05 Thread Ryanne Dolan (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryanne Dolan resolved KAFKA-6990.
-
Resolution: Not A Bug

> CommitFailedException; this task may be no longer owned by the thread
> -
>
> Key: KAFKA-6990
> URL: https://issues.apache.org/jira/browse/KAFKA-6990
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1
>Reporter: Eugen Feller
>Priority: Blocker
>
> We are seeing a lot of CommitFailedExceptions on one of our Kafka stream 
> apps. Running Kafka Streams 0.11.0.1 and Kafka Broker 0.10.2.1. Full error 
> message:
> {code:java}
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] INFO 
> org.apache.kafka.streams.KafkaStreams - stream-client 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d] State transition from 
> REBALANCING to RUNNING.
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_0] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-0=OffsetAndMetadata{offset=11, 
> metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-0=OffsetAndMetadata{offset=26,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_0 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_1 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_1] Failed 
> offset commits 
> {mactocontactmappings_contactmapping_byclientmac_0-1=OffsetAndMetadata{offset=38,
>  metadata=''}, sightings_sighting_byclientmac_0-1=OffsetAndMetadata{offset=1, 
> metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_2] Failed 
> offset commits 
> {mactocontactmappings_contactmapping_byclientmac_0-2=OffsetAndMetadata{offset=8,
>  metadata=''}, 
> sightings_sighting_byclientmac_0-2=OffsetAndMetadata{offset=24, metadata=''}} 
> due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_2 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_3] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-3=OffsetAndMetadata{offset=21, 
> metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-3=OffsetAndMetadata{offset=102,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_3 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_4] Failed 
> offset commits 
> {sightings_sighting_byclientmac_0-4=OffsetAndMetadata{offset=5, metadata=''}, 
> mactocontactmappings_contactmapping_byclientmac_0-4=OffsetAndMetadata{offset=20,
>  metadata=''}} due to CommitFailedException
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.AssignedTasks - stream-thread 
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] Failed to commit 
> stream task 0_4 during commit state due to CommitFailedException; this task 
> may be no longer owned by the thread
> [visits-bdb209f1-c9dd-404b-9233-064f2fb4227d-StreamThread-1] WARN 
> org.apache.kafka.streams.processor.internals.StreamTask - task [0_5] Failed 
> offset commits 
> {mactocontactmappings_contactmapping_byclientmac_0-5=OffsetAndMetadata{offset=26,
>  metadata=''}, 
> 

Re: [DISCUSS] Replacing EasyMock with Mockito in Kafka

2018-10-05 Thread Ted Yu
+1 to moving to Mockito

On Fri, Oct 5, 2018 at 12:11 PM Ron Dagostino  wrote:

> I have used Mockito and am a big fan -- I had never used EasyMock until
> recently.  The concept of record vs. replay mode in EasyMock really annoyed
> me -- I'm a fan of the "NO MODES" idea (
> https://en.wikipedia.org/wiki/Larry_Tesler), and when I encountered it in
> EasyMock in the context of Kafka development I just gritted my teeth and
> learned/used it as required.  I'm very pleased that Mockito is being
> seriously considered as a replacement.  Mockito  started as an EasyMock
> fork, I guess, and the project readily admits it is standing on the
> shoulders of giants in many respects, but I think one of the best things
> they ever did was jettison the concept of modes.
>
> +1 from me.
>
> Ron
>
> On Fri, Oct 5, 2018 at 10:00 AM Ismael Juma  wrote:
>
> > So far, it seems like people are in favour. If no objections are
> presented
> > in the next couple of days, we will go ahead with the change.
> >
> > Ismael
> >
> > On Sun, 30 Sep 2018, 20:32 Ismael Juma,  wrote:
> >
> > > Hi all,
> > >
> > > As described in KAFKA-7438
> > > , EasyMock's
> > > development has stagnated. This presents a number of issues:
> > >
> > > 1. Blocks us from running tests with newer Java versions, which is a
> > > frequent occurrence given the new Java release cadence. It is the main
> > > blocker in switching Jenkins from Java 10 to Java 11 at the moment.
> > > 2. Integration with newer testing libraries like JUnit 5 is slow to
> > appear
> > > (if it appears at all).
> > > 3. No API improvements. Mockito started as an EasyMock fork, but has
> > > continued to evolve and, in my opinion, it's more intuitive now.
> > >
> > > I think we should switch to Mockito for new tests and to incrementally
> > > migrate the existing ones as time allows. To make the proposal
> concrete,
> > I
> > > went ahead and converted all the tests in the `clients` module:
> > >
> > > https://github.com/apache/kafka/pull/5691
> > >
> > > I think the updated tests are nicely readable. I also removed PowerMock
> > > from the `clients` tests as we didn't really need it and its
> development
> > > has also stagnated a few months ago. I think we can easily remove
> > PowerMock
> > > elsewhere with the exception of `Connect` where we may need to keep it
> > for
> > > a while.
> > >
> > > Let me know your thoughts. Aside from the general future direction, I'd
> > > like to get the PR for KAFKA-7439 reviewed and merged soonish as merge
> > > conflicts will creep in quickly.
> > >
> > > Ismael
> > >
> >
>


Re: [DISCUSS] Replacing EasyMock with Mockito in Kafka

2018-10-05 Thread Ismael Juma
Agreed, I really liked the fact that Mockito has no "replay" concept.

Ismael

On Fri, 5 Oct 2018, 12:11 Ron Dagostino,  wrote:

> I have used Mockito and am a big fan -- I had never used EasyMock until
> recently.  The concept of record vs. replay mode in EasyMock really annoyed
> me -- I'm a fan of the "NO MODES" idea (
> https://en.wikipedia.org/wiki/Larry_Tesler), and when I encountered it in
> EasyMock in the context of Kafka development I just gritted my teeth and
> learned/used it as required.  I'm very pleased that Mockito is being
> seriously considered as a replacement.  Mockito  started as an EasyMock
> fork, I guess, and the project readily admits it is standing on the
> shoulders of giants in many respects, but I think one of the best things
> they ever did was jettison the concept of modes.
>
> +1 from me.
>
> Ron
>
> On Fri, Oct 5, 2018 at 10:00 AM Ismael Juma  wrote:
>
> > So far, it seems like people are in favour. If no objections are
> presented
> > in the next couple of days, we will go ahead with the change.
> >
> > Ismael
> >
> > On Sun, 30 Sep 2018, 20:32 Ismael Juma,  wrote:
> >
> > > Hi all,
> > >
> > > As described in KAFKA-7438
> > > , EasyMock's
> > > development has stagnated. This presents a number of issues:
> > >
> > > 1. Blocks us from running tests with newer Java versions, which is a
> > > frequent occurrence given the new Java release cadence. It is the main
> > > blocker in switching Jenkins from Java 10 to Java 11 at the moment.
> > > 2. Integration with newer testing libraries like JUnit 5 is slow to
> > appear
> > > (if it appears at all).
> > > 3. No API improvements. Mockito started as an EasyMock fork, but has
> > > continued to evolve and, in my opinion, it's more intuitive now.
> > >
> > > I think we should switch to Mockito for new tests and to incrementally
> > > migrate the existing ones as time allows. To make the proposal
> concrete,
> > I
> > > went ahead and converted all the tests in the `clients` module:
> > >
> > > https://github.com/apache/kafka/pull/5691
> > >
> > > I think the updated tests are nicely readable. I also removed PowerMock
> > > from the `clients` tests as we didn't really need it and its
> development
> > > has also stagnated a few months ago. I think we can easily remove
> > PowerMock
> > > elsewhere with the exception of `Connect` where we may need to keep it
> > for
> > > a while.
> > >
> > > Let me know your thoughts. Aside from the general future direction, I'd
> > > like to get the PR for KAFKA-7439 reviewed and merged soonish as merge
> > > conflicts will creep in quickly.
> > >
> > > Ismael
> > >
> >
>


Re: [DISCUSS] Replacing EasyMock with Mockito in Kafka

2018-10-05 Thread Ron Dagostino
I have used Mockito and am a big fan -- I had never used EasyMock until
recently.  The concept of record vs. replay mode in EasyMock really annoyed
me -- I'm a fan of the "NO MODES" idea (
https://en.wikipedia.org/wiki/Larry_Tesler), and when I encountered it in
EasyMock in the context of Kafka development I just gritted my teeth and
learned/used it as required.  I'm very pleased that Mockito is being
seriously considered as a replacement.  Mockito  started as an EasyMock
fork, I guess, and the project readily admits it is standing on the
shoulders of giants in many respects, but I think one of the best things
they ever did was jettison the concept of modes.
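
To make the contrast concrete, a toy illustration (a sketch, not Kafka
test code):

import static org.junit.Assert.assertEquals;
import java.util.List;
import org.easymock.EasyMock;
import org.mockito.Mockito;

// EasyMock: expectations are recorded, then the mock must be switched
// into replay mode before use -- the "modes" in question.
List<String> easy = EasyMock.createMock(List.class);
EasyMock.expect(easy.get(0)).andReturn("a");  // record mode
EasyMock.replay(easy);                        // mode switch
assertEquals("a", easy.get(0));
EasyMock.verify(easy);

// Mockito: no modes -- stub, use, and verify directly.
List<String> mocked = Mockito.mock(List.class);
Mockito.when(mocked.get(0)).thenReturn("a");
assertEquals("a", mocked.get(0));
Mockito.verify(mocked).get(0);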

+1 from me.

Ron

On Fri, Oct 5, 2018 at 10:00 AM Ismael Juma  wrote:

> So far, it seems like people are in favour. If no objections are presented
> in the next couple of days, we will go ahead with the change.
>
> Ismael
>
> On Sun, 30 Sep 2018, 20:32 Ismael Juma,  wrote:
>
> > Hi all,
> >
> > As described in KAFKA-7438
> > , EasyMock's
> > development has stagnated. This presents a number of issues:
> >
> > 1. Blocks us from running tests with newer Java versions, which is a
> > frequent occurrence given the new Java release cadence. It is the main
> > blocker in switching Jenkins from Java 10 to Java 11 at the moment.
> > 2. Integration with newer testing libraries like JUnit 5 is slow to
> appear
> > (if it appears at all).
> > 3. No API improvements. Mockito started as an EasyMock fork, but has
> > continued to evolve and, in my opinion, it's more intuitive now.
> >
> > I think we should switch to Mockito for new tests and to incrementally
> > migrate the existing ones as time allows. To make the proposal concrete,
> I
> > went ahead and converted all the tests in the `clients` module:
> >
> > https://github.com/apache/kafka/pull/5691
> >
> > I think the updated tests are nicely readable. I also removed PowerMock
> > from the `clients` tests as we didn't really need it and its development
> > has also stagnated a few months ago. I think we can easily remove
> PowerMock
> > elsewhere with the exception of `Connect` where we may need to keep it
> for
> > a while.
> >
> > Let me know your thoughts. Aside from the general future direction, I'd
> > like to get the PR for KAFKA-7439 reviewed and merged soonish as merge
> > conflicts will creep in quickly.
> >
> > Ismael
> >
>


Jenkins build is back to normal : kafka-0.11.0-jdk7 #406

2018-10-05 Thread Apache Jenkins Server
See 




Re: [EXTERNAL] Incremental Cooperative Rebalancing

2018-10-05 Thread Guozhang Wang
Hello Konstantine,

Thanks for the great write-up! Here are a few quick comments I have about
the proposals:

1. For the "Kubernetes process death" and "Rolling bounce" cases, there is
parallel work on KIP-345 [1] (cc'ed contributor) aimed at mitigating these
two issues, but it relies on being able to disable sending the "leave
group" request immediately on shutdown. Ideally, if KIP-345 works well for
these cases, then Simple Cooperative Rebalancing along with KIP-345 should
cover most of the scenarios we've described in the wiki. In addition, the
Delayed / Incremental Imbalance approach can be built incrementally on top
of the Simple approach, so execution-wise I'd suggest we start with the
Simple approach and observe how well it works in practice (especially with
frameworks like K8s) before deciding whether to go further and implement
the more complicated ones.

2. For the "events" section, I think it may be worth mentioning whether there are
any new client / coordinator failure events that need to be addressed with
the new protocol, as we listed in the original design [2] [3]. For example,
what if the leader received different client or resource listings during
two consecutive rebalances?

3. It's worth mentioning what the key ideas in the updated protocol are:

3.a) In the original protocol we require every member to revoke every
resource before joining the group, which can then be used as the
"synchronization barrier", and hence it does not matter that clients
receive their assignments at different points in time; in the new protocol
we do not require members to revoke everything, but instead leverage the
leader, who has the "global picture", to make sure that there are no
conflicts between the shared resources, i.e. the leader acts as the
synchronization barrier.
3.b) The Assigned / RevokedPartitions fields in the responses are now
"deltas" instead of "overwrites" to the consumers (see the sketch after
the reference list below). Any module relying on them, e.g. Streams, which
relies on ConsumerCoordinator, needs to adjust its code (PartitionAssignor)
correspondingly to incorporate this semantic change.

4. I've added a child page under yours illustrating the implications for
Streams on rebalance cost reduction [4], since for Streams one key
characteristic is that standby tasks exist to help with rebalance-incurred
unavailability, and hence we need to consider upfront how Streams should
leverage the new protocol along with standby tasks to achieve better
operational goals during rebalances.


[1]
https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Reduce+multiple+consumer+rebalances+by+specifying+member+id
[2]
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal#KafkaClient-sideAssignmentProposal-CoordinatorStateMachine
[3]
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Consumer+Rewrite+Design#Kafka0.9ConsumerRewriteDesign-Interestingscenariostoconsider
[4]
https://cwiki.apache.org/confluence/display/KAFKA/Incremental+Cooperative+Rebalancing+for+Streams
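
As referenced in 3.b above, a toy sketch of the delta semantics (the field
names on the response are hypothetical, not the actual protocol schema;
TopicPartition is org.apache.kafka.common.TopicPartition):

Set<TopicPartition> owned = new HashSet<>(currentAssignment);
owned.removeAll(response.revokedPartitionsDelta());  // give up only these
owned.addAll(response.assignedPartitionsDelta());    // pick up only these
// 'owned' is now the member's full assignment; nothing else was touched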


On Thu, Oct 4, 2018 at 12:16 PM, McCaig, Rhys 
wrote:

> This is fantastic. I'm really excited to see the work on this.
>
> > On Oct 2, 2018, at 4:22 PM, Konstantine Karantasis <
> konstant...@confluent.io> wrote:
> >
> > Hey everyone,
> >
> > I'd like to bring to your attention a general design document that was
> just
> > published in Apache Kafka's wiki space:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/Incrementa
> l+Cooperative+Rebalancing%3A+Support+and+Policies
> >
> > It deals with the subject of Rebalancing of groups in Kafka and proposes
> > basic infrastructure to support improvements on the current rebalancing
> > protocol as well as a set of policies that can be implemented to optimize
> > rebalancing under a number of real-world scenarios.
> >
> > Currently, this wiki page is meant to serve as a reference to the
> > proposition of Incremental Cooperative Rebalancing overall. Specific KIPs
> > will follow in order to describe in more detail - using the standard KIP
> > format - the basic infrastructure and the first policies that will be
> > proposed for implementation in components such as Connect, the Kafka
> > Consumer and Streams.
> >
> > Stay tuned!
> > Konstantine
>
>


-- 
-- Guozhang


[jira] [Created] (KAFKA-7486) Flaky test `DeleteTopicTest.testAddPartitionDuringDeleteTopic`

2018-10-05 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-7486:
--

 Summary: Flaky test 
`DeleteTopicTest.testAddPartitionDuringDeleteTopic`
 Key: KAFKA-7486
 URL: https://issues.apache.org/jira/browse/KAFKA-7486
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


Starting to see more of this recently:
{code}
10:06:28 kafka.admin.DeleteTopicTest > testAddPartitionDuringDeleteTopic FAILED
10:06:28 kafka.admin.AdminOperationException: 
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /brokers/topics/test
10:06:28 at 
kafka.zk.AdminZkClient.writeTopicPartitionAssignment(AdminZkClient.scala:162)
10:06:28 at 
kafka.zk.AdminZkClient.createOrUpdateTopicPartitionAssignmentPathInZK(AdminZkClient.scala:102)
10:06:28 at 
kafka.zk.AdminZkClient.addPartitions(AdminZkClient.scala:229)
10:06:28 at 
kafka.admin.DeleteTopicTest.testAddPartitionDuringDeleteTopic(DeleteTopicTest.scala:266)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-349 Priorities for Source Topics

2018-10-05 Thread Colin McCabe
On Fri, Oct 5, 2018, at 10:58, Thomas Becker wrote:
> Colin,
> Would you mind sharing your vision for how this looks with multiple 
> consumers? I'm still getting my bearings with the new consumer but it's 
> not immediately obvious to me how this would work.

Hi Thomas,

I was just responding to the general idea that you would have some kind of 
control topic that you wanted to read with very low latency, and some kind of 
set of data topics where the latency requirements are less strict.  In that 
case, you can just have two consumers: one for the low-latency topic, and one 
for the less low-latency topics.
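
As a toy sketch (the topic names, props, flag, and handler methods here
are hypothetical, purely to illustrate the two-consumer approach):

KafkaConsumer<String, String> control = new KafkaConsumer<>(controlProps);
KafkaConsumer<String, String> data = new KafkaConsumer<>(dataProps);
control.subscribe(Collections.singletonList("control-topic"));
data.subscribe(Collections.singletonList("data-topic"));
while (running) {
    // always drain the low-latency control topic first
    ConsumerRecords<String, String> urgent = control.poll(Duration.ZERO);
    urgent.forEach(r -> handleControl(r));
    if (urgent.isEmpty())
        data.poll(Duration.ofMillis(100)).forEach(r -> handleData(r));
}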

There are a lot of things in this picture that are unclear.  Does the data in one 
set of topics have any relation to the data in the other?  Why do we want a 
control channel distinct from the data channel?  That's why I asked for 
clarification on the use-case.

> In particular, it doesn't seem particularly easy to know when you are at the 
> high 
> watermark of a topic.

KafkaConsumer#committed will return the last committed offset for a partition.  
However, I'm not sure I understand why you want this information in this case-- 
can you expand a bit on this?

best,
Colin


> 
> -Tommy
> 
> On Mon, 2018-10-01 at 13:43 -0700, Colin McCabe wrote:
> 
> Hi all,
> 
> 
> I feel like the DISCUSS thread didn't really come to a conclusion, so a 
> vote would be premature here.
> 
> 
> In particular, I still don't really understand the use-case for this 
> feature.  Can someone give a concrete scenario where you would need 
> this?  The control plane / data plane example that is listed in the KIP 
> doesn't require this feature.  You can just have one consumer for the 
> control plane, and one for the data plane, and do priority that way.  
> The discussion feels kind of unfocused since we haven't identified even 
> one concrete use-case that needs this feature.
> 
> 
> Unfortunately, this is a feature which consumes server-side memory.  We 
> have to store the priorities somehow when doing incremental fetch 
> requests.  If we go with an int as suggested, then this is at least 4 
> bytes per partition per incremental fetch request.  It also makes it 
> more complex and potentially slower to maintain the linked list of 
> partitions in the fetch requests.  Before we think about this, I'd like 
> to have a concrete use-case in mind, so that we can evaluate the costs 
> versus benefits.
> 
> 
> best,
> 
> Colin
> 
> 
> 
> On Mon, Oct 1, 2018, at 07:47, Dongjin Lee wrote:
> 
> Great. +1 (non-binding)
> 
> 
> On Mon, Oct 1, 2018 at 4:23 AM Matthias J. Sax <matth...@confluent.io> wrote:
> 
> 
> +1 (binding)
> 
> 
> As Dongjin pointed out, the community is working on upcoming 2.1
> 
> release, and thus it might take some time until people find time to
> 
> follow up on this and vote.
> 
> 
> 
> -Matthias
> 
> 
> On 9/30/18 11:11 AM, n...@afshartous.com wrote:
> 
> 
> On Sep 30, 2018, at 5:16 AM, Dongjin Lee <dong...@apache.org> wrote:
> 
> 
> 1. Your KIP document
> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-349%3A+Priorities+for+Source+Topics>
> 
> lacks a hyperlink to the discussion thread. And I couldn't find the
> discussion thread in the mailing archive.
> 
> 
> 
> Hi Dongjin,
> 
> 
> There has been a discussion thread.  I added this link as a reference
> 
> 
>   https://lists.apache.org/list.html?dev@kafka.apache.org:lte=1M:kip-349
> 
> 
> 
> 
> to the KIP-349 page
> 
> 
> 
> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-349%3A+Priorities+for+Source+Topics>
> 
> 
> 
> Best,
> 
> --
> 
>   Nick
> 
> 
> 
> 
> 
> 
> --
> 
> Dongjin Lee
> 
> A hitchhiker in the mathematical world.
> 
> github: github.com/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> slideshare: www.slideshare.net/dongjinleekr
> 
> 
> 


Re: [VOTE] KIP-349 Priorities for Source Topics

2018-10-05 Thread Colin McCabe
On Wed, Oct 3, 2018, at 16:01, n...@afshartous.com wrote:
> 
> 
> > On Oct 3, 2018, at 12:41 PM, Colin McCabe  wrote:
> > 
> > Will there be a separate code path for people who don't want to use this 
> > feature?
> 
> 
> Yes, I tried to capture this in the KIP by indicating that this API 
> change is 100% backwards compatible.  Current consumer semantics and 
> performance would be unaffected.  

Hi Nick,

Sorry if I was unclear.  It's possible for the change to be 100% backwards 
compatible, but still not have a separate code path for people who don't want 
to use this feature, right?  What I am getting at is basically: will this 
feature increase broker-side memory consumption for people who don't use it?

best,
Colin


[jira] [Created] (KAFKA-7485) Flaky test `DynamicBrokerReconfigurationTest.testTrustStoreAlter`

2018-10-05 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-7485:
--

 Summary: Flaky test 
`DynamicBrokerReconfigurationTest.testTrustStoreAlter`
 Key: KAFKA-7485
 URL: https://issues.apache.org/jira/browse/KAFKA-7485
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


{code}
09:53:53 
09:53:53 kafka.server.DynamicBrokerReconfigurationTest > testTrustStoreAlter 
FAILED
09:53:53 org.apache.kafka.common.errors.SslAuthenticationException: SSL 
handshake failed
09:53:53 
09:53:53 Caused by:
09:53:53 javax.net.ssl.SSLProtocolException: Handshake message sequence 
violation, 2
09:53:53 at 
java.base/sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1611)
09:53:53 at 
java.base/sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:497)
09:53:53 at 
java.base/sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:745)
09:53:53 at 
java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:680)
09:53:53 at 
java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:626)
09:53:53 at 
org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:474)
09:53:53 at 
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:274)
09:53:53 at 
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:126)
09:53:53 at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:532)
09:53:53 at 
org.apache.kafka.common.network.Selector.poll(Selector.java:467)
09:53:53 at 
org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510)
09:53:53 at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
09:53:53 at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
09:53:53 at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
09:53:53 at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:231)
09:53:53 at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:316)
09:53:53 at 
org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1210)
09:53:53 at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1179)
09:53:53 at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1119)
09:53:53 at 
kafka.server.DynamicBrokerReconfigurationTest.kafka$server$DynamicBrokerReconfigurationTest$$awaitInitialPositions(DynamicBrokerReconfigurationTest.scala:997)
09:53:53 at 
kafka.server.DynamicBrokerReconfigurationTest$ConsumerBuilder.build(DynamicBrokerReconfigurationTest.scala:1424)
09:53:53 at 
kafka.server.DynamicBrokerReconfigurationTest.verifySslProduceConsume$1(DynamicBrokerReconfigurationTest.scala:286)
09:53:53 at 
kafka.server.DynamicBrokerReconfigurationTest.testTrustStoreAlter(DynamicBrokerReconfigurationTest.scala:311)
09:53:53 
09:53:53 Caused by:
09:53:53 javax.net.ssl.SSLProtocolException: Handshake message 
sequence violation, 2
09:53:53 at 
java.base/sun.security.ssl.HandshakeStateManager.check(HandshakeStateManager.java:398)
09:53:53 at 
java.base/sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:215)
09:53:53 at 
java.base/sun.security.ssl.Handshaker.processLoop(Handshaker.java:1098)
09:53:53 at 
java.base/sun.security.ssl.Handshaker$1.run(Handshaker.java:1031)
09:53:53 at 
java.base/sun.security.ssl.Handshaker$1.run(Handshaker.java:1028)
09:53:53 at 
java.base/java.security.AccessController.doPrivileged(Native Method)
09:53:53 at 
java.base/sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1540)
09:53:53 at 
org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:399)
09:53:53 at 
org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:479)
09:53:53 ... 17 more
{code}

Also, it seems we might not be cleaning up the consumer because I see a bunch 
of subsequent failures due to a lingering consumer heartbeat thread.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-349 Priorities for Source Topics

2018-10-05 Thread Thomas Becker
Colin,
Would you mind sharing your vision for how this looks with multiple consumers? 
I'm still getting my bearings with the new consumer but it's not immediately 
obvious to me how this would work. In particular, it doesn't seem particularly 
easy to know when you are at the high watermark of a topic.

-Tommy

On Mon, 2018-10-01 at 13:43 -0700, Colin McCabe wrote:

Hi all,


I feel like the DISCUSS thread didn't really come to a conclusion, so a vote 
would be premature here.


In particular, I still don't really understand the use-case for this feature.  
Can someone give a concrete scenario where you would need this?  The control 
plane / data plane example that is listed in the KIP doesn't require this 
feature.  You can just have one consumer for the control plane, and one for the 
data plane, and do priority that way.  The discussion feels kind of unfocused 
since we haven't identified even one concrete use-case that needs this feature.


Unfortunately, this is a feature which consumes server-side memory.  We have to 
store the priorities somehow when doing incremental fetch requests.  If we go 
with an int as suggested, then this is at least 4 bytes per partition per 
incremental fetch request.  It also makes it more complex and potentially 
slower to maintain the linked list of partitions in the fetch requests.  Before 
we think about this, I'd like to have a concrete use-case in mind, so that we 
can evaluate the costs versus benefits.


best,

Colin



On Mon, Oct 1, 2018, at 07:47, Dongjin Lee wrote:

Great. +1 (non-binding)


On Mon, Oct 1, 2018 at 4:23 AM Matthias J. Sax <matth...@confluent.io> wrote:


+1 (binding)


As Dongjin pointed out, the community is working on upcoming 2.1

release, and thus it might take some time until people find time to

follow up on this and vote.



-Matthias


On 9/30/18 11:11 AM, n...@afshartous.com wrote:


On Sep 30, 2018, at 5:16 AM, Dongjin Lee <dong...@apache.org> wrote:


1. Your KIP document
<https://cwiki.apache.org/confluence/display/KAFKA/KIP-349%3A+Priorities+for+Source+Topics>

lacks a hyperlink to the discussion thread. And I couldn't find the
discussion thread in the mailing archive.



Hi Dongjin,


There has been a discussion thread.  I added this link as a reference


  https://lists.apache.org/list.html?dev@kafka.apache.org:lte=1M:kip-349




to the KIP-349 page



<https://cwiki.apache.org/confluence/display/KAFKA/KIP-349%3A+Priorities+for+Source+Topics>



Best,

--

  Nick






--

Dongjin Lee

A hitchhiker in the mathematical world.

github: github.com/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
slideshare: www.slideshare.net/dongjinleekr





Re: [VOTE] KIP-291: Have separate queues for control requests and data requests

2018-10-05 Thread Jun Rao
Hi, Lucas,

Thanks for the updated KIP. About the name of the metric, since it's tagged
by type=RequestChannel, it's probably fine to use the proposed name. So, +1
for the KIP.

Jun

On Thu, Oct 4, 2018 at 10:31 AM, Lucas Wang  wrote:

> Thanks Jun. I've changed the KIP with the suggested 2 step upgrade.
> Please take a look again when you have time.
>
> Regards,
> Lucas
>
> On Thu, Oct 4, 2018 at 10:06 AM Jun Rao  wrote:
>
> > Hi, Lucas,
> >
> > 200. That's a valid concern. So, we can probably just keep the current
> > name.
> >
> > 201. I am thinking that you would upgrade in the same way as changing
> > inter.broker.listener.name. This requires 2 rounds of rolling restart.
> In
> > the first round, we add the controller endpoint to the listeners w/o
> > setting controller.listener.name. In the second round, every broker sets
> > controller.listener.name. At that point, the controller listener is
> ready
> > in every broker.
> >
> > Thanks,
> >
> > Jun
> >
> > On Tue, Oct 2, 2018 at 10:38 AM, Lucas Wang 
> wrote:
> >
> > > Thanks for the further comments, Jun.
> > >
> > > 200. Currently in the code base, we have the term of "ControlBatch"
> > related
> > > to
> > > idempotent/transactional producing. Do you think it's a concern for
> > reusing
> > > the term "control"?
> > >
> > > 201. It's not clear to me how it would work by following the same
> > strategy
> > > for "controller.listener.name".
> > > Say the new controller has its "controller.listener.name" set to the
> > value
> > > "CONTROLLER", and broker 1
> > > has picked up this KIP by announcing
> > > "endpoints": [
> > > "CONTROLLER://broker1.example.com:9091",
> > > "INTERNAL://broker1.example.com:9092",
> > > "EXTERNAL://host1.example.com:9093"
> > > ],
> > >
> > > while broker2 has not picked up the change, and is announcing
> > > "endpoints": [
> > > "INTERNAL://broker2.example.com:9092",
> > > "EXTERNAL://host2.example.com:9093"
> > > ],
> > > to support both broker 1 for the new behavior and broker 2 for the old
> > > behavior, it seems the controller must
> > > check their published endpoints. Am I missing something?
> > >
> > > Thanks!
> > > Lucas
> > >
> > > On Mon, Oct 1, 2018 at 6:29 PM Jun Rao  wrote:
> > >
> > > > Hi, Lucas,
> > > >
> > > > Sorry for the delay. The updated wiki looks good to me overall. Just
> a
> > > > couple more minor comments.
> > > >
> > > > 200. kafka.network:name=ControllerRequestQueueSize,
> type=RequestChannel:
> > > The
> > > > name ControllerRequestQueueSize gives the impression that it's only
> for
> > > the
> > > > controller broker. Perhaps we can just rename all metrics and configs
> > > from
> > > > controller to control. This indicates that the threads and the queues
> > are
> > > > for the control requests (as opposed to data requests).
> > > >
> > > > 201. ": In this scenario, the controller
> > will
> > > > have the "controller.listener.name" config set to a value like
> > > > "CONTROLLER", however the broker's exposed endpoints do not have an
> > entry
> > > > corresponding to the new listener name. Hence the controller should
> > > > preserve the existing behavior by determining the endpoint using
> > > > *inter-broker-listener-name *value. The end result should be the same
> > > > behavior as today." Currently, the controller makes connections based
> > on
> > > > its local inter.broker.listener.name config without checking the
> > target
> > > > broker's ZK registration. For consistency, perhaps we can just follow
> > the
> > > > same strategy for controller.listener.name. This existing behavior
> > seems
> > > > simpler to understand and has the benefit of catching inconsistent
> > > configs
> > > > across brokers.
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Mon, Oct 1, 2018 at 8:43 AM, Lucas Wang 
> > > wrote:
> > > >
> > > > > Hi Jun,
> > > > >
> > > > > Sorry to bother you again. Can you please take a look at the wiki
> > again
> > > > > when you have time?
> > > > >
> > > > > Thanks a lot!
> > > > > Lucas
> > > > >
> > > > > On Wed, Sep 19, 2018 at 3:57 PM Lucas Wang 
> > > > wrote:
> > > > >
> > > > > > Hi Jun,
> > > > > >
> > > > > > Thanks a lot for the detailed explanation.
> > > > > > I've restored the wiki to a previous version that does not
> require
> > > > config
> > > > > > changes,
> > > > > > and keeps the current behavior with the proposed changes turned
> off
> > > by
> > > > > > default.
> > > > > > I'd appreciate it if you can review it again.
> > > > > >
> > > > > > Thanks!
> > > > > > Lucas
> > > > > >
> > > > > > On Tue, Sep 18, 2018 at 1:48 PM Jun Rao 
> wrote:
> > > > > >
> > > > > >> Hi, Lucas,
> > > > > >>
> > > > > >> When upgrading to a minor release, I think the expectation is
> > that a
> > > > > user
> > > > > >> wouldn't need to make any config changes, other than the usual
> > > > > >> inter.broker.protocol. If we require other config changes during
> > an
> > > > > >> upgrade, then it's 

[jira] [Resolved] (KAFKA-5117) Kafka Connect REST endpoints reveal Password typed values

2018-10-05 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5117.
--
   Resolution: Duplicate
 Assignee: Ewen Cheslack-Postava
Fix Version/s: 2.0.0

> Kafka Connect REST endpoints reveal Password typed values
> -
>
> Key: KAFKA-5117
> URL: https://issues.apache.org/jira/browse/KAFKA-5117
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Thomas Holmes
>Assignee: Ewen Cheslack-Postava
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.0.0
>
>
> A Kafka Connect connector can specify ConfigDef keys as type of Password. 
> This type was added to prevent logging the values (instead "[hidden]" is 
> logged).
> This change does not apply to the values returned by executing a GET on 
> {{connectors/\{connector-name\}}} and 
> {{connectors/\{connector-name\}/config}}. This creates an easily accessible 
> way for an attacker who has infiltrated your network to gain access to 
> potential secrets that should not be available.
> I have started on a code change that addresses this issue by parsing the 
> config values through the ConfigDef for the connector and returning their 
> output instead (which leads to the masking of Password typed configs as 
> [hidden]).
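
For reference, a runnable sketch of how the Password type masks values via
the public ConfigDef API (the config key "db.password" is made up for the
example):

{code}
import java.util.Collections;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigDef.Importance;
import org.apache.kafka.common.config.ConfigDef.Type;
import org.apache.kafka.common.config.types.Password;

public class PasswordMaskingExample {
    public static void main(String[] args) {
        ConfigDef def = new ConfigDef()
            .define("db.password", Type.PASSWORD, Importance.HIGH, "Database password");
        // parse() converts PASSWORD-typed values into Password instances
        Password p = (Password) def.parse(
            Collections.singletonMap("db.password", "s3cret")).get("db.password");
        System.out.println(p);          // prints: [hidden]
        System.out.println(p.value());  // prints the raw secret
    }
}
{code}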



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS]KIP-216: IQ should throw different exceptions for different errors

2018-10-05 Thread Guozhang Wang
Thanks for the explanation, that makes sense.


Guozhang
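
For readers of the thread, a caller-side sketch of why the retryable /
non-retryable split matters. Today all IQ failures surface as
InvalidStateStoreException; the split into retryable exceptions (e.g. the
proposed StreamThreadRebalancingException) and permanent ones (e.g. the
proposed StreamThreadNotRunningException) is what this KIP would add, so
treat the retry policy below as illustrative only:

import java.time.Duration;
import java.util.function.Supplier;
import org.apache.kafka.streams.errors.InvalidStateStoreException;

public final class IqRetry {
    public static <T> T queryWithRetry(Supplier<T> query, Duration timeout)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeout.toMillis();
        while (true) {
            try {
                return query.get();
            } catch (InvalidStateStoreException e) {
                // with the KIP, only a retryable subclass would be caught
                // here; a permanent one (PENDING_SHUTDOWN / DEAD) should
                // propagate to the caller immediately
                if (System.currentTimeMillis() > deadline)
                    throw e;
                Thread.sleep(100);  // likely REBALANCING; retry shortly
            }
        }
    }
}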


On Mon, Jun 25, 2018 at 2:28 PM, Matthias J. Sax 
wrote:

> The scenario I had I mind was, that KS is started in one thread while a
> second thread has a reference to the object to issue queries.
>
> If a query is issue before the "main thread" started KS, and the "query
> thread" knows that it will eventually get started, it can retry. On the
> other hand, if KS is in state PENDING_SHUTDOWN or DEAD, it is impossible
> to issue any query against it now or in the future and thus the error is
> not retryable.
>
>
> -Matthias
>
> On 6/25/18 10:15 AM, Guozhang Wang wrote:
> > I'm wondering if StreamThreadNotStarted could be merged into
> > StreamThreadNotRunning, because I think users' handling logic for the
> third
> > case would be likely the same as the second. Do you have some scenarios
> > where users may want to handle them differently?
> >
> > Guozhang
> >
> > On Sun, Jun 24, 2018 at 5:25 PM, Matthias J. Sax 
> > wrote:
> >
> >> Sorry to hear! Get well soon!
> >>
> >> It's not a big deal if the KIP stalls a little bit. Feel free to pick it
> >> up again when you find time.
> >>
> > Is `StreamThreadNotRunningException` really a retryable error?
> 
>  When KafkaStream state is REBALANCING, I think it is a retryable
> error.
> 
>  StreamThreadStateStoreProvider#stores() will throw
>  StreamThreadNotRunningException when StreamThread state is not
> >> RUNNING. The
>  user can retry until KafkaStream state is RUNNING.
> >>
> >> I see. If this is the intention, then I would suggest having two (or
> >> maybe three) different exceptions:
> >>
> >>  - StreamThreadRebalancingException (retryable)
> >>  - StreamThreadNotRunning (not retryable -- thrown if in state
> >> PENDING_SHUTDOWN or DEAD)
> >>  - maybe StreamThreadNotStarted (for state CREATED)
> >>
> >> The last one is tricky and could also be merged into one of the first
> >> two, depending on whether you want to argue that it's retryable or not.
> >> (Just food for thought -- not sure what others think.)
> >>
> >>
> >>
> >> -Matthias
> >>
> >> On 6/22/18 8:06 AM, vito jeng wrote:
> >>> Matthias,
> >>>
> >>> Thank you for your assistance.
> >>>
>  what is the status of this KIP?
> >>>
> >>> Unfortunately, there is no further progress.
> >>> About seven weeks ago, I was injured playing sports and broke my left
> >>> wrist.
> >>> Many things were affected, including this KIP and its implementation.
> >>>
> >>>
>  I just re-read it, and have a couple of follow up comments. Why do we
>  discuss the internal exceptions you want to add? Also, do we really
> need
>  them? Can't we just throw the correct exception directly instead of
>  wrapping it later?
> >>>
> >>> I think you may be right. As I said previously:
> >>> "The original idea is that we can distinguish different state store
> >>> exception for different handling. But to be honest, I am not quite sure
> >>> this is necessary. Maybe have some change during implementation."
> >>>
> >>> During the implementation, I also felt we may not need to wrap it.
> >>> We can just throw the correct exception directly.
> >>>
> >>>
>  Is `StreamThreadNotRunningException` really a retryable error?
> >>>
> >>> When KafkaStream state is REBALANCING, I think it is a retryable error.
> >>>
> >>> StreamThreadStateStoreProvider#stores() will throw
> >>> StreamThreadNotRunningException when StreamThread state is not
> RUNNING.
> >> The
> >>> user can retry until KafkaStream state is RUNNING.
> >>>
> >>>
>  When would we throw a `StateStoreEmptyException`? The semantics are
> >>> unclear to me atm.
> >>>
>  When the state is RUNNING, is `StateStoreClosedException` a retryable
> >>> error?
> >>>
> >>> These two comments will be answered in another mail.
> >>>
> >>>
> >>>
> >>> ---
> >>> Vito
> >>>
> >>> On Mon, Jun 11, 2018 at 8:12 AM, Matthias J. Sax <
> matth...@confluent.io>
> >>> wrote:
> >>>
>  Vito,
> 
>  what is the status of this KIP?
> 
>  I just re-read it, and have a couple of follow up comments. Why do we
>  discuss the internal exceptions you want to add? Also, do we really
> need
>  them? Can't we just throw the correct exception directly instead of
>  wrapping it later?
> 
>  When would we throw a `StateStoreEmptyException`? The semantics are
>  unclear to me atm.
> 
>  Is `StreamThreadNotRunningException` really a retryable error?
> 
>  When the state is RUNNING, is `StateStoreClosedException` a retryable
>  error?
> 
>  One more nit: ReadOnlyWindowStore got a new method #fetch(K key, long
>  time); that should be added
> 
> 
>  Overall I like the KIP but some details are still unclear. Maybe it
>  might help if you open a PR in parallel?
> 
> 
>  -Matthias
> 
>  On 4/24/18 8:18 AM, vito jeng wrote:
> > Hi, Guozhang,
> >
> > Thanks for the comment!
> >
> >
> >
> 

[jira] [Resolved] (KAFKA-7266) Fix MetricsTest test flakiness

2018-10-05 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-7266.

   Resolution: Fixed
Fix Version/s: 2.1.0

> Fix MetricsTest test flakiness
> --
>
> Key: KAFKA-7266
> URL: https://issues.apache.org/jira/browse/KAFKA-7266
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Stanislav Kozlovski
>Assignee: Stanislav Kozlovski
>Priority: Minor
> Fix For: 2.1.0, 2.2.0
>
>
> The test `kafka.api.MetricsTest.testMetrics` has been failing intermittently 
> in kafka builds (recent proof: 
> https://github.com/apache/kafka/pull/5436#issuecomment-409683955)
> The particular failure is in the `MessageConversionsTimeMs` metric assertion -
> {code}
> java.lang.AssertionError: Message conversion time not recorded 0.0
> {code}
> There has been work done previously 
> (https://github.com/apache/kafka/pull/4681) to combat the flakiness of this 
> test, and while it improved things, the test still fails sometimes.
> h3. Solution
> On my machine, the test failed 5 times out of 25 runs. Increasing the record 
> size and using compression should slow down message conversion enough to keep 
> it above 1ms. Locally, with those changes, this test has not failed in 200 
> runs and counting.
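> A hedged sketch of that idea (illustrative only, not the actual patch): make 
> the test produce large, compressed records, since compressed batches must be 
> decompressed and recompressed during down-conversion, which adds measurable 
> time.
> {code}
> import java.util.Properties;
> import org.apache.kafka.clients.producer.KafkaProducer;
> import org.apache.kafka.clients.producer.ProducerConfig;
> import org.apache.kafka.clients.producer.ProducerRecord;
> import org.apache.kafka.common.serialization.ByteArraySerializer;
>
> Properties props = new Properties();
> props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
> props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");
> props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
> props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
> // ~100 KB payload per record instead of a few bytes
> try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
>     producer.send(new ProducerRecord<>("test-topic", new byte[100 * 1024]));
> }
> {code}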



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Replacing EasyMock with Mockito in Kafka

2018-10-05 Thread Ismael Juma
So far, it seems like people are in favour. If no objections are presented
in the next couple of days, we will go ahead with the change.

Ismael

On Sun, 30 Sep 2018, 20:32 Ismael Juma,  wrote:

> Hi all,
>
> As described in KAFKA-7438, EasyMock's
> development has stagnated. This presents a number of issues:
>
> 1. Blocks us from running tests with newer Java versions, which is a
> frequent occurrence given the new Java release cadence. It is the main
> blocker in switching Jenkins from Java 10 to Java 11 at the moment.
> 2. Integration with newer testing libraries like JUnit 5 is slow to appear
> (if it appears at all).
> 3. No API improvements. Mockito started as an EasyMock fork, but has
> continued to evolve and, in my opinion, it's more intuitive now.
>
> I think we should switch to Mockito for new tests and to incrementally
> migrate the existing ones as time allows. To make the proposal concrete, I
> went ahead and converted all the tests in the `clients` module:
>
> https://github.com/apache/kafka/pull/5691
>
> I think the updated tests are nicely readable. I also removed PowerMock
> from the `clients` tests as we didn't really need it and its development
> also stagnated a few months ago. I think we can easily remove PowerMock
> elsewhere with the exception of `Connect` where we may need to keep it for
> a while.
>
> Let me know your thoughts. Aside from the general future direction, I'd
> like to get the PR for KAFKA-7439 reviewed and merged soonish as merge
> conflicts will creep in quickly.
>
> Ismael
>
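For anyone who hasn't used both, the stylistic difference looks roughly like
this (a hypothetical test fragment, not taken from the PR):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

interface TimeSource { long nanoseconds(); }

// EasyMock: record expectations up front, replay, then verify everything:
//   TimeSource time = EasyMock.createMock(TimeSource.class);
//   EasyMock.expect(time.nanoseconds()).andReturn(42L);
//   EasyMock.replay(time);
//   ... exercise the code under test ...
//   EasyMock.verify(time);

// Mockito: stub what you need, exercise, then verify only what matters:
TimeSource time = mock(TimeSource.class);
when(time.nanoseconds()).thenReturn(42L);
// ... exercise the code under test ...
verify(time).nanoseconds();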


Build failed in Jenkins: kafka-trunk-jdk8 #3070

2018-10-05 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-7476: Fix Date-based types in SchemaProjector

--
[...truncated 2.72 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: kafka-2.1-jdk8 #3

2018-10-05 Thread Apache Jenkins Server
See 

--
[...truncated 417.35 KB...]

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
PASSED

kafka.controller.PartitionStateMachineTest > 
testNonexistentPartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNonexistentPartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionErrorCodeFromCreateStates STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionErrorCodeFromCreateStates PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOfflineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOfflineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > testUpdatingOfflinePartitionsCount 
STARTED

kafka.controller.PartitionStateMachineTest > testUpdatingOfflinePartitionsCount 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion STARTED

kafka.controller.PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
PASSED

kafka.controller.PartitionStateMachineTest > 
testNoOfflinePartitionsChangeForTopicsBeingDeleted STARTED

kafka.controller.PartitionStateMachineTest > 
testNoOfflinePartitionsChangeForTopicsBeingDeleted PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition PASSED


Build failed in Jenkins: kafka-0.11.0-jdk7 #405

2018-10-05 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-7476: Fix Date-based types in SchemaProjector

--
[...truncated 152.01 KB...]
kafka.api.SaslSslAdminClientIntegrationTest > testInvalidAlterConfigs STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testInvalidAlterConfigs PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testClose STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testClose PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testMinimumRequestTimeouts STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testMinimumRequestTimeouts PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testForceClose STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testForceClose PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testListNodes STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testListNodes PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testDelayedClose STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testDelayedClose PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testCreateDeleteTopics STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testCreateDeleteTopics PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeCluster STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeCluster PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeNonExistingTopic 
STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeNonExistingTopic 
PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeAndAlterConfigs 
STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeAndAlterConfigs PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testCallInFlightTimeouts STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testCallInFlightTimeouts PASSED

kafka.api.LogAppendTimeTest > testProduceConsume STARTED

kafka.api.LogAppendTimeTest > testProduceConsume PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII STARTED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII STARTED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.ProducerBounceTest > testBrokerFailure STARTED

kafka.api.ProducerBounceTest > testBrokerFailure SKIPPED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic STARTED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList STARTED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas STARTED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > 
testResponseTooLargeForReplicationWithAckAll STARTED

kafka.api.ProducerFailureHandlingTest > 
testResponseTooLargeForReplicationWithAckAll PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic STARTED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition STARTED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed STARTED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > 
testPartitionTooLargeForReplicationWithAckAll STARTED

kafka.api.ProducerFailureHandlingTest > 
testPartitionTooLargeForReplicationWithAckAll PASSED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown STARTED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.SaslScramSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SaslScramSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslScramSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslScramSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SaslScramSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslScramSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslScramSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SaslScramSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.SaslScramSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED


Re: [DISCUSS]KIP-216: IQ should throw different exceptions for different errors

2018-10-05 Thread vito jeng
Matthias,

> Sorry to hear! Get well soon!

I'm back! :)

Based on the many suggestions and discussion, I have rewritten the KIP.
Please take a look:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-216%3A+IQ+should+throw+different+exceptions+for+different+errors


> I see. If this is the intention, then I would suggest having two (or
>  maybe three) different exceptions:
>  - StreamThreadRebalancingException (retryable)
>  - StreamThreadNotRunning (not retryable -- thrown if in state
> PENDING_SHUTDOWN or DEAD)
>  - maybe StreamThreadNotStarted (for state CREATED)
> The last one is tricky and could also be merged into one of the first
> two, depending on whether you want to argue that it's retryable or not.
> (Just food for thought -- not sure what others think.)

I agree with your suggestion, really helpful.


> Overall I like the KIP but some details are still unclear. Maybe it
> might help if you open an PR in parallel?

That would help further discussion a lot.
I will complete the implementation in a few days and then create a PR.
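
For concreteness, a minimal sketch of how the proposed split and the
caller-side retry could look (class names as discussed in this thread;
using InvalidStateStoreException as the common parent is my assumption,
not final API):

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class StreamThreadRebalancingException extends InvalidStateStoreException {
    public StreamThreadRebalancingException(final String message) {
        super(message); // retryable: thrown while the state is REBALANCING
    }
}

public class StreamThreadNotRunningException extends InvalidStateStoreException {
    public StreamThreadNotRunningException(final String message) {
        super(message); // not retryable: thrown in PENDING_SHUTDOWN or DEAD
    }
}

// Illustrative caller-side handling of the retryable case:
static ReadOnlyKeyValueStore<String, Long> waitForStore(final KafkaStreams streams)
        throws InterruptedException {
    while (true) {
        try {
            return streams.store("counts",
                QueryableStoreTypes.<String, Long>keyValueStore());
        } catch (final StreamThreadRebalancingException e) {
            Thread.sleep(100L); // rebalance in progress; retry until RUNNING
        }
    }
}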


---
Vito


On Mon, Jun 25, 2018 at 8:25 AM Matthias J. Sax 
wrote:

> Sorry to hear! Get well soon!
>
> It's not a big deal if the KIP stalls a little bit. Feel free to pick it
> up again when you find time.
>
> >>> Is `StreamThreadNotRunningException` really an retryable error?
> >>
> >> When KafkaStream state is REBALANCING, I think it is a retryable error.
> >>
> >> StreamThreadStateStoreProvider#stores() will throw
> >> StreamThreadNotRunningException when StreamThread state is not RUNNING.
> The
> >> user can retry until KafkaStream state is RUNNING.
>
> I see. If this is the intention, then I would suggest having two (or
> maybe three) different exceptions:
>
>  - StreamThreadRebalancingException (retryable)
>  - StreamThreadNotRunning (not retryable -- thrown if in state
> PENDING_SHUTDOWN or DEAD)
>  - maybe StreamThreadNotStarted (for state CREATED)
>
> The last one is tricky and could also be merged into one of the first
> two, depending on whether you want to argue that it's retryable or not.
> (Just food for thought -- not sure what others think.)
>
>
>
> -Matthias
>
> On 6/22/18 8:06 AM, vito jeng wrote:
> > Matthias,
> >
> > Thank you for your assistance.
> >
> >> what is the status of this KIP?
> >
> > Unfortunately, there is no further progress.
> > About seven weeks ago I was injured playing sports and broke my left
> > wrist.
> > Much of my work has been affected, including this KIP and its
> > implementation.
> >
> >
> >> I just re-read it, and have a couple of follow up comments. Why do we
> >> discuss the internal exceptions you want to add? Also, do we really need
> >> them? Can't we just throw the correct exception directly instead of
> >> wrapping it later?
> >
> > I think you may be right. As I said previously:
> > "The original idea is that we can distinguish different state store
> > exception for different handling. But to be honest, I am not quite sure
> > this is necessary. Maybe have some change during implementation."
> >
> > During the implementation, I also felt we may not need to wrap it.
> > We can just throw the correct exception directly.
> >
> >
> >> Is `StreamThreadNotRunningException` really a retryable error?
> >
> > When KafkaStream state is REBALANCING, I think it is a retryable error.
> >
> > StreamThreadStateStoreProvider#stores() will throw
> > StreamThreadNotRunningException when StreamThread state is not RUNNING.
> The
> > user can retry until KafkaStream state is RUNNING.
> >
> >
> >> When would we throw a `StateStoreEmptyException`? The semantics are
> > unclear to me atm.
> >
> >> When the state is RUNNING, is `StateStoreClosedException` a retryable
> > error?
> >
> > These two comments will be answered in another mail.
> >
> >
> >
> > ---
> > Vito
> >
> > On Mon, Jun 11, 2018 at 8:12 AM, Matthias J. Sax 
> > wrote:
> >
> >> Vito,
> >>
> >> what is the status of this KIP?
> >>
> >> I just re-read it, and have a couple of follow up comments. Why do we
> >> discuss the internal exceptions you want to add? Also, do we really need
> >> them? Can't we just throw the correct exception directly instead of
> >> wrapping it later?
> >>
> >> When would we throw a `StateStoreEmptyException`? The semantics are
> >> unclear to me atm.
> >>
> >> Is `StreamThreadNotRunningException` really a retryable error?
> >>
> >> When the state is RUNNING, is `StateStoreClosedException` a retryable
> >> error?
> >>
> >> One more nit: ReadOnlyWindowStore got a new method #fetch(K key, long
> >> time); that should be added.
> >>
> >>
> >> Overall I like the KIP but some details are still unclear. Maybe it
> >> might help if you open a PR in parallel?
> >>
> >>
> >> -Matthias
> >>
> >> On 4/24/18 8:18 AM, vito jeng wrote:
> >>> Hi, Guozhang,
> >>>
> >>> Thanks for the comment!
> >>>
> >>>
> >>>
> >>> Hi, Bill,
> >>>
> >>> I'll try to make some update to make the KIP better.
> >>>
> >>> Thanks for the comment!
> >>>
> >>>
> >>> ---
> >>> Vito
> >>>
> >>> On 

Build failed in Jenkins: kafka-2.0-jdk8 #162

2018-10-05 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-7476: Fix Date-based types in SchemaProjector

--
[...truncated 434.11 KB...]

kafka.controller.ControllerIntegrationTest > testEmptyCluster STARTED

kafka.controller.ControllerIntegrationTest > testEmptyCluster PASSED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
STARTED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
PASSED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
STARTED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
PASSED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent STARTED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrLive STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrLive PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionLastIsrShuttingDown STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionLastIsrShuttingDown PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaInIsrNotLive STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaInIsrNotLive PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED


Build failed in Jenkins: kafka-2.1-jdk8 #2

2018-10-05 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-7476: Fix Date-based types in SchemaProjector

--
[...truncated 418.04 KB...]
kafka.cluster.BrokerEndPointTest > testEndpointFromUri PASSED

kafka.cluster.BrokerEndPointTest > testHashAndEquals STARTED

kafka.cluster.BrokerEndPointTest > testHashAndEquals PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNoRack STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNoRack PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonFutureVersion STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonFutureVersion PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNullRack STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV4WithNullRack PASSED

kafka.cluster.BrokerEndPointTest > testBrokerEndpointFromUri STARTED

kafka.cluster.BrokerEndPointTest > testBrokerEndpointFromUri PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV1 STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV1 PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV2 STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV2 PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV3 STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV3 PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.UncleanLeaderElectionTest > 
testTopicUncleanLeaderElectionEnable STARTED

kafka.integration.UncleanLeaderElectionTest > 
testTopicUncleanLeaderElectionEnable PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride SKIPPED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
SKIPPED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.tools.MirrorMakerIntegrationTest > testCommaSeparatedRegex STARTED

kafka.tools.MirrorMakerIntegrationTest > testCommaSeparatedRegex FAILED
kafka.tools.MirrorMaker$NoRecordsException
at 
kafka.tools.MirrorMaker$ConsumerWrapper.receive(MirrorMaker.scala:483)
at 
kafka.tools.MirrorMakerIntegrationTest$$anonfun$testCommaSeparatedRegex$1.apply$mcZ$sp(MirrorMakerIntegrationTest.scala:92)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:738)
at 
kafka.tools.MirrorMakerIntegrationTest.testCommaSeparatedRegex(MirrorMakerIntegrationTest.scala:90)

kafka.tools.MirrorMakerIntegrationTest > 
testCommitOffsetsRemoveNonExistentTopics STARTED

kafka.tools.MirrorMakerIntegrationTest > 
testCommitOffsetsRemoveNonExistentTopics PASSED

kafka.tools.MirrorMakerIntegrationTest > testCommitOffsetsThrowTimeoutException 
STARTED

kafka.tools.MirrorMakerIntegrationTest > testCommitOffsetsThrowTimeoutException 
PASSED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum 
STARTED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigs STARTED

kafka.tools.ConsoleProducerTest > testValidConfigs PASSED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage STARTED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandlerWithHeaders 
STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandlerWithHeaders 
PASSED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody PASSED


[jira] [Created] (KAFKA-7484) Fix test SuppressionDurabilityIntegrationTest.shouldRecoverBufferAfterShutdown()

2018-10-05 Thread Dong Lin (JIRA)
Dong Lin created KAFKA-7484:
---

 Summary: Fix test 
SuppressionDurabilityIntegrationTest.shouldRecoverBufferAfterShutdown()
 Key: KAFKA-7484
 URL: https://issues.apache.org/jira/browse/KAFKA-7484
 Project: Kafka
  Issue Type: Improvement
Reporter: Dong Lin


The test 
SuppressionDurabilityIntegrationTest.shouldRecoverBufferAfterShutdown() fails 
in the 2.1.0 branch Jenkins job. See 
[https://builds.apache.org/job/kafka-2.1-jdk8/1/testReport/junit/org.apache.kafka.streams.integration/SuppressionDurabilityIntegrationTest/shouldRecoverBufferAfterShutdown_1__eosEnabled_true_/.|https://builds.apache.org/job/kafka-2.1-jdk8/1/testReport/junit/org.apache.kafka.streams.integration/SuppressionDurabilityIntegrationTest/shouldRecoverBufferAfterShutdown_1__eosEnabled_true_/]

Here is the stack trace: 

java.lang.AssertionError: Condition not met within timeout 3. Did not receive all 3 records from topic output-raw-shouldRecoverBufferAfterShutdown
    at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:278)
    at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinRecordsReceived(IntegrationTestUtils.java:462)
    at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinRecordsReceived(IntegrationTestUtils.java:343)
    at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.verifyKeyValueTimestamps(IntegrationTestUtils.java:543)
    at org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest.verifyOutput(SuppressionDurabilityIntegrationTest.java:239)
    at org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest.shouldRecoverBufferAfterShutdown(SuppressionDurabilityIntegrationTest.java:206)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.junit.runners.Suite.runChild(Suite.java:128)
    at org.junit.runners.Suite.runChild(Suite.java:27)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
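
For context, the assertion at the top of the trace comes from a polling helper 
along these lines (a hedged sketch of the pattern only; the real 
org.apache.kafka.test.TestUtils differs in details):
{code}
import java.util.function.BooleanSupplier;

final class WaitUtil {
    // Poll until the condition holds or the deadline passes; on timeout, throw
    // the kind of AssertionError seen above ("Condition not met within timeout").
    static void waitForCondition(BooleanSupplier condition, long maxWaitMs,
                                 String details) throws InterruptedException {
        final long deadline = System.currentTimeMillis() + maxWaitMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError(
                    "Condition not met within timeout " + maxWaitMs + ". " + details);
            }
            Thread.sleep(100L); // poll interval
        }
    }
}
{code}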

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-0.10.2-jdk7 #235

2018-10-05 Thread Apache Jenkins Server
See 


Changes:

[me] KAFKA-7476: Fix Date-based types in SchemaProjector

--
[...truncated 1.31 MB...]

org.apache.kafka.streams.processor.internals.StateConsumerTest > 
shouldFlushStoreWhenFlushIntervalHasLapsed STARTED

org.apache.kafka.streams.processor.internals.StateConsumerTest > 
shouldFlushStoreWhenFlushIntervalHasLapsed PASSED

org.apache.kafka.streams.processor.internals.StateConsumerTest > 
shouldUpdateStateWithReceivedRecordsForAllTopicPartition STARTED

org.apache.kafka.streams.processor.internals.StateConsumerTest > 
shouldUpdateStateWithReceivedRecordsForAllTopicPartition PASSED

org.apache.kafka.streams.processor.internals.StateConsumerTest > 
shouldAssignPartitionsToConsumer STARTED

org.apache.kafka.streams.processor.internals.StateConsumerTest > 
shouldAssignPartitionsToConsumer PASSED

org.apache.kafka.streams.processor.internals.StateConsumerTest > 
shouldCloseStateMaintainer STARTED

org.apache.kafka.streams.processor.internals.StateConsumerTest > 
shouldCloseStateMaintainer PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldNotNullPointerWhenStandbyTasksAssignedAndNoStateStoresForTopology STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldNotNullPointerWhenStandbyTasksAssignedAndNoStateStoresForTopology PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testInjectClients STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testInjectClients PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldCloseActiveTasksThatAreAssignedToThisStreamThreadButAssignmentHasChangedBeforeCreatingNewTasks
 STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldCloseActiveTasksThatAreAssignedToThisStreamThreadButAssignmentHasChangedBeforeCreatingNewTasks
 PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldNotViolateAtLeastOnceWhenExceptionOccursDuringFlushStateWhileSuspendingState
 STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldNotViolateAtLeastOnceWhenExceptionOccursDuringFlushStateWhileSuspendingState
 PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldInitializeRestoreConsumerWithOffsetsFromStandbyTasks STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldInitializeRestoreConsumerWithOffsetsFromStandbyTasks PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldReleaseStateDirLockIfFailureOnStandbyTaskCloseForUnassignedSuspendedStandbyTask
 STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldReleaseStateDirLockIfFailureOnStandbyTaskCloseForUnassignedSuspendedStandbyTask
 PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testHandingOverTaskFromOneToAnotherThread STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testHandingOverTaskFromOneToAnotherThread PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeCommit 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeCommit 
PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testStateChangeStartClose STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testStateChangeStartClose PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldNotViolateAtLeastOnceWhenAnExceptionOccursOnTaskCloseDuringShutdown 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldNotViolateAtLeastOnceWhenAnExceptionOccursOnTaskCloseDuringShutdown PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldNotViolateAtLeastOnceWhenExceptionOccursDuringCloseTopologyWhenSuspendingState
 STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldNotViolateAtLeastOnceWhenExceptionOccursDuringCloseTopologyWhenSuspendingState
 PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldReleaseStateDirLockIfFailureOnTaskCloseForUnassignedSuspendedTask STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldReleaseStateDirLockIfFailureOnTaskCloseForUnassignedSuspendedTask PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMetrics 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMetrics 
PASSED


Re: [DISCUSSION] KIP-376: Implement AutoClosable on appropriate classes that has close()

2018-10-05 Thread Chia-Ping Tsai
> However, it's a good hint that `AutoCloseable#close()` declares `throws
> Exception` and thus it seems to be a backward incompatible change.
> Hence, I am not sure if we can actually move forward easily with this KIP.

It is legal to remove the "throws Exception" declaration from a class that 
extends AutoCloseable, so the method signature won't be changed incompatibly. 
Hence it should be fine for backward compatibility.
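
A minimal illustration of that point (a sketch; the interface name is made up):

// A subtype may narrow the throws clause of AutoCloseable#close():
public interface PollableSource extends AutoCloseable {
    @Override
    void close(); // legal override: "throws Exception" is dropped, so callers
                  // of the subtype never have to handle a checked exception
}

And try-with-resources keeps working for such subtypes, since it only requires
AutoCloseable.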

On 2018/09/30 20:19:57, "Matthias J. Sax"  wrote: 
> Closeable is part of `java.io` while AutoCloseable is part of
> `java.lang`. Thus, the second one is more generic. Also, the JavaDoc points
> out that `Closeable#close()` must be idempotent while
> `AutoCloseable#close()` is not required to be.
> 
> Thus, I am not sure atm which one suits better.
> 
> However, it's a good hint that `AutoCloseable#close()` declares `throws
> Exception` and thus it seems to be a backward incompatible change.
> Hence, I am not sure if we can actually move forward easily with this KIP.
> 
> Nit: `RecordCollectorImpl` is an internal class that implements
> `RecordCollector` -- should `RecordCollector` extend `AutoCloseable` instead?
> 
> 
> -Matthias
> 
> 
> On 9/27/18 7:46 PM, Chia-Ping Tsai wrote:
> >> (Although I am not quite sure
> >> when one is more desirable than the other)
> > 
> > Most of Kafka's classes implementing Closeable/AutoCloseable don't throw a 
> > checked exception in the close() method. Perhaps we should have a 
> > "KafkaCloseable" interface whose close() method doesn't throw any 
> > checked exception...
> > 
> > On 2018/09/27 19:11:20, Yishun Guan  wrote: 
> >> Hi All,
> >>
> >> Chia-Ping, I agree: similar to VerifiableConsumer, VerifiableProducer
> >> should implement Closeable as well (although I am not quite sure
> >> when one is more desirable than the other). Also, I just looked through
> >> your list -- these are some great additions; I will add them to the
> >> list.
> >>
> >> Thanks,
> >> Yishun
> >> On Thu, Sep 27, 2018 at 3:26 AM Dongjin Lee  wrote:
> >>>
> >>> Hi Yishun,
> >>>
> >>> Thank you for your great KIP. In fact, I have also encountered several
> >>> cases where AutoCloseable was sorely needed! Let me inspect more
> >>> candidate classes as well.
> >>>
> >>> +1. I also refined your KIP a little bit.
> >>>
> >>> Best,
> >>> Dongjin
> >>>
> >>> On Thu, Sep 27, 2018 at 12:21 PM Chia-Ping Tsai  
> >>> wrote:
> >>>
>  hi Yishun
> 
>  Thanks for the nice KIP!
> 
>  Q1)
>  Why does VerifiableProducer extend Closeable rather than AutoCloseable?
> 
>  Q2)
>  I grepped the project and noticed there are other classes with close()
>  methods that do not implement AutoCloseable.
>  For example:
>  1) WorkerConnector
>  2) MemoryRecordsBuilder
>  3) MetricsReporter
>  4) ExpiringCredentialRefreshingLogin
>  5) KafkaChannel
>  6) ConsumerInterceptor
>  7) SelectorMetrics
>  8) HeartbeatThread
> 
>  Cheers,
>  Chia-Ping
> 
> 
>  On 2018/09/26 23:44:31, Yishun Guan  wrote:
> > Hi All,
> >
> > Here is a trivial KIP:
> >
>  https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=93325308
> >
> > Suggestions are welcome.
> >
> > Thanks,
> > Yishun
> >
> 
> >>>
> >>>
> >>> --
> >>> *Dongjin Lee*
> >>>
> >>> *A hitchhiker in the mathematical world.*
> >>>
> >>> *github:  github.com/dongjinleekr
> >>> linkedin: kr.linkedin.com/in/dongjinleekr
> >>> slideshare:
> >>> www.slideshare.net/dongjinleekr
> >>> *
> >>
> 
> 


Jenkins build is back to normal : kafka-0.10.1-jdk7 #141

2018-10-05 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.1-jdk8 #1

2018-10-05 Thread Apache Jenkins Server
See 

--
[...truncated 2.51 MB...]
org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldNotAllowInvalidStoreNameOnReduce STARTED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldNotAllowInvalidStoreNameOnReduce PASSED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldThrowNullPointerOnReduceWhenSubtractorIsNull STARTED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldThrowNullPointerOnReduceWhenSubtractorIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedReduceIfReducerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedReduceIfReducerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfInitializerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfInitializerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfMaterializedIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfMaterializedIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfMergerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfMergerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeCount STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeCount PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeAggregated STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeAggregated PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnCountIfMaterializedIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnCountIfMaterializedIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfAggregatorIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfAggregatorIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnReduceIfReducerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnReduceIfReducerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldAggregateSessionWindowed STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldAggregateSessionWindowed PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldCountSessionWindowed STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldCountSessionWindowed PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfAggregatorIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfAggregatorIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfInitializerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfInitializerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfMergerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfMergerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedReduceIfMaterializedIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedReduceIfMaterializedIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeReduced STARTED


Build failed in Jenkins: kafka-trunk-jdk8 #3069

2018-10-05 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 3209, done.
remote: Counting objects: 0% .. 55% of 3209 [progress output elided; log truncated]

Build failed in Jenkins: kafka-2.0-jdk8 #161

2018-10-05 Thread Apache Jenkins Server
See 

--
Started by an SCM change
Started by an SCM change
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H29 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 42f07849917fadb444802add590e7bb9ca4f6ba2
remote: Enumerating objects: 3087, done.
remote: Counting objects: 0% .. 55% (1698/3087) [progress output elided; log truncated]

Jenkins build is back to normal : kafka-1.1-jdk7 #217

2018-10-05 Thread Apache Jenkins Server
See